Fork of Facebook's LLaMA model to run on CPU
Why do you think that https://github.com/modular-ml/wrapyfi-examples_llama is a good alternative to llama-cpu?