Fork of Facebook's LLaMA model to run on CPU
Why do you think https://github.com/tloen/llama-int8 is a good alternative to llama-cpu?