Quantized inference code for LLaMA models
Why do you think that https://github.com/markasoftware/llama-cpu is a good alternative to llama-int8?
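For context on what llama-int8 refers to: int8 quantization stores model weights as 8-bit integers plus a float scale, trading a small amount of accuracy for roughly 4x less memory than float32. A minimal sketch of symmetric per-tensor int8 quantization (this is an illustrative assumption, not the actual code from either repository):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric per-tensor quantization: the scale maps the largest
    # absolute weight onto the int8 range [-127, 127].
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
# Rounding error per weight is at most half a quantization step.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

A CPU-oriented port like llama-cpu, by contrast, typically keeps float weights and instead targets running without a GPU, so the two projects address different constraints (memory footprint vs. hardware availability).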