Quantized inference code for LLaMA models
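The core idea behind int8 quantized inference is to store weights as 8-bit integers plus a per-row scale, then dequantize (or compute in int8) at runtime. The sketch below is illustrative only, assuming simple symmetric per-row quantization; it is not the actual llama-int8 implementation, which builds on PyTorch.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-row quantization: map each row's max magnitude to 127."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float32 weight matrix from int8 values and scales."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and check the reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.abs(w - w_hat).max())  # small, bounded by half the per-row scale
```

This halves memory versus fp16 (and quarters it versus fp32) at the cost of a small, bounded rounding error per weight.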
Why do you think https://github.com/modular-ml/wrapyfi-examples_llama is a good alternative to llama-int8?