4-bit quantization of LLaMA using GPTQ
Why do you think https://github.com/turboderp/exllama is a good alternative to GPTQ-for-LLaMa?
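For context, a minimal sketch of what producing such a 4-bit GPTQ checkpoint looks like. This example uses the AutoGPTQ library rather than GPTQ-for-LLaMa's own `llama.py` script; the model path, calibration text, and output directory are placeholders, and real runs use a few hundred calibration samples (e.g. from C4 or WikiText-2).

```python
# Sketch only: assumes the AutoGPTQ library; paths and calibration data are placeholders.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "path/to/llama-7b-hf"  # placeholder: a local LLaMA checkpoint in HF format
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# 4-bit weights with group size 128, the settings most GPTQ LLaMA builds use.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

# Tiny calibration set for illustration; use several hundred samples in practice.
calibration = [
    tokenizer(
        "GPTQ quantizes weights layer by layer against a small calibration set.",
        return_tensors="pt",
    )
]

model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(calibration)                 # run the GPTQ algorithm
model.save_quantized("llama-7b-4bit-128g")  # placeholder output directory
```

The resulting 4-bit checkpoint is the kind of artifact that loaders such as GPTQ-for-LLaMa or exllama consume at inference time; the difference between the two projects is mainly in how they run that checkpoint, not in how it is produced.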