A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
Why do you think that https://github.com/qwopqwop200/GPTQ-for-LLaMa is a good alternative to exllama?