KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
Why do you think https://github.com/hiyouga/LLaMA-Factory is a good alternative to KVQuant?