A fast inference library for running LLMs locally on modern consumer-class GPUs
Why do you think https://github.com/FasterDecoding/Medusa is a good alternative to exllamav2?