To speed up LLM inference and enhance the model's perception of key information, LLMLingua compresses the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
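As a quick illustration, here is a minimal sketch of prompt compression with the `llmlingua` package, following the usage pattern from the project README (`pip install llmlingua`). The prompt text, instruction, question, and token budget are placeholder assumptions, not values from this document:

```python
# Minimal sketch of prompt compression with LLMLingua.
# Placeholder inputs below are illustrative only; see the project README
# for currently supported models and options.
from llmlingua import PromptCompressor

compressor = PromptCompressor()  # loads the default small LM used for compression

long_prompt = "..."  # e.g. retrieved documents or a long chat history

result = compressor.compress_prompt(
    long_prompt,
    instruction="Answer the question using only the context above.",
    question="What are the key findings?",
    target_token=200,  # token budget for the compressed prompt
)

print(result["compressed_prompt"])  # the shortened prompt to send to the LLM
print(result["ratio"])              # reported compression ratio, e.g. "11.2x"
```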