Self-extracting GPT prompts for ~70% token savings
Why do you think https://github.com/run-llama/llama_index is a good alternative to compress-gpt?