compress-gpt
Self-extracting GPT prompts for ~70% token savings (by yasyf)
Text_summary
By gnuconcepts
| | compress-gpt | Text_summary |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 218 | 3 |
| Growth | - | - |
| Activity | 4.0 | 2.6 |
| Last commit | about 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | - | MIT License |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
compress-gpt
Posts with mentions or reviews of compress-gpt.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-01.
- Ask HN: Have you seen an effective method for compressing GPT prompts?
I'm interested in something that can take frequently used system prompts of mine and compress them in a way that still gets the same results with different user messages.
Does this exist?
Things I've seen:
https://github.com/yasyf/compress-gpt
https://news.ycombinator.com/item?id=35626433
https://news.ycombinator.com/item?id=35488291
- Ask HN: Bypassing GPT-4 8k tokens limit
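compress-gpt's tagline describes "self-extracting" prompts: the compressed prompt travels with an instruction telling the model to reconstruct its full meaning before following it. The project's actual prompts and API are not shown on this page, so the following is only a loose conceptual sketch (all names and wording are hypothetical, and the ~4-characters-per-token figure is just a rough heuristic for English text, not a real tokenizer):

```python
# Conceptual sketch only -- NOT compress-gpt's implementation.
# Idea: ship a compressed system prompt plus a short instruction
# telling the model to expand it before acting on it.

SELF_EXTRACT_HEADER = (  # hypothetical wording
    "The following system prompt was compressed. "
    "Silently reconstruct its full meaning, then follow it:\n\n"
)

def wrap_compressed(compressed_prompt: str) -> str:
    """Prepend the self-extraction instruction to a compressed prompt."""
    return SELF_EXTRACT_HEADER + compressed_prompt

def approx_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)

def savings(original: str, compressed: str) -> float:
    """Fraction of tokens saved by the compressed version."""
    return 1 - approx_tokens(compressed) / approx_tokens(original)
```

For a real measurement you would count tokens with the model's actual tokenizer (e.g. tiktoken for OpenAI models) rather than the character heuristic above.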
Text_summary
Posts with mentions or reviews of Text_summary.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-01.
- Ask HN: Bypassing GPT-4 8k tokens limit
you could use NLTK to summarize the text before you send it to GPT-4.
I have a script that uses NLTK to do this. It needs to be cleaned up, but it could be a starting point.
https://github.com/gnuconcepts/Text_summary
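The frequency-based extractive summarization the comment describes can be sketched in a few lines. This is a dependency-free stand-in (regex sentence splitting instead of NLTK's tokenizers, which the linked script presumably uses), not the gnuconcepts/Text_summary code itself:

```python
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    """Extractive summary: keep the sentences whose words occur most
    frequently across the whole text, in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Word frequencies over the full text (lowercased, letters only)
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence: str) -> int:
        # A sentence scores the sum of its words' global frequencies
        return sum(freq[w] for w in re.findall(r'[a-z]+', sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return ' '.join(s for s in sentences if s in top)
```

The appeal for the 8k-token problem is that an extractive summary keeps original sentences verbatim, so nothing sent to GPT-4 is paraphrased; the trade-off is that dropped sentences are lost entirely.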
What are some alternatives?
When comparing compress-gpt and Text_summary you can also consider the following projects:
flash-attention - Fast and memory-efficient exact attention
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
langchain - 🦜🔗 Build context-aware reasoning applications
llama_index - LlamaIndex is a data framework for your LLM applications