chatgpt-memory
Lets you scale the ChatGPT API to multiple simultaneous sessions with infinite contextual and adaptive memory, powered by GPT and a Redis datastore. (by continuum-llms)
GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index. (by zilliztech)
| | chatgpt-memory | GPTCache |
|---|---|---|
| Mentions | 2 | 43 |
| Stars | 497 | 6,446 |
| Growth | - | 2.1% |
| Activity | 10.0 | 7.7 |
| Last commit | 7 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
chatgpt-memory
Posts with mentions or reviews of chatgpt-memory. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-22.
- ChatGPT like program that is trainable?
  These are a couple of open-source options you can look into: https://github.com/pashpashpash/vault-ai and https://github.com/continuum-llms/chatgpt-memory
- Writer here - anyone not impressed by ChatGPT 4?
  You could get an API and use something like this.
GPTCache
Posts with mentions or reviews of GPTCache. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-05.
- Ask HN: What are the drawbacks of caching LLM responses?
  Just found this: https://github.com/zilliztech/GPTCache, which seems to address this idea/issue.
- Open Source Advent Fun Wraps Up!
  21. GPTCache | GitHub | tutorial - Semantic Cache
- Show HN: Danswer – open-source question answering across all your docs
  Check this out: https://osschat.io/, built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache).
- Ask HN: Is LLM Caching Necessary?
  With the proliferation of large models, an increasing number of enterprises and individual developers are building applications on top of them. It is therefore worth asking whether caching model responses is necessary during development.
  Our project: https://github.com/zilliztech/GPTCache
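One drawback the HN thread raises is staleness: a cached answer can outlive its usefulness. A common mitigation, independent of any particular library, is a time-to-live on each entry. A minimal exact-match sketch (class and parameter names are hypothetical, not GPTCache's API):

```python
import time

class TTLCache:
    """Exact-match response cache with expiry, so stale LLM answers age out."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self.store = {}  # prompt -> (answer, stored_at)

    def get(self, prompt: str):
        hit = self.store.get(prompt)
        if hit is None:
            return None
        answer, stored_at = hit
        if time.monotonic() - stored_at > self.ttl:
            del self.store[prompt]   # expired: force a fresh model call
            return None
        return answer

    def put(self, prompt: str, answer: str):
        self.store[prompt] = (answer, time.monotonic())

c = TTLCache(ttl_seconds=0.05)
c.put("q", "cached answer")
print(c.get("q"))   # fresh -> "cached answer"
time.sleep(0.1)
print(c.get("q"))   # past TTL -> None, caller re-queries the model
```

The TTL trades cost savings against freshness: a longer TTL means fewer model calls but a higher chance of serving outdated answers.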
- Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
  Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive: similar queries become cheaper to answer and are served faster. Of course, the specific configuration depends on real usage scenarios.
- Limited budget or machine resources, how to achieve a decent LLM experience?