gptme
A CLI and web UI to interact with LLMs in a chat-style interface, with code execution capabilities and other tools. (by ErikBjare)
GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index. (by zilliztech)
| | gptme | GPTCache |
|---|---|---|
| Mentions | 2 | 43 |
| Stars | 243 | 6,550 |
| Growth (stars, month-over-month) | - | 2.2% |
| Activity | 9.4 | 7.7 |
| Last commit | 12 days ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
gptme
Posts with mentions or reviews of gptme. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-01.
- Fine-tuning Local LLMs for "Code Interpreter" use: Seeking Experience and Insights
  I'm building a little code+chat CLI called gptme that aims to use local LLMs to mimic the functionality of OpenAI's "Advanced Data Analysis" (formerly known as "Code Interpreter"). It is similar in spirit to the more popular open-interpreter, which some of you might have heard of.
- Show HN: GPTMe, a CLI to interact with LLMs, able to execute code locally
GPTCache
Posts with mentions or reviews of GPTCache. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-05.
- Ask HN: What are the drawbacks of caching LLM responses?
  Just found this: https://github.com/zilliztech/GPTCache, which seems to address this idea/issue. (A minimal quickstart sketch follows after this list.)
- Open Source Advent Fun Wraps Up!
  21. GPTCache | GitHub | tutorial
- Semantic Cache
- Show HN: Danswer – open-source question answering across all your docs
  Check this out: https://osschat.io/, built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache).
- GPTCache
- Ask HN: Is LLM Caching Necessary?
  With the proliferation of large models, an increasing number of enterprises and individual developers are building applications on top of them, so it is worth asking whether caching model responses is necessary during development. Our project: https://github.com/zilliztech/GPTCache
- Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
  Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive: similar queries become cheaper and can be answered faster. Of course, the right configuration depends on the actual usage scenario. (See the similarity-cache sketch after this list.)
- Limited budget or machine resources, how to achieve a decent LLM experience?
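To make the caching discussion in the mentions above concrete, here is a minimal sketch of GPTCache used as a drop-in replacement for the OpenAI client, based on the project's quickstart. It assumes the `gptcache` package is installed, a pre-1.0 `openai` client, and an `OPENAI_API_KEY` in the environment.

```python
# Minimal GPTCache quickstart (assumptions: gptcache installed, openai<1.0,
# OPENAI_API_KEY set in the environment).
from gptcache import cache
from gptcache.adapter import openai  # drop-in replacement for the openai module

cache.init()            # default setup: exact-match cache keyed on the prompt
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

# The first call goes to the API; an identical repeat is served from the cache.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is GitHub?"}],
)
print(response["choices"][0]["message"]["content"])
```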
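Several of the comments above stress that the win comes from serving *similar*, not just identical, queries from the cache. Here is a sketch of that setup, adapted from GPTCache's README: it swaps the default exact-match lookup for ONNX prompt embeddings, a FAISS vector index, and a distance-based similarity check. The component choices (sqlite, faiss, onnx) are the README defaults, not requirements.

```python
# Semantic (similarity-based) cache configuration, adapted from the GPTCache
# README; sqlite/faiss/onnx are the README's default component choices.
from gptcache import cache
from gptcache.adapter import openai
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

onnx = Onnx()  # local ONNX model that embeds incoming prompts
data_manager = get_data_manager(
    CacheBase("sqlite"),                            # scalar store for cached answers
    VectorBase("faiss", dimension=onnx.dimension),  # vector index for similarity lookups
)
cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
cache.set_openai_key()

# With this in place, "What is GitHub?" and "Can you explain what GitHub is?"
# can share one cached answer, which is the cost/latency win discussed above.
```

The similarity threshold is the main tuning knob: too loose and users get answers cached for a different question, too strict and the cache rarely hits, which is why the Gorilla-CLI comment above notes that the right configuration depends on the actual usage scenario.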