DB-GPT vs GPTCache

| | DB-GPT | GPTCache |
|---|---|---|
| Mentions | 10 | 43 |
| Stars | 11,055 | 6,430 |
| Growth | 5.0% | 1.8% |
| Activity | 9.9 | 7.7 |
| Latest commit | 4 days ago | 26 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
DB-GPT mentions

- Interact your data and environment using the local GPT (https://github.com/csunny/DB-GPT) (2/2) May 2023
- FLaNK Stack Weekly 29 May 2023
- GitHub - csunny/DB-GPT: Interact your data and environment using the local GPT, no data leaks, 100% privately, 100% security
- DB-GPT - OSS to interact with your local LLM
- Show HN: DB-GPT, an LLM tool for database
GPTCache mentions

- Ask HN: What are the drawbacks of caching LLM responses?
  "Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue."
- Open Source Advent Fun Wraps Up!
  "21. GPTCache | Github | tutorial"
- Semantic Cache
- Show HN: Danswer – open-source question answering across all your docs
  "Check this out. Built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache)."
  https://osschat.io/
- GPTCache
- Ask HN: Is LLM Caching Necessary?
  "With the proliferation of large models, an increasing number of enterprises and individual developers are building applications on top of them, so it is worth asking whether caching model responses is necessary during development. Our project: https://github.com/zilliztech/GPTCache"
- Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
  "Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive, because similar queries can be answered at lower cost and with lower latency. Of course, the specific configuration depends on the actual usage scenario."
- Limited budget or machine resources, how to achieve a decent LLM experience?
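Several of the threads above ("Semantic Cache", "Is LLM Caching Necessary?") revolve around the same idea: matching a new prompt against previously answered ones by meaning rather than by exact text, so that near-duplicate questions are served from the cache instead of hitting the model again. A minimal sketch of that lookup, using a toy bag-of-words vector in place of a real embedding model (all names here are illustrative, not GPTCache's actual API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real semantic cache would use a sentence-embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Caches (query, response) pairs; a lookup returns the stored response
    whose query is most similar, if similarity clears a threshold."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response)

    def put(self, query: str, response: str) -> None:
        self.entries.append((embed(query), response))

    def get(self, query: str):
        q = embed(query)
        best, best_sim = None, 0.0
        for vec, response in self.entries:
            sim = cosine(q, vec)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

cache = SemanticCache(threshold=0.8)
cache.put("how do I connect to a postgres database", "Use psycopg2...")

# A near-duplicate question hits the cache; an unrelated one misses.
hit = cache.get("how do I connect to a postgres database ?")   # cache hit
miss = cache.get("what is the capital of France")              # cache miss (None)
```

The drawback raised in the "Ask HN" thread falls out of the threshold: set it too low and semantically different prompts get stale answers; set it too high and the cache degenerates into exact-match lookup. GPTCache exposes this trade-off through configurable embedding and similarity-evaluation components.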
What are some alternatives?
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
guardrails - Adding guardrails to large language models.
gorilla - Gorilla: An API store for LLMs
gorilla-cli - LLMs for your CLI
zamm - Experimental AI chat app
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
Propan - Propan is a powerful and easy-to-use Python framework for building event-driven applications that interact with any MQ Broker
gpt4free - The official gpt4free repository | various collection of powerful language models
jj - JSON Stream Editor (command line utility)
sheetgpt - ChatGPT integration with Google Sheets
jikkou - The Open source Resource as Code framework for Apache Kafka