| | langchain | GPTCache |
|---|---|---|
| Mentions | 1 | 43 |
| Stars | 474 | 6,595 |
| Growth | - | 2.9% |
| Activity | 9.5 | 7.7 |
| Latest commit | 3 days ago | 2 months ago |
| Language | Elixir | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langchain
-
Building Reliable Systems Out of Unreliable Agents
While I'm at it, this Elixir library is great as well: https://github.com/brainlid/langchain
GPTCache
-
Ask HN: What are the drawbacks of caching LLM responses?
Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
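The core idea behind caching LLM responses can be illustrated with a minimal exact-match sketch (names like `LLMCache` and `cached_completion` are hypothetical, not GPTCache's API; GPTCache itself goes further with embedding-based semantic matching):

```python
import hashlib

class LLMCache:
    """Minimal exact-match response cache: a sketch of the idea
    behind tools like GPTCache, keyed on a hash of the prompt."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt):
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt):
        return self._store.get(self._key(prompt))

    def put(self, prompt, response):
        self._store[self._key(prompt)] = response

def cached_completion(cache, prompt, call_llm):
    """Return (response, was_cache_hit); only call the model on a miss."""
    hit = cache.get(prompt)
    if hit is not None:
        return hit, True  # cache hit: no API cost, no latency
    response = call_llm(prompt)
    cache.put(prompt, response)
    return response, False
```

This also makes the main drawback raised in the thread concrete: a cached answer can go stale, so real deployments need an eviction or TTL policy on top.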
-
Open Source Advent Fun Wraps Up!
21. GPTCache | Github | tutorial
- Semantic Cache
-
Show HN: Danswer – open-source question answering across all your docs
Check this out. Built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache)
https://osschat.io/
- GPTCache
-
Ask HN: Is LLM Caching Necessary?
With the proliferation of large models, an increasing number of enterprises and individual developers are now developing applications based on these models. As such, it is worth considering whether large model caching is necessary during the development process.
Our project: https://github.com/zilliztech/GPTCache
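What distinguishes a semantic cache from the exact-match kind is that lookups tolerate rephrasing: prompts are embedded as vectors and a hit is declared when similarity exceeds a threshold. A toy sketch of that mechanism (the character-frequency "embedding" and the `SemanticCache` class are illustrative stand-ins; GPTCache uses real embedding models and a vector store):

```python
import math

def embed(text):
    # Toy "embedding": a character-frequency vector over a-z.
    # A real semantic cache would use a learned embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached response when a stored prompt is
    similar enough, even if the wording differs."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response)

    def get(self, prompt):
        query = embed(prompt)
        best, best_sim = None, 0.0
        for vec, response in self.entries:
            sim = cosine(query, vec)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))
```

With this, "What is the capital of France?" and "what is the capital of france" resolve to the same entry, while an unrelated prompt falls below the threshold and misses.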
-
Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive, since similar queries can be answered more cheaply and with faster responses. Of course, the specific configuration depends on the actual usage scenario.
- Limited budget or machine resources, how to achieve a decent LLM experience?
What are some alternatives?
instructor_ex - Structured outputs for LLMs in Elixir
guardrails - Adding guardrails to large language models.
InternGPT - InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
gorilla-cli - LLMs for your CLI
botpress - The open-source hub to build & deploy GPT/LLM Agents ⚡️
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
chatgpt-ui - ChatGPT UI with auth, OpenAI, Claude, Gemini support, written in Elixir + LiveView
DB-GPT - AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
burr - Build applications that make decisions (chatbots, agents, simulations, etc...). Monitor, persist, and execute on your own infrastructure.
gpt4free - The official gpt4free repository | various collection of powerful language models
sheetgpt - ChatGPT integration with Google Sheets
openai-gpt4 - decentralising the AI industry, free gpt-4/3.5 scripts through several reverse engineered APIs (poe.com, phind.com, chat.openai.com, writesonic.com, sqlchat.ai, t3nsor.com, you.com, etc.) [Moved to: https://github.com/xtekky/gpt4free]