| | pautobot | GPTCache |
|---|---|---|
| Mentions | 4 | 43 |
| Stars | 105 | 6,595 |
| Growth | - | 2.9% |
| Activity | 10.0 | 7.7 |
| Latest commit | 12 months ago | 2 months ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
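The exact formula behind the activity score is not published; as a rough illustration of how recent commits could be weighted more heavily than older ones, here is a hypothetical exponential-decay scheme (the half-life and scaling below are assumptions, not the site's actual method):

```python
import math

def activity_score(commit_ages_weeks, half_life_weeks=26.0, scale=10.0):
    """Hypothetical activity score: each commit contributes a weight
    that halves every `half_life_weeks`; the weighted sum is then
    squashed into the range [0, scale]."""
    weighted = sum(0.5 ** (age / half_life_weeks) for age in commit_ages_weeks)
    # Squash the unbounded sum into 0..scale so scores are comparable.
    return scale * (1 - math.exp(-weighted / 10))

# A project with many recent commits scores higher than one with
# the same number of old commits.
recent = activity_score([1, 2, 3, 4, 5])      # commits 1-5 weeks old
stale = activity_score([50, 60, 70, 80, 90])  # commits ~1-2 years old
```

Any scheme of this shape reproduces the stated property: recent commits dominate the score, and long-dormant projects decay toward zero.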
pautobot
-
How do I set up my own GPT search assistant?
https://github.com/nrl-ai/pautobot/
-
Local GPT or API into ChatGPT
That led me to - https://github.com/nrl-ai/pautobot - which I installed on my laptop. It is a bit slow given my laptop is older, but it works well enough for me to buy into the concept. It really does make a difference to be able to search on not just exact matches but also phrases in 500+ documents.
- How to get started?
-
PAutoBot - Private Auto Robot was released with the first feature: Ask on your local documents
Our PAutoBot - Private Auto Assistant was released with its first feature: ask questions on your documents. More features are coming! Any suggestions for what we should build next? Star it for updates: https://github.com/nrl-ai/pautobot or open an issue with your suggestions: https://github.com/nrl-ai/pautobot/issues.
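The "ask questions on your documents" workflow described above generally boils down to embedding document chunks, retrieving the ones nearest to the query, and passing them to an LLM. A minimal retrieval sketch using toy bag-of-words vectors (pautobot's actual pipeline uses real embedding models; everything below is an illustrative stand-in):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real pipeline would use a sentence-embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_k(query, docs, k=2):
    """Return the k document chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "invoices are stored in the finance folder",
    "the deployment guide covers kubernetes setup",
    "expense reports must be filed by friday",
]
hits = top_k("where are invoices stored", docs, k=1)
```

With real embeddings, this is exactly why such tools can match phrases rather than only exact keywords: similarity is computed in vector space, not by string equality.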
GPTCache
-
Ask HN: What are the drawbacks of caching LLM responses?
Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
-
Open Source Advent Fun Wraps Up!
21. GPTCache | GitHub | tutorial
- Semantic Cache
-
Show HN: Danswer – open-source question answering across all your docs
Check this out. Built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache)
https://osschat.io/
- GPTCache
-
Ask HN: Is LLM Caching Necessary?
With the proliferation of large models, an increasing number of enterprises and individual developers are building applications on top of them. It is therefore worth asking whether caching LLM responses is necessary during development.
Our project: https://github.com/zilliztech/GPTCache
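The caching idea under discussion is a semantic cache: instead of exact-match keys, a cached answer is reused whenever a new prompt is similar enough to a previously seen one. A toy version using token-overlap (Jaccard) similarity, purely to illustrate the concept (GPTCache itself uses embedding models and a vector store; the threshold and similarity function here are assumptions):

```python
class SemanticCache:
    """Toy semantic cache: reuse an answer when the new prompt's
    token overlap (Jaccard similarity) with a cached prompt exceeds
    a threshold. Real systems such as GPTCache use embeddings and a
    vector index instead of token sets."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.entries = []  # list of (token_set, answer) pairs

    @staticmethod
    def _tokens(prompt):
        return set(prompt.lower().split())

    def get(self, prompt):
        q = self._tokens(prompt)
        for tokens, answer in self.entries:
            union = q | tokens
            if union and len(q & tokens) / len(union) >= self.threshold:
                return answer  # cache hit: skip the expensive LLM call
        return None  # cache miss: caller falls through to the LLM

    def put(self, prompt, answer):
        self.entries.append((self._tokens(prompt), answer))

cache = SemanticCache(threshold=0.6)
cache.put("what is the capital of france", "Paris")
hit = cache.get("what is the capital of france?")   # near-duplicate prompt
miss = cache.get("how do i deploy kubernetes")      # unrelated prompt
```

The trade-off the thread is debating lives in the threshold: set it too low and semantically different questions get stale answers; set it too high and the cache rarely hits and saves nothing.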
-
Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive: similar queries become cheaper to serve and can be answered faster. Of course, the specific configuration depends on the actual usage scenario.
- Limited budget or machine resources, how to achieve a decent LLM experience?
What are some alternatives?
Chatbase-Alternative - ChatGPT for every website. Instantly answer your visitors' questions with a personalized chatbot trained on your website content. Alternative to Chatbase, SiteGPT, Dante AI
guardrails - Adding guardrails to large language models.
private-gpt - Deploy smart and secure conversational agents for your employees, using Azure. Able to use both private and public data.
gorilla-cli - LLMs for your CLI
Auto-GPT - Auto-GPT + CLIP vision for stable v0.3.1
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
gpt4free - The official gpt4free repository | a collection of powerful language models
DB-GPT - AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
sheetgpt - ChatGPT integration with Google Sheets
openai-gpt4 - decentralising the AI industry, free gpt-4/3.5 scripts through several reverse-engineered APIs (poe.com, phind.com, chat.openai.com, writesonic.com, sqlchat.ai, t3nsor.com, you.com, etc.) [Moved to: https://github.com/xtekky/gpt4free]