GPTCache Alternatives
Similar projects and alternatives to GPTCache
-
InfluxDB
Power Real-Time Data Analytics at Scale. Get real-time insights from all types of time series data with InfluxDB. Ingest, query, and analyze billions of data points in real-time with unbounded cardinality.
-
qdrant
Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
-
Ray
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
-
shell_gpt
A command-line productivity tool powered by AI large language models such as GPT-4 that helps you accomplish your tasks faster and more efficiently.
-
SaaSHub
SaaSHub - Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives
-
megabots
🤖 State-of-the-art, production ready LLM apps made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵
-
DB-GPT
AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents
-
temporian
Temporian is an open-source Python library for preprocessing ⚡ and feature engineering 🛠 temporal data 📈 for machine learning applications 🤖
-
vectorflow
VectorFlow is a high volume vector embedding pipeline that ingests raw data, transforms it into vectors and writes it to a vector DB of your choice. (by dgarnitz)
-
super-gradients
Easily train or fine-tune SOTA computer vision models with one open source training library. The home of YOLO-NAS.
GPTCache reviews and mentions
-
Ask HN: What are the drawbacks of caching LLM responses?
Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
-
Open Source Advent Fun Wraps Up!
21. GPTCache | GitHub | tutorial
- Semantic Cache
-
Show HN: Danswer – open-source question answering across all your docs
Check this out. Built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache)
https://osschat.io/
- GPTCache
-
Ask HN: Is LLM Caching Necessary?
With the proliferation of large language models, a growing number of enterprises and individual developers are building applications on top of them. It is therefore worth asking whether caching model responses is necessary during development.
Our project: https://github.com/zilliztech/GPTCache
-
Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive, because similar queries can be answered from the cache, which is both cheaper and faster. Of course, the specific configuration should be tuned to the real usage scenario.
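To illustrate the idea behind serving similar queries from a cache, here is a toy semantic cache in plain Python. This is a hypothetical sketch, not GPTCache's actual implementation: GPTCache compares prompts via embeddings and a vector store, while this example uses simple string similarity (`difflib`) so it runs with the standard library alone. The `SemanticCache` class, its `threshold` parameter, and the sample prompts are all invented for illustration.

```python
import difflib


class SemanticCache:
    """Toy semantic cache: returns a stored answer when a new prompt is
    sufficiently similar to one seen before, so the expensive LLM call
    can be skipped. Real systems (e.g. GPTCache) use embedding-based
    similarity instead of string matching."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold      # minimum similarity for a cache hit
        self.entries = []               # list of (prompt, answer) pairs

    def get(self, prompt):
        """Return the cached answer for the most similar prompt, or None."""
        best_answer, best_score = None, 0.0
        for cached_prompt, answer in self.entries:
            score = difflib.SequenceMatcher(
                None, prompt.lower(), cached_prompt.lower()
            ).ratio()
            if score > best_score:
                best_answer, best_score = answer, score
        return best_answer if best_score >= self.threshold else None

    def put(self, prompt, answer):
        """Store a prompt/answer pair after a cache miss."""
        self.entries.append((prompt, answer))


cache = SemanticCache(threshold=0.8)
cache.put("What is a vector database?",
          "A database optimized for similarity search.")

# A near-duplicate question hits the cache instead of the LLM...
hit = cache.get("What is a vector database")
# ...while an unrelated question misses and would go to the model.
miss = cache.get("How do I bake bread?")
```

On a hit the application returns the cached answer immediately; on a miss it calls the model and stores the result, which is the cost/latency trade-off the comment above describes.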
- Limited budget or machine resources, how to achieve a decent LLM experience?
Stats
zilliztech/GPTCache is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of GPTCache is Python.