| | GPTCache | super-gradients |
|---|---|---|
| Mentions | 43 | 8 |
| Stars | 6,514 | 4,366 |
| Growth | 3.1% | 2.1% |
| Activity | 7.7 | 9.5 |
| Latest commit | about 2 months ago | 4 days ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTCache

- Ask HN: What are the drawbacks of caching LLM responses?
  Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
- Open Source Advent Fun Wraps Up!
  21. GPTCache | Github | tutorial
- Semantic Cache
- Show HN: Danswer – open-source question answering across all your docs
  Check this out. Built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache): https://osschat.io/
- GPTCache
- Ask HN: Is LLM Caching Necessary?
  As large models proliferate, more enterprises and individual developers are building applications on top of them, so it is worth asking whether caching LLM responses is necessary during development. Our project: https://github.com/zilliztech/GPTCache
- Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
  Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive, since similar queries can be answered more cheaply and faster. The specific configuration, of course, depends on the real usage scenario.
- Limited budget or machine resources, how to achieve a decent LLM experience?
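The idea these posts debate, returning a stored response when a new prompt is semantically close to one already answered, can be sketched in plain Python. This is an illustration of the concept only, not GPTCache's implementation: a real semantic cache uses a learned embedding model and a vector store such as Milvus, whereas the sketch below uses a toy bag-of-words similarity and a hypothetical `SemanticCache` class.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real semantic cache would use
    # a learned sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, prompt):
        q = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]  # cache hit: the LLM call is skipped entirely
        return None  # cache miss: caller falls through to the model

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.put("what is the capital of France", "Paris")
print(cache.get("what is the capital of France?"))  # prints "Paris"
```

The trade-off the HN threads raise lives in the threshold: set it too low and subtly different questions get the wrong cached answer; set it too high and nothing ever hits.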
super-gradients

- Zero-Shot Prediction Plugin for FiftyOne
  Most computer vision models are trained to predict over a preset list of label classes. In object detection, for instance, many of the most popular models, such as YOLOv8 and YOLO-NAS, are pretrained on the classes from the MS COCO dataset. If you download the weight checkpoints for these models and run prediction on your dataset, you will get object detection bounding boxes for the 80 COCO classes.
- Open Source Advent Fun Wraps Up!
  23. SuperGradients | Github | tutorial
- FLaNK Stack Weekly 06 Nov 2023
- Autodistill: A new way to create CV models
  And the target models include: YOLOv8 (You Only Look Once), YOLO-NAS, YOLOv5, and DETR.
- FLaNK Stack for 15 May 2023
- GitHub - Deci-AI/super-gradients: Easily train or fine-tune SOTA co...
- Meet YOLO-NAS: An Open-Sourced YOLO-based Architecture Redefining State-of-the-Art in Object Detection
- FLiPN-FLaNK Stack Weekly May 8 2023
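The FiftyOne post above points at the key limitation of closed-set detectors such as pretrained YOLO-NAS: their raw outputs are indices into a fixed training vocabulary, so they can never name a class outside it. A minimal sketch of that decoding step (the output format and class subset here are illustrative, not SuperGradients' actual API):

```python
# Illustrative subset of the 80 MS COCO classes a pretrained detector knows.
COCO_CLASSES = ["person", "bicycle", "car", "dog", "cat"]

def decode_detections(raw):
    """Map raw (box, class_index, score) tuples to labeled detections.

    The class index can only ever point into the fixed training label
    list; detecting anything else requires a zero-shot model instead.
    """
    return [{"box": box, "label": COCO_CLASSES[idx], "score": score}
            for box, idx, score in raw]

raw = [((10, 20, 50, 80), 0, 0.91), ((5, 5, 30, 40), 3, 0.77)]
for det in decode_detections(raw):
    print(det["label"], det["score"])  # person 0.91 / dog 0.77
```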
What are some alternatives?
guardrails - Adding guardrails to large language models.
ultralytics - NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
gorilla-cli - LLMs for your CLI
SegGradCAM - SEG-GRAD-CAM: Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
highstorm - Open Source Event Monitoring
DB-GPT - AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
openvino_notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
gpt4free - The official gpt4free repository | various collection of powerful language models
pyvideotrans - Translate a video from one language to another and add dubbing.
sheetgpt - ChatGPT integration with Google Sheets
Detic - Code release for "Detecting Twenty-thousand Classes using Image-level Supervision".