| | StableVideo | GPTCache |
|---|---|---|
| Mentions | 7 | 43 |
| Stars | 1,327 | 6,446 |
| Growth | - | 2.1% |
| Activity | 6.4 | 7.7 |
| Latest commit | 8 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
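The site does not publish its activity formula, but the description above (recent commits weigh more than older ones) suggests a recency-weighted sum. A minimal illustrative sketch, assuming an exponential decay with a hypothetical half-life parameter:

```python
def activity_score(commit_ages_weeks, half_life_weeks=26):
    """Illustrative recency-weighted activity score: each commit
    contributes a weight that decays exponentially with its age,
    so recent commits count more than older ones. The half-life
    value is an assumption, not the site's actual formula."""
    return sum(0.5 ** (age / half_life_weeks) for age in commit_ages_weeks)

# A project with mostly recent commits scores higher than one with
# the same number of mostly old commits.
recent = activity_score([1, 2, 3, 4, 5])      # ages in weeks
stale = activity_score([80, 90, 100, 110, 120])
```

With this kind of weighting, two projects with identical commit counts can land far apart on the activity scale purely because of when those commits happened.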
StableVideo

- MagicEdit: High-Fidelity Temporally Coherent Video Editing
  Looks like it's building on the same concepts as StableVideo: https://github.com/rese1f/StableVideo
- StableVideo: Text-Driven Consistency-Aware Diffusion Video Editing
  You can see the source of the GitHub Pages site on GitHub: https://github.com/rese1f/StableVideo/tree/web
  It seems they forked from somebody else and then changed the content to match their paper.
- StableVideo
  Code: https://github.com/rese1f/StableVideo
GPTCache

- Ask HN: What are the drawbacks of caching LLM responses?
  Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
- Open Source Advent Fun Wraps Up!
  21. GPTCache | GitHub | tutorial
- Show HN: Danswer - open-source question answering across all your docs
  Check this out. Built on a vector database (https://github.com/milvus-io/milvus) and a semantic cache (https://github.com/zilliztech/GPTCache): https://osschat.io/
- Ask HN: Is LLM Caching Necessary?
  With the proliferation of large models, an increasing number of enterprises and individual developers are building applications on top of them. It is therefore worth considering whether caching model responses is necessary during development.
  Our project: https://github.com/zilliztech/GPTCache
- Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
  Maybe [GPTCache](https://github.com/zilliztech/GPTCache) can make it more attractive: similar queries become cheaper to serve and are answered faster. Of course, the specific configuration needs to be based on real usage scenarios.
- Limited budget or machine resources, how to achieve a decent LLM experience?
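The semantic-cache idea behind GPTCache (serve a stored answer when a new query is close enough in embedding space to an earlier one, instead of calling the LLM again) can be sketched in plain Python. This is a toy illustration of the concept, not GPTCache's actual API: `embed` is a deliberately crude stand-in for a real sentence-embedding model, and `SemanticCache`, its threshold, and the similarity metric are all assumptions for the sake of the example.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector. A real semantic
    cache would use a sentence-embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    """Return a cached answer when a new query is similar enough
    to one seen before; otherwise signal a miss (i.e. the caller
    should query the LLM and store the result)."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer)

    def get(self, query):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
hit = cache.get("what is the capital of france ?")  # near-duplicate: cache hit
miss = cache.get("how do I bake bread")             # unrelated: cache miss
```

The drawback the HN threads above circle around falls out of the threshold: set it too loose and semantically different questions get the wrong cached answer; set it too tight and the cache rarely hits and saves nothing.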
What are some alternatives?
MotionDiffuse - MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
guardrails - Adding guardrails to large language models.
DiffSinger - DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism (SVS & TTS); AAAI 2022; Official code
gorilla-cli - LLMs for your CLI
PaddleNLP - Easy-to-use and powerful NLP and LLM library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including text classification, neural search, question answering, information extraction, document intelligence, sentiment analysis, etc.
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
ReVersion - ReVersion: Diffusion-Based Relation Inversion from Images
DB-GPT - AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
ReuseAndDiffuse - Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation
gpt4free - The official gpt4free repository | a collection of powerful language models
LAMP - Official implement code of LAMP: Learn a Motion Pattern by Few-Shot Tuning a Text-to-Image Diffusion Model (Few-shot-based text-to-video diffusion)
sheetgpt - ChatGPT integration with Google Sheets