learn-langchain vs FastChat

| | learn-langchain | FastChat |
|---|---|---|
| Mentions | 8 | 83 |
| Stars | 274 | 34,708 |
| Growth (stars, month over month) | - | 4.3% |
| Activity | 6.7 | 9.6 |
| Latest commit | almost 1 year ago | about 11 hours ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
learn-langchain
- Alternative to LangChain for open LLMs?
- Can someone explain why there isn't a good interface for the oobabooga api in langchain?
- Vicuna/LLaMA Models and Langchain Tools
- How to run .safetensors models with langchain/huggingface pipelines?
- Local Vicuna: Building a Q/A bot over a text file with langchain, Vicuna and Sentence Transformers
- Embeddings?
  Source code: https://github.com/paolorechia/learn-langchain/tree/main/langchain_app/document
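The Q/A-over-a-text-file pattern mentioned above reduces to: embed the chunks, embed the question, and retrieve the most similar chunk. A minimal dependency-free sketch, with a toy bag-of-words vector standing in for Sentence Transformers embeddings (a real pipeline would swap in a proper embedding model):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real Q/A bot would use
    # Sentence Transformers here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query):
    # Return the chunk most similar to the query.
    q = embed(query)
    return max(chunks, key=lambda c: cosine(embed(c), q))

chunks = [
    "Vicuna is a chat model fine-tuned from LLaMA.",
    "FastChat serves large language models.",
]
print(retrieve(chunks, "What is Vicuna fine-tuned from?"))
```

The retrieved chunk would then be stuffed into the LLM prompt as context; that retrieval step is where the embeddings come in.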
- Is it possible to run GPTQ quantized 4bit 13B Vicuna locally on a GPU with langchain?
  If not and you need to stream and cut off the text more manually, you may want to take a look at this implementation of Vicuna under LangChain: https://github.com/paolorechia/learn-langchain/
- Creating an AI Agent with Vicuna 7B and Langchain: fetching a random Chuck Norris joke
  You can find my code here: https://github.com/paolorechia/learn-langchain
FastChat
- GPT4.5 or GPT5 being tested on LMSYS?
  gpt2-chatbot isn't the only "mystery model" on LMSYS. Another is "deluxe-chat".
  When asked about it in October last year, LMSYS replied [0]: "It is an experiment we are running currently. More details will be revealed later."
  One distinguishing feature of "deluxe-chat": although it gives high-quality answers, it is very slow, so slow that the arena displays a warning whenever it is invoked.
  [0] https://github.com/lm-sys/FastChat/issues/2527
- LLMs on your local Computer (Part 1)
  FastChat
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- ChatGPT for Teams
- FastChat: An open platform for training and serving large language models
- LM Studio – Discover, download, and run local LLMs
  How does it compare with something like FastChat? https://github.com/lm-sys/FastChat
  The feature sets seem to have a decent amount of overlap. One limitation of FastChat, as far as I can tell, is that you are limited to the models FastChat supports (though I suspect it would be a minor change to support arbitrary models).
- Video-LLaVA
  Looks like the Vicuna repo is Apache 2.0 also [1].
  What's the interpretation of copyright law that would prevent the code from being Apache 2.0 based on the source of the fine-tuning dataset?
  [1] https://github.com/lm-sys/FastChat
- 🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
  Check out how to get started with FastChat. Support FastChat on GitHub ⭐
- Show HN: ChatAPI – PWA to Use ChatGPT by API, Built with Alpine.js
  For something a little heavier but much more robust in terms of features and functionality, I've been enjoying FastChat: https://github.com/lm-sys/FastChat
  It allows you to plug in different backends so that you can use OpenAI-compatible clients with various LLMs, self-hosted or otherwise.
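The plug-in-backends point rests on FastChat exposing an OpenAI-compatible HTTP API (started with `python3 -m fastchat.serve.openai_api_server`, per the FastChat README). A minimal stdlib sketch of targeting it; the port and model name are illustrative assumptions, and actually sending the request requires a running server:

```python
import json
import urllib.request

# Assumes a FastChat OpenAI-compatible server is running locally, e.g.:
#   python3 -m fastchat.serve.openai_api_server --host localhost --port 8000
# The port and model name below are illustrative, not defaults you can rely on.
BASE_URL = "http://localhost:8000/v1"

def chat_completion_request(model, prompt):
    """Build an OpenAI-style chat completion request aimed at a FastChat backend."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_completion_request("vicuna-7b-v1.5", "Hello")
# urllib.request.urlopen(req) would send it once the server is up.
```

Because the wire format matches OpenAI's, any OpenAI-compatible client library can be pointed at the same base URL instead of hand-building requests like this.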
What are some alternatives?
AgentOoba - An autonomous AI agent extension for Oobabooga's web ui
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
gptq_for_langchain - A guide about how to use GPTQ models with langchain
llama.cpp - LLM inference in C/C++
vicuna-react-lora - An experiment of finetuning Vicuna with ReAct instructions
gpt4all - gpt4all: run open-source LLMs anywhere
GPTQ-for-LLaMa-API - Provides a way to use the GPTQ LLaMa model as an API
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
BrainChulo - Harnessing the Memory Power of the Camelids
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It lets you generate text, audio, video, and images, with voice-cloning capabilities.
andromeda-chain - Serving hugging face guidance behind a server
llama-cpp-python - Python bindings for llama.cpp