GPTQ-for-LLaMa-API vs learn-langchain

| | GPTQ-for-LLaMa-API | learn-langchain |
|---|---|---|
| Mentions | 5 | 8 |
| Stars | 40 | 274 |
| Growth | - | - |
| Activity | 4.7 | 6.7 |
| Last commit | 12 months ago | almost 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-LLaMa-API
- Alternative ways for running models locally and hosting APIs
- Can someone explain why there isn't a good interface for the oobabooga api in langchain?
oobabooga has to support far too many models, which makes the whole thing unnecessarily complicated. If you have some development experience, you could build your own API in a few lines of Python code; it's not hard if you build it from scratch and learn along the way. I have built some example repositories for hosting GPTQ-based models (the idea is sketched below). You can take a look at them: https://github.com/mzbac/GPTQ-for-LLaMa-API https://github.com/mzbac/gptq-cuda-api
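To make the "few lines of Python" claim concrete, here is a minimal sketch of wrapping a locally loaded Hugging Face model in an HTTP API with FastAPI. It is not the GPTQ-for-LLaMa-API code; the model path, route name, and request fields are placeholders, and the GPTQ-specific loading step is shown separately further down.

```python
# Minimal sketch: serve a locally loaded causal LM over HTTP with FastAPI.
# MODEL_PATH and the /generate route are illustrative placeholders.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/your/model"  # hypothetical local model directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 200

@app.post("/generate")
def generate(req: GenerateRequest):
    # Tokenize the prompt, generate a completion, and return it as JSON.
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    return {"text": tokenizer.decode(output_ids[0], skip_special_tokens=True)}
```

Run it with something like `uvicorn server:app --host 0.0.0.0 --port 8000`, and any client (including a LangChain wrapper) can POST prompts to it.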
- Looking to self-host Llama on a remote server, could use some help
I ran https://github.com/mzbac/GPTQ-for-LLaMa-API on my home server. It should be easy enough to create a Dockerfile and make it hostable via Docker.
- How do I load a GPTQ LLaMA model (Vicuna) in .safetensors format?
If you have some experience with Python, you can take a look at my repo. It contains only the minimal logic for loading a GPTQ model and serving it as an API: https://github.com/mzbac/GPTQ-for-LLaMa-API
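For the loading question itself, one common route (not the GPTQ-for-LLaMa code the repo uses, but the AutoGPTQ library as a swapped-in alternative) looks roughly like the sketch below. The model directory name is a placeholder for wherever your quantized .safetensors checkpoint lives.

```python
# Hedged sketch: load a 4-bit GPTQ checkpoint stored as .safetensors with
# AutoGPTQ and run a quick generation. Paths are placeholders.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_dir = "path/to/vicuna-13b-gptq"  # placeholder: local dir or HF repo id

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    use_safetensors=True,  # checkpoint is stored as .safetensors
    device="cuda:0",
)

prompt = "### Human: Hello!\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```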
- Just created a repository to show how to serve a GPTQ model via an API
Hopefully it will make it easier for any developer who wants to build an integration with their app: https://github.com/mzbac/GPTQ-for-LLaMa-API
learn-langchain
- Alternative to LangChain for open LLMs?
- Can someone explain why there isn't a good interface for the oobabooga api in langchain?
- Vicuna/LLaMA Models and Langchain Tools
- How to run .safetensors models with langchain/huggingface pipelines?
- Local Vicuna: Building a Q/A bot over a text file with langchain, Vicuna and Sentence Transformers
- Embeddings?
Source code: https://github.com/paolorechia/learn-langchain/tree/main/langchain_app/document
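The linked folder covers the Q/A-over-a-text-file workflow with LangChain and Sentence Transformers. A minimal sketch of that idea (not the repo's exact code; the file name and embedding model are assumptions) could look like this:

```python
# Hedged sketch: embed a text file with a sentence-transformers model and
# query it through a LangChain vector store. File name is a placeholder.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

with open("my_document.txt") as f:  # placeholder input file
    text = f.read()

chunks = CharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_text(text)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")  # sentence-transformers
store = Chroma.from_texts(chunks, embeddings)

# Retrieve the chunks most similar to a question; feed these to the LLM for Q/A.
for doc in store.similarity_search("What is this document about?", k=3):
    print(doc.page_content)
```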
- Is it possible to run GPTQ-quantized 4-bit 13B Vicuna locally on a GPU with langchain?
If not, and you need to stream and cut off the text more manually, you may want to take a look at this implementation of Vicuna under LangChain: https://github.com/paolorechia/learn-langchain/
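The "cut off the text manually" part usually means wrapping a locally hosted Vicuna endpoint as a custom LangChain LLM and truncating the output at stop strings yourself. The sketch below illustrates that pattern under stated assumptions (the endpoint URL and JSON shape are invented for illustration); it is not learn-langchain's implementation.

```python
# Hedged sketch: a custom LangChain LLM that calls a local HTTP endpoint and
# manually cuts the completion off at the first stop sequence.
from typing import List, Optional

import requests
from langchain.llms.base import LLM

class LocalVicuna(LLM):
    endpoint: str = "http://localhost:8000/generate"  # assumed local server

    @property
    def _llm_type(self) -> str:
        return "local-vicuna"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs) -> str:
        resp = requests.post(self.endpoint, json={"prompt": prompt})
        text = resp.json()["text"]
        # Manual cut-off: truncate at the first stop sequence, if any appears.
        for s in stop or []:
            idx = text.find(s)
            if idx != -1:
                text = text[:idx]
        return text
```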
- Creating an AI Agent with Vicuna 7B and Langchain: fetching a random Chuck Norris joke
You can find my code here: https://github.com/paolorechia/learn-langchain
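As a rough sketch of that agent idea (not the repo's code): expose a Chuck Norris joke fetcher as a LangChain tool and let a zero-shot agent decide to call it. The `OpenAI` LLM below is only a stand-in; the repo drives the agent with a local Vicuna 7B instead.

```python
# Hedged sketch: a LangChain agent with a single tool that fetches a random
# joke from the public api.chucknorris.io endpoint.
import requests
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI  # stand-in LLM; swap in a local Vicuna wrapper

def random_chuck_norris_joke(_: str) -> str:
    """Fetch a random joke from api.chucknorris.io (input is ignored)."""
    return requests.get("https://api.chucknorris.io/jokes/random").json()["value"]

tools = [
    Tool(
        name="chuck_norris_joke",
        func=random_chuck_norris_joke,
        description="Returns a random Chuck Norris joke. Input is ignored.",
    )
]

agent = initialize_agent(
    tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
print(agent.run("Tell me a Chuck Norris joke."))
```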
What are some alternatives?
gptq-cuda-api
AgentOoba - An autonomous AI agent extension for Oobabooga's web ui
text-generation-inference - Large Language Model Text Generation Inference
gptq_for_langchain - A guide about how to use GPTQ models with langchain
llama-cpp-python - Python bindings for llama.cpp
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
vicuna-react-lora - An experiment of finetuning Vicuna with ReAct instructions
BrainChulo - Harnessing the Memory Power of the Camelids
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
andromeda-chain - Serving hugging face guidance behind a server