GPTQ-for-LLaMa-API
Provides a way to use a GPTQ-quantized LLaMA model as an API (by mzbac)
| | GPTQ-for-LLaMa-API | gptq-cuda-api |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 40 | 19 |
| Growth | - | - |
| Activity | 4.7 | 3.9 |
| Last commit | 12 months ago | 12 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-LLaMa-API
Posts with mentions or reviews of GPTQ-for-LLaMa-API. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-25.
- Alternative ways for running models locally and hosting APIs
- Can someone explain why there isn't a good interface for the oobabooga api in langchain?
oobabooga has to support far too many models, which makes the whole thing unnecessarily complicated. If you have some development experience, you could build your own API in a few lines of Python code. It's not hard if you build it from scratch and learn along the way. I have built some example repositories for hosting GPTQ-related models; you can have a look at them: https://github.com/mzbac/GPTQ-for-LLaMa-API https://github.com/mzbac/gptq-cuda-api
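As a rough illustration of the "a few lines of Python" claim above (a minimal sketch only, not the linked repos' actual code), here is a tiny FastAPI server wrapping a Hugging Face causal LM; the model name, endpoint path, and request fields are assumptions:

```python
# Minimal sketch only, not the linked repos' code: a tiny FastAPI server that
# exposes a Hugging Face causal LM behind a single /generate endpoint.
# MODEL_ID, the endpoint path, and the request fields are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "facebook/opt-125m"  # placeholder; swap in your own checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

app = FastAPI()


class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128


@app.post("/generate")
def generate(req: GenerateRequest):
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    return {"text": tokenizer.decode(output_ids[0], skip_special_tokens=True)}

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
```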
- Looking to selfhost Llama on remote server, could use some help
I ran this https://github.com/mzbac/GPTQ-for-LLaMa-API for my home server. It should be easy enough to create a Dockerfile and make it hostable via Docker.
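A sketch of the Docker suggestion above; the repo may not ship a Dockerfile, and the base image, requirements file, and entry-point script here are assumptions:

```dockerfile
# Illustrative sketch only; the base image, requirements file, and
# entry-point script are assumptions, not the repo's actual setup.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .

EXPOSE 8000
# Entry point is a placeholder; use whatever script actually starts the API.
CMD ["python3", "api.py"]
```

Built and run with something like `docker build -t gptq-api .` followed by `docker run --gpus all -p 8000:8000 gptq-api`.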
- How do I load a gptq LLaMA model (Vicuna) in .safetensors format?
If you have some experience with Python, you can take a look at my repo. It contains only the minimal logic needed to load a GPTQ model and serve it as an API. https://github.com/mzbac/GPTQ-for-LLaMa-API
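The linked repo uses its own GPTQ-for-LLaMa-based loading code; as an illustrative sketch of the same idea, AutoGPTQ (listed under the alternatives below) can load a quantized .safetensors checkpoint like this, where the model directory is a placeholder:

```python
# Illustrative sketch only: load a GPTQ-quantized .safetensors checkpoint
# with AutoGPTQ and run a single generation. The model directory is a
# placeholder pointing at a folder containing the weights and config files.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_dir = "path/to/vicuna-gptq"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    use_safetensors=True,  # read the .safetensors checkpoint
    device="cuda:0",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```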
- Just created a repository to show how to serve a GPTQ model via an API
Hopefully, it will make it easier for any developer who wants to build some integration with their app. https://github.com/mzbac/GPTQ-for-LLaMa-API
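Once such a server is running, integrating it into an app is a single HTTP call. A hypothetical client snippet, assuming the same endpoint and JSON fields as the server sketch earlier (not the repo's documented API):

```python
# Hypothetical client call: the URL, endpoint path, and JSON fields are
# assumptions matching the server sketch above, not the repo's documented API.
import requests

resp = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "Write a haiku about GPUs.", "max_new_tokens": 64},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```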
gptq-cuda-api
Posts with mentions or reviews of gptq-cuda-api. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-25.
- Example of how to run GPTQ models on multiple GPUs
Here is the repository with the minimal code required to run GPTQ models on multiple GPUs: https://github.com/mzbac/gptq-cuda-api
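One common way to spread a model across several GPUs in Python is Accelerate-style sharding via device_map="auto"; the repo may do this differently, so the sketch below is illustrative only, with a placeholder model path and assumed per-GPU memory caps:

```python
# Illustrative sketch of the general multi-GPU idea, not the repo's actual code:
# let Accelerate shard a causal LM across all visible GPUs via device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "path/to/llama-model"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="auto",                    # split layers across available GPUs
    max_memory={0: "20GiB", 1: "20GiB"},  # optional per-GPU limits (assumed values)
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```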
- Can someone explain why there isn't a good interface for the oobabooga api in langchain?
oobabooga has to support far too many models, which makes the whole thing unnecessarily complicated. If you have some development experience, you could build your own API in a few lines of Python code. It's not hard if you build it from scratch and learn along the way. I have built some example repositories for hosting GPTQ-related models; you can have a look at them: https://github.com/mzbac/GPTQ-for-LLaMa-API https://github.com/mzbac/gptq-cuda-api
What are some alternatives?
When comparing GPTQ-for-LLaMa-API and gptq-cuda-api you can also consider the following projects:
text-generation-inference - Large Language Model Text Generation Inference
AutoGPTQ - An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
llama-cpp-python - Python bindings for llama.cpp
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
learn-langchain
AgentOoba - An autonomous AI agent extension for Oobabooga's web ui
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
guidance - A guidance language for controlling large language models.