AutoGPTQ
| | AutoGPTQ | gptq-cuda-api |
|---|---|---|
| Mentions | 19 | 2 |
| Stars | 3,806 | 19 |
| Growth | 5.0% | - |
| Activity | 9.3 | 3.9 |
| Latest commit | 4 days ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AutoGPTQ
- Setting up LLAMA2 70B Chat locally
- Experience of setting up LLAMA 2 70B Chat locally
-
GPT-4 Details Leaked
Deploying the 60B version is a challenge, though, and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa. You can then improve inference speed by using https://github.com/turboderp/exllama.
If you prefer an "instruct" model à la ChatGPT (i.e., one that does not need few-shot learning to produce good results), you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
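As a rough sketch of the AutoGPTQ route mentioned above: loading a pre-quantized GPTQ checkpoint uses AutoGPTQ's documented `from_quantized` loader. The model ID below is a placeholder, not the truncated link from the comment.

```python
# Minimal sketch: load a pre-quantized GPTQ checkpoint with AutoGPTQ.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/some-GPTQ-model"  # placeholder; pick a GPTQ model from the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
# from_quantized reads the quantize_config.json shipped with the checkpoint,
# so bits/group_size usually don't need to be passed explicitly.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", use_safetensors=True)

prompt = "### Instruction: Explain GPTQ in one sentence.\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```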
-
Loader Types
AutoGPTQ: an attempt at standardizing GPTQ-for-LLaMa and turning it into a library that is easier to install and use, and that supports more models. https://github.com/PanQiWei/AutoGPTQ
- WizardLM-33B-V1.0-Uncensored
-
Any help converting an interesting .bin model to 4 bit 128g GPTQ? Bloke?
Just use the script: https://github.com/PanQiWei/AutoGPTQ/blob/main/examples/quantization/quant_with_alpaca.py
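For reference, the core of that script reduces to AutoGPTQ's documented quantize flow. A minimal sketch, with a placeholder source model and a single calibration sample standing in for the Alpaca data the script actually uses:

```python
# Sketch of quantizing an HF (.bin) model to 4-bit 128g GPTQ with AutoGPTQ.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained_dir = "facebook/opt-125m"   # placeholder source model
quantized_dir = "opt-125m-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(pretrained_dir, use_fast=True)
# Calibration examples: tokenized text the quantizer uses to measure activations.
examples = [tokenizer("AutoGPTQ is an easy-to-use model quantization library.")]

# 4-bit weights with group size 128 -- the "4 bit 128g" asked about above.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(pretrained_dir, quantize_config)
model.quantize(examples)
model.save_quantized(quantized_dir, use_safetensors=True)
```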
-
LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
In the wild, people tend to use GPTQ quantization for pure GPU inference: https://github.com/PanQiWei/AutoGPTQ
And ggml's quant for CPU inference with some offload, which just got updated to a more GPTQ-like method days ago: https://github.com/ggerganov/llama.cpp/pull/1684
Some other runtimes like Apache TVM also have their own quant implementations: https://github.com/mlc-ai/mlc-llm
For training, 4-bit bitsandbytes is SOTA, as far as I know.
TBH I'm not sure why this November paper is being linked. Few are running 8 bit models when they could fit a better 3-5 bit model in the same memory pool.
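To make the contrast concrete, here is a hedged sketch of both bitsandbytes loading paths via transformers (assumes transformers >= 4.30 with bitsandbytes installed; the model ID is a placeholder):

```python
# LLM.int8() 8-bit inference vs. the 4-bit loading the comment favors.
from transformers import AutoModelForCausalLM

model_id = "facebook/opt-1.3b"  # placeholder

# LLM.int8(): the 8-bit matrix multiplication described in the linked paper.
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", load_in_8bit=True
)

# 4-bit loading (the bitsandbytes route used for QLoRA-style training),
# fitting a larger model into the same memory pool.
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", load_in_4bit=True
)
```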
-
Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
Instead of integrating GPTQ-for-LLaMa, use AutoGPTQ.
- AutoGPTQ - An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm
gptq-cuda-api
-
Example of how to run GPTQ models on multiple GPUs
Here is the repository with minimal code required to run GPTQ on multiple GPUs https://github.com/mzbac/gptq-cuda-api
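That repo has its own implementation; as a generic sketch of the same idea, AutoGPTQ's `from_quantized` can shard a quantized model across GPUs via a `max_memory` map (the model ID and memory limits here are assumptions, not the repo's code):

```python
# Sketch: multi-GPU GPTQ inference by splitting layers across devices.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/some-GPTQ-model",  # placeholder
    use_safetensors=True,
    # Cap per-device usage so accelerate spreads layers over both GPUs,
    # spilling anything left over to CPU RAM.
    max_memory={0: "20GiB", 1: "20GiB", "cpu": "30GiB"},
)
```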
-
Can someone explain why there isn't a good interface for the oobabooga api in langchain?
oobabooga has to support way too many models, which makes the whole thing unnecessarily complicated. If you have some development experience, you could build your own API in a few lines of Python code. It's not hard if you build from scratch and learn along the way. I have built some example repositories for hosting GPTQ-related models; have a look at them. https://github.com/mzbac/GPTQ-for-LLaMa-API https://github.com/mzbac/gptq-cuda-api
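The "few lines of Python" could look something like this minimal FastAPI wrapper; a sketch only, not the code from the linked repos (the model ID is a placeholder):

```python
# Minimal self-hosted generation API around a GPTQ model.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

MODEL = "TheBloke/some-GPTQ-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(MODEL, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(MODEL, device="cuda:0", use_safetensors=True)

app = FastAPI()

class Request(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: Request):
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    return {"text": tokenizer.decode(output[0], skip_special_tokens=True)}
```

Run it with `uvicorn app:app` and POST a JSON body like `{"prompt": "Hello"}` to `/generate`.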
What are some alternatives?
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
llama.cpp - LLM inference in C/C++
AgentOoba - An autonomous AI agent extension for Oobabooga's web ui
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
GPTQ-for-LLaMa-API - Provides a way to use a GPTQ-quantized LLaMa model as an API
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
guidance - A guidance language for controlling large language models.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
self-refine - LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
learn-langchain