| | AutoGPTQ | SpQR |
|---|---|---|
| Mentions | 19 | 4 |
| Stars | 3,806 | 512 |
| Growth | 5.0% | - |
| Activity | 9.3 | 6.7 |
| Latest commit | 4 days ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AutoGPTQ
- Experience of setting up LLAMA 2 70B Chat locally
- GPT-4 Details Leaked
Deploying the 60B version is a challenge, though, and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa. Then you can improve the inference speed by using https://github.com/turboderp/exllama.
If you prefer an "instruct" model à la ChatGPT (i.e., one that does not need few-shot prompting to produce good results), you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
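The tools linked above store weights in 4 bits with per-group scales. As a toy, hypothetical sketch of that storage idea (plain round-to-nearest with one scale per group — not the actual GPTQ algorithm, which additionally corrects rounding error layer by layer):

```python
def quantize_4bit(weights, group_size=128):
    """Round-to-nearest 4-bit quantization with one scale per group.

    Toy illustration of the GPTQ-style storage format only; real GPTQ
    adjusts remaining weights to compensate for each rounding error.
    """
    qweights, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        # Symmetric scale: map the largest magnitude onto the int4 range [-8, 7].
        scale = max(abs(w) for w in group) / 7 or 1.0
        scales.append(scale)
        qweights.append([max(-8, min(7, round(w / scale))) for w in group])
    return qweights, scales

def dequantize_4bit(qweights, scales):
    # Reconstruct approximate weights: integer code times its group's scale.
    return [q * s for group, s in zip(qweights, scales) for q in group]

w = [0.05, -0.3, 0.7, -0.01, 0.12, 0.9, -0.44, 0.2]
qw, sc = quantize_4bit(w, group_size=4)
w_hat = dequantize_4bit(qw, sc)
```

The worst-case error per weight is half a scale step, which is why smaller group sizes (e.g. the "128g" in model names) trade a little extra metadata for better accuracy.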
- Loader Types
AutoGPTQ: an attempt at standardizing GPTQ-for-LLaMa and turning it into a library that is easier to install and use, and that supports more models. https://github.com/PanQiWei/AutoGPTQ
- WizardLM-33B-V1.0-Uncensored
- Any help converting an interesting .bin model to 4-bit 128g GPTQ? Bloke?
Just use the script: https://github.com/PanQiWei/AutoGPTQ/blob/main/examples/quantization/quant_with_alpaca.py
- LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
In the wild, people tend to use GPTQ quantization for pure GPU inference: https://github.com/PanQiWei/AutoGPTQ
And ggml's quantization for CPU inference with some GPU offload, which was just updated to a more GPTQ-like method a few days ago: https://github.com/ggerganov/llama.cpp/pull/1684
Some other runtimes like Apache TVM also have their own quant implementations: https://github.com/mlc-ai/mlc-llm
For training, 4-bit bitsandbytes is SOTA, as far as I know.
TBH I'm not sure why this November paper is being linked. Few people are running 8-bit models when they could fit a better 3-5 bit model in the same memory pool.
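The memory argument in that comment is simple back-of-envelope arithmetic: weights-only footprint is roughly parameter count times bits per weight. A quick sketch (ignoring activations, the KV cache, and quantization metadata such as group scales):

```python
def weight_memory_gb(n_params, bits):
    """Approximate weights-only memory footprint in GB.

    Ignores activations, KV cache, and per-group scale metadata,
    so real usage is somewhat higher.
    """
    return n_params * bits / 8 / 1e9

params = 33e9                          # e.g. a 33B-parameter model
int8_gb = weight_memory_gb(params, 8)  # 33.0 GB
int4_gb = weight_memory_gb(params, 4)  # 16.5 GB
```

So in the VRAM an 8-bit 33B model needs, a 4-bit quantization of a model roughly twice the size fits instead.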
- Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
Instead of integrating GPTQ-for-LLaMa, use AutoGPTQ.
- AutoGPTQ - An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm
SpQR
- SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression
- LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
Posted here https://news.ycombinator.com/item?id=36216126 but it got no traction.
The paper is titled "SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression", and https://twitter.com/Tim_Dettmers/status/1666076553665744896 is a nice summary.
Code here: https://github.com/Vahe1994/SpQR (https://news.ycombinator.com/item?id=36219128, also no traction)
- SpQR: Near-Lossless LLM Weight Compression
- Yet another quantization method: SpQR by Tim Dettmers et al.
Github: https://github.com/Vahe1994/SpQR
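The core idea SpQR describes is a sparse-plus-quantized split: a small fraction of "outlier" weights is kept in higher precision while everything else is quantized to 3-4 bits. A toy, simplified sketch of that decomposition (the real method selects outliers by their effect on quantization error, not raw magnitude, and uses small groups with quantized scales — both simplified away here):

```python
def spqr_like_split(weights, bits=3, outlier_threshold=0.5):
    """Toy sparse-plus-quantized decomposition in the spirit of SpQR.

    Weights whose magnitude exceeds a (hypothetical) threshold are kept
    exactly in a sparse dict; the rest get round-to-nearest low-bit
    quantization with a single shared scale.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 3 for 3-bit signed codes
    outliers = {i: w for i, w in enumerate(weights)
                if abs(w) > outlier_threshold}
    dense = [0.0 if i in outliers else w for i, w in enumerate(weights)]
    scale = (max(abs(w) for w in dense) / qmax) or 1.0
    quantized = [max(-qmax - 1, min(qmax, round(w / scale))) for w in dense]
    return quantized, scale, outliers

def spqr_like_reconstruct(quantized, scale, outliers):
    out = [q * scale for q in quantized]
    for i, w in outliers.items():       # restore outliers exactly
        out[i] = w
    return out

w = [0.1, -2.5, 0.04, 0.3, 1.8, -0.2]
q, s, o = spqr_like_split(w)
w_hat = spqr_like_reconstruct(q, s, o)
```

Because the outliers no longer inflate the quantization range, the remaining weights get a much finer scale — which is how the paper claims near-lossless accuracy at under 4.75 bits per parameter on average.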
What are some alternatives?
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
llama.cpp - LLM inference in C/C++
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
self-refine - LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
ray-llm - RayLLM - LLMs on Ray
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
catai - UI for 🦙 models. Run an AI assistant locally ✨
gptq-cuda-api