gptq vs OmniQuant

Compare gptq and OmniQuant and see what their differences are.

gptq

Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". (by IST-DASLab)

OmniQuant

[ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. (by OpenGVLab)
                  gptq                 OmniQuant
Mentions          8                    4
Stars             1,725                572
Stars growth      3.8%                 8.2%
Activity          4.4                  7.7
Last commit       about 2 months ago   about 2 months ago
Language          Python               Python
License           Apache License 2.0   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

gptq

Posts with mentions or reviews of gptq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-12.
  • Do large language models need all those layers?
    1 project | news.ycombinator.com | 15 Dec 2023
    I think it's not that LLMs have redundant layers in general - it's a specific problem with OPT-66B, not anything else.

    A 2022 paper, "Scaling Language Models: Methods, Analysis & Insights from Training Gopher" (http://arxiv.org/abs/2112.11446), captures this well on page 103, Appendix G:

    > The general finding is that whilst compressing models for a particular application has seen success, it is difficult to compress them for the objective of language modelling over a diverse corpus.

    Appendix G explores various techniques like pruning and distillation, but finds that neither method is an efficient way to obtain better loss at a lower number of parameters.

    So why does pruning work for OPT-66B in particular? I'm not sure, but there is evidence that OPT-66B is an outlier: one piece of evidence is in the GPTQ paper ("GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", https://arxiv.org/abs/2210.17323), which mentions in a footnote on page 7:

    > [2] Upon closer inspection of the OPT-66B model, it appears that this is correlated with the fact that this trained

  • 70B Llama 2 at 35tokens/second on 4090
    6 projects | news.ycombinator.com | 12 Sep 2023
    Can anyone provide any additional details on the EXL2[0]/GPTQ[1] quantisation, which seems to be the main reason for a speedup in this model?

    I had a quick look at the paper which is _reasonably_ clear, but if anyone else has any other sources that are easy to understand, or a quick explanation to give more insight into it, I'd appreciate it.

    [0] https://github.com/turboderp/exllamav2#exl2-quantization

    [1] https://arxiv.org/abs/2210.17323

  • OpenAssistant's RLHF Models
    1 project | /r/LocalLLaMA | 2 Jun 2023
    GPTQ is better than GGML quantization because it reoptimizes the weights to compensate for the lost accuracy. With 4-bit and group size 128 it can approximate FP16 performance pretty well. GGML just does round-to-nearest (RTN) without reoptimizing the weights against a calibration dataset (generally the C4 dataset, per the default GPTQ-for-LLaMA configuration). But llama.cpp could probably implement such a method itself; the paper is freely available: https://arxiv.org/abs/2210.17323
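    For reference, here is a minimal sketch of the round-to-nearest baseline described in that comment, using 4 bits and a group size of 128. This is not the GPTQ-for-LLaMA code; GPTQ additionally adjusts the not-yet-quantized weights after each rounding step to compensate for the error against calibration data.

    ```python
    # Minimal round-to-nearest (RTN) group quantization sketch, the baseline GPTQ improves on.
    import numpy as np

    def rtn_quantize(weights: np.ndarray, bits: int = 4, group_size: int = 128) -> np.ndarray:
        """Quantize a 1-D weight slice group-by-group with a per-group scale and zero point."""
        qmax = 2 ** bits - 1
        dequantized = np.empty_like(weights, dtype=np.float32)
        for start in range(0, len(weights), group_size):
            group = weights[start:start + group_size]
            lo, hi = group.min(), group.max()
            scale = (hi - lo) / qmax if hi > lo else 1.0
            q = np.clip(np.round((group - lo) / scale), 0, qmax)    # plain rounding, no error compensation
            dequantized[start:start + group_size] = q * scale + lo  # values inference would actually use
        return dequantized

    w = np.random.randn(4096).astype(np.float32)
    w_rtn = rtn_quantize(w)
    print("mean squared rounding error:", np.mean((w - w_rtn) ** 2))
    ```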
  • The tiny corp raised $5.1M
    3 projects | news.ycombinator.com | 25 May 2023
    When you click on the Stripe link to preorder the tinybox, it is advertised as a box running LLaMA 65B FP16 for $15,000.

    I can run LLaMA 65B GPTQ4b on my $2,300 PC (used parts, dual RTX 3090), and according to the GPTQ paper(§), the quality of the model will not suffer much at all from the quantization.

    (§) https://arxiv.org/abs/2210.17323
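    A rough back-of-the-envelope calculation of why 4-bit GPTQ makes the dual-3090 setup feasible while FP16 does not. This counts weights only; the KV cache and activations add more, so treat these as lower bounds.

    ```python
    # Weight-memory arithmetic: parameter count x bits per parameter / 8 = bytes for the weights alone.
    def weight_gib(params: float, bits: float) -> float:
        return params * bits / 8 / 2**30

    params_65b = 65e9
    print(f"LLaMA 65B @ FP16 : {weight_gib(params_65b, 16):6.1f} GiB")  # far beyond 2x 24 GB of VRAM
    print(f"LLaMA 65B @ 4-bit: {weight_gib(params_65b, 4):6.1f} GiB")   # fits across two RTX 3090s
    ```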

  • Newbie doesn't know what he's doing...
    1 project | /r/Oobabooga | 22 May 2023
  • Seeking clarification about LLM's, Tools, etc.. for developers.
    2 projects | /r/LocalLLaMA | 19 May 2023
    GPTQ is another quantization method that works only for transformer model architectures. It quantizes the stored model weights in a non-linear fashion and ends up with better quality than plain linear quantization into a smaller data type. GPTQ has a Triton and a CUDA branch, which was tricky initially, as it led to a lot of confusion and incompatibility, especially on Windows.
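    For context, here is a heavily simplified sketch of the error-compensation idea from the GPTQ paper, not the repository's actual implementation (which works block-wise with a Cholesky factorization of the inverse Hessian). The idea: quantize one weight column at a time and spread the rounding error over the not-yet-quantized columns, weighted by the inverse Hessian of the layer's calibration activations.

    ```python
    # Simplified GPTQ-style quantization sketch (shapes and details reduced for readability).
    import numpy as np

    def gptq_like_quantize(W, X, bits=4, damp=0.01):
        """W: (out, in) weight matrix, X: (n_samples, in) calibration activations."""
        W = W.astype(np.float64).copy()
        n_in = W.shape[1]
        H = X.T @ X / len(X)                                # proxy for the layer Hessian
        H += damp * np.mean(np.diag(H)) * np.eye(n_in)      # damping for numerical stability
        Hinv = np.linalg.inv(H)

        scale = (W.max() - W.min()) / (2 ** bits - 1)
        zero = W.min()
        Q = np.zeros_like(W)
        for j in range(n_in):                               # greedy, column by column
            q = np.clip(np.round((W[:, j] - zero) / scale), 0, 2 ** bits - 1)
            Q[:, j] = q * scale + zero                      # keep the dequantized value for clarity
            err = (W[:, j] - Q[:, j]) / Hinv[j, j]
            # compensate: push the rounding error onto the columns not yet quantized
            W[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])
        return Q

    W = np.random.randn(8, 64)
    X = np.random.randn(256, 64)
    Q = gptq_like_quantize(W, X)
    print("reconstruction error on calibration data:", np.linalg.norm(X @ (W - Q).T))
    ```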
  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut down the precision quite a bit without losing a lot of accuracy. It seems you can cut down further for larger models. For the 13B Llama-based ones, going below 5 bit per parameter is noticeably worse, but for 30B models you can do 4 bits.

    The same group did another paper https://arxiv.org/abs/2301.00774 which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.
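    To make the "loaded into RAM densely" caveat concrete, here is a small numpy/scipy illustration (hypothetical sizes, not from the SparseGPT code): magnitude-pruning half of a weight matrix does not shrink its dense storage at all, and a generic sparse format only about breaks even at that sparsity level because of index overhead.

    ```python
    # Pruned weights save memory only if stored sparsely, and index overhead can eat the savings.
    import numpy as np
    from scipy.sparse import csr_matrix

    dense = np.random.randn(4096, 4096).astype(np.float32)

    # Magnitude-prune roughly 50% of the weights (keep the largest-magnitude half).
    threshold = np.median(np.abs(dense))
    pruned = np.where(np.abs(dense) >= threshold, dense, 0.0)

    sparse = csr_matrix(pruned)
    dense_mib = pruned.nbytes / 2**20
    sparse_mib = (sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes) / 2**20
    print(f"dense storage : {dense_mib:.1f} MiB")   # unchanged by pruning
    print(f"CSR storage   : {sparse_mib:.1f} MiB")  # roughly the same at 50% sparsity due to indices
    ```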

  • #StandAgainstFloats
    2 projects | /r/ProgrammerHumor | 13 May 2023
    This is the one everybody's using to quantize language models. It includes a link to the paper explaining their algorithm.

OmniQuant

Posts with mentions or reviews of OmniQuant. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-16.
  • Run Mistral 7B on M1 Mac
    6 projects | news.ycombinator.com | 16 Dec 2023
    Not on iOS. On macOS, I personally think WizardLM 13B v1.2 is a very strong model and keep hearing good things about it from users on our Discord and in support emails. Now that there's OmniQuant support for Mixtral models[1], I plan to add support for Mixtral-8x7B-Instruct-v0.1 in the next version of the macOS app, which, in my tests, looks like a very good all-purpose model that's also pretty good at coding. It's pretty memory hungry (~41GB of RAM), but that's the price to pay for an uncompromising implementation. Existing quantized implementations quantize the MoE gates, leading to a significant drop in perplexity when compared with results from fp16 inference.

    [1]: https://github.com/OpenGVLab/OmniQuant/commit/798467
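    A sketch of the design choice that comment describes: quantize the expert weights but leave the small MoE router/gate projections in full precision. The module name "gate" and the toy model below are illustrative assumptions; the real Mixtral module names and the OmniQuant quantizer API may differ.

    ```python
    # Quantize every Linear layer except MoE gate/router projections (illustrative sketch).
    import torch.nn as nn

    def quantize_except_gates(model: nn.Module, quantize_fn):
        """Apply `quantize_fn` to each Linear layer, skipping modules named 'gate'."""
        for name, module in model.named_modules():
            if not isinstance(module, nn.Linear):
                continue
            if name.split(".")[-1] == "gate":   # keep the tiny routing layers in full precision
                continue
            quantize_fn(name, module)

    class ToyMoE(nn.Module):
        def __init__(self):
            super().__init__()
            self.gate = nn.Linear(64, 8)                                  # router: stays fp16/fp32
            self.experts = nn.ModuleList(nn.Linear(64, 64) for _ in range(8))

    model = ToyMoE()
    quantize_except_gates(model, lambda name, m: print("would quantize:", name))
    ```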

  • OmniQuant of Falcon-180B has been released!
    1 project | /r/LocalLLaMA | 15 Sep 2023
  • 70B Llama 2 at 35tokens/second on 4090
    6 projects | news.ycombinator.com | 12 Sep 2023
    I think OmniQuant is notable because it shifts the bend of the curve to 3-bit. While perplexity below 3-bit still ramps up, it stays usable and doesn't go asymptotic: https://github.com/OpenGVLab/OmniQuant/blob/main/imgs/weight...

    What EXL2 seems to bring to the table is that you can target an arbitrary quantization bit-width (e.g., if you're a bit short on VRAM, you don't need to go from 4->3 or 3->2, but can specify, say, 3.75 bpw). You have some control with other schemes by setting group size, or with k-quants, but EXL2 definitely allows you to be finer grained. I haven't gotten a chance to sit down with EXL2 yet, but if no one else does it, it's on my todo list to run 1:1 perplexity and standard benchmark evals on all the various new quantization methods, just as a matter of curiosity.
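    For intuition on the "arbitrary bits per weight" point, here is a tiny sketch of the bookkeeping only: mixing integer bit-widths across weight groups yields a fractional average such as 3.75 bpw. EXL2's actual format and the way it chooses per-group precision (measurement passes against calibration data) differ.

    ```python
    # Fractional average bits-per-weight from mixed integer bit-widths across groups.
    def average_bpw(group_sizes, group_bits):
        total_bits = sum(n * b for n, b in zip(group_sizes, group_bits))
        return total_bits / sum(group_sizes)

    # e.g. three quarters of the groups at 4-bit and one quarter at 3-bit -> 3.75 bpw
    groups = [128] * 100
    bits   = [4] * 75 + [3] * 25
    print(average_bpw(groups, bits))   # 3.75
    ```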

  • OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
    1 project | /r/LocalLLaMA | 30 Aug 2023

What are some alternatives?

When comparing gptq and OmniQuant you can also consider the following projects:

triton - Development repository for the Triton language and compiler

exllamav2 - A fast inference library for running LLMs locally on modern consumer-class GPUs

coriander - Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices

Cgml - GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

llamafile - Distribute and run LLMs with a single file.

llama.cpp - LLM inference in C/C++

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

HIPIFY - HIPIFY: Convert CUDA to Portable C++ Code [Moved to: https://github.com/ROCm/HIPIFY]

openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment

sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".