exllamav2 VS gptq

Compare exllamav2 vs gptq and see how they differ.

exllamav2

A fast inference library for running LLMs locally on modern consumer-class GPUs (by turboderp)

gptq

Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". (by IST-DASLab)
                 exllamav2       gptq
Mentions         17              8
Stars            3,010           1,725
Growth           -               3.8%
Activity         9.8             4.4
Last commit      3 days ago      about 2 months ago
Language         Python          Python
License          MIT License     Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

exllamav2

Posts with mentions or reviews of exllamav2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Running Llama3 Locally
    1 project | news.ycombinator.com | 20 Apr 2024
  • Mixture-of-Depths: Dynamically allocating compute in transformers
    3 projects | news.ycombinator.com | 8 Apr 2024
    There are already some implementations out there which attempt to accomplish this!

    Here's an example: https://github.com/silphendio/sliced_llama

    A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...

    Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275

    And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...

  • What do you use to run your models?
    14 projects | /r/LocalLLaMA | 7 Dec 2023
    Sorry, I'm somewhat familiar with this term (I've seen it as a model loader in Oobabooga), but still not following the correlation here. Are you saying I should instead be using this project in lieu of llama.cpp? Or are you saying that there is, perhaps, an exllamav2 "extension" or similar within llama.cpp that I can use?
  • I just started having problems with the colab again. I get errors and it just stops. Help?
    1 project | /r/SillyTavernAI | 5 Dec 2023
    EDIT: I reported the bug on the exllamav2 GitHub. It's actually already fixed, just not in any currently released build.
  • Yi-34B-200K works on a single 3090 with 47K context/4bpw
    1 project | /r/LocalLLaMA | 8 Nov 2023
    Install exllamav2 from git with pip install git+https://github.com/turboderp/exllamav2.git. Make sure you have Flash Attention 2 as well.
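
    A minimal generation sketch using the exllamav2 Python API (class and method names follow the repo's example scripts and may differ between releases; the model path is a placeholder):

      from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
      from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

      config = ExLlamaV2Config()
      config.model_dir = "/path/to/exl2-quantized-model"  # placeholder path
      config.prepare()

      model = ExLlamaV2(config)
      cache = ExLlamaV2Cache(model, lazy=True)   # KV cache, allocated during the autosplit load
      model.load_autosplit(cache)                # split layers across the available GPUs

      tokenizer = ExLlamaV2Tokenizer(config)
      generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

      settings = ExLlamaV2Sampler.Settings()
      settings.temperature = 0.8

      print(generator.generate_simple("Long-context models are useful because", settings, 100))
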
  • Tested: ExllamaV2's max context on 24gb with 70B low-bpw & speculative sampling performance
    2 projects | /r/LocalLLaMA | 2 Nov 2023
    Recent releases of exllamav2 bring working FP8 cache support, which I've been very excited to test. This feature doubles the maximum context length you can run with your model, without any visible downsides.
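
    For reference, the FP8 cache is enabled by swapping the cache class at load time; a minimal sketch, assuming the ExLlamaV2Cache_8bit class exported by recent exllamav2 releases (the name may change between versions):

      from exllamav2 import ExLlamaV2Cache_8bit

      # Same setup as a regular load, but keys/values are stored in FP8,
      # roughly halving cache memory and so roughly doubling usable context.
      cache = ExLlamaV2Cache_8bit(model, lazy=True)
      model.load_autosplit(cache)
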
  • Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
    9 projects | news.ycombinator.com | 31 Oct 2023
    Without batching, I was actually thinking that's kind of modest.

    ExllamaV2 will get 48 tokens/s on a 4090, which is much slower/cheaper than an H100:

    https://github.com/turboderp/exllamav2#performance

    I didn't test codellama, but the 3090 TI figures are in the ballpark of my generation speed on a 3090.

  • Guide for Llama2 70b model merging and exllama2 quantization
    2 projects | /r/LocalLLaMA | 24 Oct 2023
    First, you need the convert.py script from turboderp's Exllama2 repo. You can read all about the convert.py arguments here.
  • LLM Falcon 180B Needs 720GB RAM to Run
    1 project | news.ycombinator.com | 24 Sep 2023
    > brute aggressive quantization

    Cutting-edge quantization like ExLlama's EXL2 is far from brute force: https://github.com/turboderp/exllamav2#exl2-quantization

    > The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. Moreover, it's possible to apply multiple quantization levels to each linear layer, producing something akin to sparse quantization wherein more important weights (columns) are quantized with more bits. The same remapping trick that lets ExLlama work efficiently with act-order models allows this mixing of formats to happen with little to no impact on performance. Parameter selection is done automatically by quantizing each matrix multiple times, measuring the quantization error (with respect to the chosen calibration data) for each of a number of possible settings, per layer. Finally, a combination is chosen that minimizes the maximum quantization error over the entire model while meeting a target average bitrate.
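
    As a toy illustration of that selection step (not ExLlama's actual code; the layer names and error numbers are made up), think of it as picking one candidate bitrate per layer so that the worst per-layer quantization error is minimized while the average bitrate stays under the target:

      import itertools

      candidate_bits = [2.5, 3.0, 4.0, 5.0, 6.0]
      error = {  # hypothetical per-layer errors measured against calibration data
          "attn.q_proj": {2.5: 0.09, 3.0: 0.05, 4.0: 0.02, 5.0: 0.01, 6.0: 0.005},
          "attn.k_proj": {2.5: 0.07, 3.0: 0.04, 4.0: 0.02, 5.0: 0.01, 6.0: 0.004},
          "mlp.up_proj": {2.5: 0.20, 3.0: 0.12, 4.0: 0.05, 5.0: 0.02, 6.0: 0.010},
      }
      target_avg_bits = 4.0

      best = None
      for combo in itertools.product(candidate_bits, repeat=len(error)):
          if sum(combo) / len(combo) > target_avg_bits:
              continue  # over the average-bitrate budget
          worst = max(errs[bits] for errs, bits in zip(error.values(), combo))
          if best is None or worst < best[0]:
              best = (worst, dict(zip(error, combo)))

      print(best)  # smallest achievable worst-case error and the chosen bits per layer
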

    Llama.cpp is also working on a feature that lets a small model "guess" the output of a big model, which then "checks" it for correctness. This is more of a performance feature, but you could also arrange it to accelerate a big model on a small GPU.
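
    A toy sketch of that draft-and-verify idea (speculative decoding); the model objects and their greedy next-token interface here are hypothetical stand-ins, not llama.cpp's API:

      def speculative_step(big_model, draft_model, prompt_tokens, k=4):
          # 1. The small model "guesses" k tokens cheaply.
          draft, ctx = [], list(prompt_tokens)
          for _ in range(k):
              t = draft_model.argmax_next(ctx)
              draft.append(t)
              ctx.append(t)

          # 2. The big model "checks" the guesses. Real implementations score all k
          #    positions in one batched forward pass; written sequentially here for clarity.
          out = []
          for t in draft:
              verified = big_model.argmax_next(prompt_tokens + out)
              out.append(verified)
              if verified != t:  # first disagreement: keep the big model's token and stop
                  break
          return out

      class Stub:
          """Hypothetical stand-in model: always predicts token 0."""
          def argmax_next(self, tokens):
              return 0

      print(speculative_step(Stub(), Stub(), [1, 2, 3]))  # -> [0, 0, 0, 0]
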

  • 70B Llama 2 at 35tokens/second on 4090
    1 project | /r/patient_hackernews | 14 Sep 2023

gptq

Posts with mentions or reviews of gptq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-12.
  • Do large language models need all those layers?
    1 project | news.ycombinator.com | 15 Dec 2023
    I think it's not that LLMs have redundant layers in general - it's a specific problem with OPT-66B, not anything else.

    A 2022 paper, "Scaling Language Models: Methods, Analysis & Insights from Training Gopher" (http://arxiv.org/abs/2112.11446), captures it well on page 103, Appendix G:

    > The general finding is that whilst compressing models for a particular application has seen success, it is difficult to compress them for the objective of language modelling over a diverse corpus.

    Appendix G explores various techniques like pruning and distillation but finds that neither method is an efficient way to obtain better loss at a lower number of parameters.

    So why does pruning work for OPT-66B in particular? I'm not sure, but there is evidence that OPT-66B is an outlier: one piece of evidence is in the GPTQ paper ("GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", https://arxiv.org/abs/2210.17323), which mentions in a footnote on its 7th page:

    > [2] Upon closer inspection of the OPT-66B model, it appears that this is correlated with the fact that this trained

  • 70B Llama 2 at 35tokens/second on 4090
    6 projects | news.ycombinator.com | 12 Sep 2023
    Can anyone provide any additional details on the EXL2[0]/GPTQ[1] quantisation, which seems to be the main reason for a speedup in this model?

    I had a quick look at the paper which is _reasonably_ clear, but if anyone else has any other sources that are easy to understand, or a quick explanation to give more insight into it, I'd appreciate it.

    [0] https://github.com/turboderp/exllamav2#exl2-quantization

    [1] https://arxiv.org/abs/2210.17323

  • OpenAssistant's RLHF Models
    1 project | /r/LocalLLaMA | 2 Jun 2023
    GPTQ is better than GGML quantization because it reoptimizes the weights to compensate for the lost accuracy. With 4-bit and group size 128 it can approximate FP16 performance pretty well. GGML just does round-to-nearest (RTN) without reoptimizing the weights against some dataset (generally the C4 dataset, as per the default GPTQ-for-LLaMA configuration). But llama.cpp could probably implement such a method itself; the paper is freely available: https://arxiv.org/abs/2210.17323
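
    A toy NumPy sketch of what plain round-to-nearest with a group size of 128 amounts to (illustration only, not GGML's or GPTQ's actual code); GPTQ starts from a similar integer grid but additionally adjusts the remaining weights against calibration data to compensate for each rounding error:

      import numpy as np

      def rtn_quantize(weights, bits=4, group_size=128):
          """Round-to-nearest: one scale/offset per group of 128 weights (toy sketch)."""
          qmax = 2 ** bits - 1
          w = weights.reshape(-1, group_size)
          lo = w.min(axis=1, keepdims=True)
          hi = w.max(axis=1, keepdims=True)
          scale = (hi - lo) / qmax
          q = np.round((w - lo) / scale)                  # integer codes in [0, qmax]
          return (q * scale + lo).reshape(weights.shape)  # dequantized weights

      w = np.random.randn(4096 * 128).astype(np.float32)
      w_q = rtn_quantize(w)
      print("mean abs error:", np.abs(w - w_q).mean())
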
  • The tiny corp raised $5.1M
    3 projects | news.ycombinator.com | 25 May 2023
    When you click on the Stripe link to preorder the tinybox, it is advertised as a box running LLaMA 65B FP16 for $15,000.

    I can run LLaMA 65B GPTQ 4-bit on my $2,300 PC (used parts, dual RTX 3090), and according to the GPTQ paper(§) the quality of the model will not suffer much at all from the quantization; see the rough math after the footnote.

    (§) https://arxiv.org/abs/2210.17323
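
    Rough back-of-the-envelope math behind that claim (weights only; activations and the KV cache are ignored):

      params = 65e9                    # LLaMA 65B

      fp16_gb = params * 2 / 1e9       # 2 bytes per weight  -> ~130 GB
      gptq4_gb = params * 0.5 / 1e9    # 4 bits = 0.5 bytes  -> ~32.5 GB

      dual_3090_vram_gb = 2 * 24
      print(fp16_gb, gptq4_gb, gptq4_gb <= dual_3090_vram_gb)  # 130.0 32.5 True
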

  • Newbie doesn't know what he's doing...
    1 project | /r/Oobabooga | 22 May 2023
  • Seeking clarification about LLM's, Tools, etc.. for developers.
    2 projects | /r/LocalLLaMA | 19 May 2023
    GPTQ is another quantization method that works only for transformer model architectures. It quantizes the stored model weights in a non-linear fashion and ends up with better quality than plain linear quantization into a smaller data type. GPTQ has a triton and a cuda branch, which was tricky initially, as it led to a lot of confusion and incompatibility, especially on Windows.
  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut down the precision quite a bit without losing a lot of accuracy. It seems you can cut down further for larger models. For the 13B Llama-based ones, going below 5 bits per parameter is noticeably worse, but for 30B models you can do 4 bits.

    The same group did another paper https://arxiv.org/abs/2301.00774 which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.

  • #StandAgainstFloats
    2 projects | /r/ProgrammerHumor | 13 May 2023
    This is the one everybody's using to quantize language models. It includes a link to the paper explaining their algorithm.

What are some alternatives?

When comparing exllamav2 and gptq you can also consider the following projects:

llama.cpp - LLM inference in C/C++

OmniQuant - [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

triton - Development repository for the Triton language and compiler

SillyTavern - LLM Frontend for Power Users.

coriander - Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices

ChatGPT-AutoExpert - 🚀🧠💬 Supercharged Custom Instructions for ChatGPT (non-coding) and ChatGPT Advanced Data Analysis (coding).

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

BlockMerge_Gradient - Merge Transformers language models by use of gradient parameters.

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.