coriander VS gptq

Compare coriander vs gptq and see what their differences are.

coriander

Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices (by hughperkins)

gptq

Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". (by IST-DASLab)
                   coriander             gptq
Mentions           3                     8
Stars              832                   1,711
Growth             -                     3.0%
Activity           0.0                   4.4
Latest commit      3 months ago          about 1 month ago
Language           LLVM                  Python
License            Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

coriander

Posts with mentions or reviews of coriander. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-14.

gptq

Posts with mentions or reviews of gptq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-12.
  • Do large language models need all those layers?
    1 project | news.ycombinator.com | 15 Dec 2023
    I think it's not that LLMs have redundant layers in general - it's a problem specific to OPT-66B rather than a general one.

    A 2022 paper, "Scaling Language Models: Methods, Analysis & Insights from Training Gopher" (http://arxiv.org/abs/2112.11446), captures this well on page 103, Appendix G:

    > The general finding is that whilst compressing models for a particular application has seen success, it is difficult to compress them for the objective of language modelling over a diverse corpus.

    Appendix G explores various techniques like pruning and distillation but finds that neither method is an efficient way to obtain better loss at a lower parameter count.

    So why does pruning work for OPT-66B in particular? I'm not sure, but there is evidence that OPT-66B is an outlier: one hint is in the GPTQ paper ("GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", https://arxiv.org/abs/2210.17323), which notes in a footnote on page 7:

    > [2] Upon closer inspection of the OPT-66B model, it appears that this is correlated with the fact that this trained

  • 70B Llama 2 at 35tokens/second on 4090
    6 projects | news.ycombinator.com | 12 Sep 2023
    Can anyone provide any additional details on the EXL2[0]/GPTQ[1] quantisation, which seems to be the main reason for a speedup in this model?

    I had a quick look at the paper which is _reasonably_ clear, but if anyone else has any other sources that are easy to understand, or a quick explanation to give more insight into it, I'd appreciate it.

    [0] https://github.com/turboderp/exllamav2#exl2-quantization

    [1] https://arxiv.org/abs/2210.17323

  • OpenAssistant's RLHF Models
    1 project | /r/LocalLLaMA | 2 Jun 2023
    GPTQ is better than GGML quantization because it reoptimizes the weights to compensate for the lost accuracy. With 4-bit and group size 128 it can approximate the FP16 performance pretty well. GGML just does round-to-nearest (RTN) without reoptimizing the weights against some dataset (generally the C4 dataset, as per the default GPTQ-for-LLaMA configuration); a minimal RTN sketch follows the post list below. But llama.cpp could probably implement such a method itself; the paper is freely available: https://arxiv.org/abs/2210.17323
  • The tiny corp raised $5.1M
    3 projects | news.ycombinator.com | 25 May 2023
    When you click on the Stripe link to preorder the tinybox, it is advertised as a box running LLaMA 65B FP16 for $15,000.

    I can run LLaMA 65B GPTQ4b on my $2,300 PC (used parts, dual RTX 3090), and according to the GPTQ paper (§) the quality of the model will not suffer much at all from the quantization (see the memory arithmetic after the post list).

    (§) https://arxiv.org/abs/2210.17323

  • Newbie doesn't know what he's doing...
    1 project | /r/Oobabooga | 22 May 2023
  • Seeking clarification about LLM's, Tools, etc.. for developers.
    2 projects | /r/LocalLLaMA | 19 May 2023
    GPTQ is another quantization method that works only for transformer model architectures. It quantizes the stored model weights in a non-linear fashion and ends up with better quality than plain linear quantization into a smaller data type. GPTQ has a Triton and a CUDA branch, which was tricky initially, as it led to a lot of confusion and incompatibility, especially on Windows.
  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut down the precision quite a bit without losing much accuracy. It seems you can cut down further for larger models. For the 13B Llama-based ones, going below 5 bits per parameter is noticeably worse, but for 30B models you can do 4 bits.

    The same group did another paper, https://arxiv.org/abs/2301.00774, which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models (a toy pruning sketch follows the post list below).

  • #StandAgainstFloats
    2 projects | /r/ProgrammerHumor | 13 May 2023
    This is the one everybody's using to quantize language models. It includes a link to the paper explaining their algorithm.
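
To make the round-to-nearest (RTN) grouping mentioned in the OpenAssistant comment above concrete, here is a minimal NumPy sketch of naive 4-bit quantization with group size 128. It illustrates plain RTN only, not GPTQ's error-compensation step, and the function name and toy matrix are made up for the example.

    import numpy as np

    def rtn_quantize(weights, bits=4, group_size=128):
        # naive round-to-nearest: one scale and offset per group of 128 weights
        qmax = 2 ** bits - 1                          # 15 integer levels above zero for 4-bit
        w = weights.reshape(-1, group_size)           # split the matrix into groups of 128
        w_min = w.min(axis=1, keepdims=True)
        w_max = w.max(axis=1, keepdims=True)
        scale = (w_max - w_min) / qmax
        scale[scale == 0] = 1.0                       # guard against flat groups
        q = np.clip(np.round((w - w_min) / scale), 0, qmax)   # integer codes 0..15
        return (q * scale + w_min).reshape(weights.shape)     # dequantized view used at inference

    w = np.random.randn(256, 512).astype(np.float32)          # toy "weight matrix"
    print("mean abs RTN error:", np.abs(w - rtn_quantize(w)).mean())

GPTQ differs in that, after quantizing part of a weight matrix, it updates the still-unquantized weights to compensate for the rounding error introduced so far, which is what the comment means by "reoptimizes the weights".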
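
The dual-RTX-3090 claim in the tiny corp comment above comes down to simple memory arithmetic. A rough back-of-the-envelope check (weights only, ignoring activations, the KV cache, and per-group scales/offsets, so real usage is somewhat higher):

    params = 65e9                             # LLaMA 65B parameter count
    gib = 1024 ** 3
    fp16 = params * 2 / gib                   # 2 bytes per parameter
    int4 = params * 0.5 / gib                 # 4 bits = half a byte per parameter
    print(f"FP16 weights : {fp16:.0f} GiB")   # ~121 GiB -> far more than any consumer GPU
    print(f"4-bit weights: {int4:.0f} GiB")   # ~30 GiB  -> fits across two 24 GiB RTX 3090s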
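
The pruning paper mentioned in the "How to run Llama 13B" comment (SparseGPT, https://arxiv.org/abs/2301.00774) prunes in one shot with a much more careful, error-compensating method; the sketch below is only naive magnitude pruning, shown to illustrate what "pruning out parameters entirely" means. Names and shapes are illustrative.

    import numpy as np

    def magnitude_prune(weights, sparsity=0.5):
        # zero out the smallest-magnitude weights; SparseGPT does this far more carefully
        k = int(weights.size * sparsity)
        threshold = np.partition(np.abs(weights).ravel(), k)[k]   # k-th smallest magnitude
        mask = np.abs(weights) >= threshold
        return weights * mask, mask

    w = np.random.randn(256, 512).astype(np.float32)
    w_sparse, mask = magnitude_prune(w)
    print("fraction of weights kept:", mask.mean())   # ~0.5
    # caveat from the comment above: w_sparse is still stored densely,
    # so the zeros only save memory/compute if the runtime exploits the sparsity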

What are some alternatives?

When comparing coriander and gptq you can also consider the following projects:

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

OmniQuant - [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

intel-extension-for-pytorch - A Python package for extending the official PyTorch that makes it easy to obtain performance boosts on Intel platforms

triton - Development repository for the Triton language and compiler

RadeonClockEnforcer - AHK script that forces maximum clocks while important applications are open. Automates OverdriveNTool's clock/voltage switching functionality for GPU and VRAM, with the purpose of enforcing maximum clocks while whitelisted applications are in focus.

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

HIPIFY - HIPIFY: Convert CUDA to Portable C++ Code [Moved to: https://github.com/ROCm/HIPIFY]

llama.cpp - LLM inference in C/C++

sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".