llama VS GPTQ-for-LLaMa

Compare llama vs GPTQ-for-LLaMa and see what their differences are.

llama

Inference code for LLaMA models (by gmorenz)

GPTQ-for-LLaMa

4 bits quantization of LLaMA using GPTQ (by qwopqwop200)
                 llama                                  GPTQ-for-LLaMa
Mentions         3                                      75
Stars            35                                     2,913
Growth           -                                      -
Activity         1.6                                    8.6
Latest commit    about 1 year ago                       9 months ago
Language         Python                                 Python
License          GNU General Public License v3.0 only   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

llama

Posts with mentions or reviews of llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-13.
  • Alpaca- An Instruct Tuned Llama 7B. Responses on par with txt-DaVinci-3. Demo up
    9 projects | news.ycombinator.com | 13 Mar 2023
    > All the magic of "7B LLaMA running on a potato" seems to involve lowering precision down to f16 and then further quantizing to int4.

    LLaMA weights are f16 to begin with; no lowering is necessary to get there.

    You can stream weights from RAM to the GPU pretty efficiently. If you have >= 32 GB of RAM and >= 2 GB of VRAM, my code here should work for you: https://github.com/gmorenz/llama/tree/gpu_offload

    There's probably a cleaner version of it somewhere else. Really you should only need >= 16 GB of RAM, but the (Meta-provided) code that loads the initial weights unnecessarily makes two copies of the weights in RAM simultaneously.
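
    A minimal PyTorch sketch of that idea, not the actual code in the linked branch: keep the fp16 weights in CPU RAM and copy each decoder layer to the GPU right before it runs, so VRAM only ever holds one layer. The class and layer names here are hypothetical.

        import torch
        import torch.nn as nn

        class OffloadedStack(nn.Module):
            """Hypothetical decoder stack whose fp16 weights live in CPU RAM and are
            streamed to the GPU one layer at a time during the forward pass."""

            def __init__(self, layers: nn.ModuleList, device: str = "cuda"):
                super().__init__()
                self.layers = layers.half()   # weights stay resident in CPU RAM
                self.device = device

            @torch.no_grad()
            def forward(self, hidden: torch.Tensor) -> torch.Tensor:
                hidden = hidden.to(self.device).half()
                for layer in self.layers:
                    layer.to(self.device)     # stream this layer's weights RAM -> VRAM
                    hidden = layer(hidden)
                    layer.to("cpu")           # release VRAM before the next layer
                return hidden

    The linked gpu_offload branch presumably handles this more carefully (e.g. overlapping transfers with compute); this is only the shape of the idea.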

  • LLaMA-7B in Pure C++ with full Apple Silicon support
    19 projects | news.ycombinator.com | 10 Mar 2023
    My code for this is very much not high quality, but I have a CPU + GPU + SSD combination: https://github.com/gmorenz/llama/tree/ssd

    Usage instructions in the commit message: https://github.com/facebookresearch/llama/commit/5be06e56056...

    At least with my hardware this runs at "[size of model]/[speed of SSD reads]" tokens per second, which (up to some possible further memory reduction so you can run larger batches at once on the same GPU) is as good as it gets when you need to read the whole model from disk for each token.

    At 125 GB and a 2 GB/s read speed (the largest model, and what I get from my SSD), that's about 60 seconds per token (roughly 1,440 tokens per day), which isn't exactly practical. That's really the issue here: if you need to stream the model from an SSD because you don't have enough RAM, it is just a fundamentally slow process.

    You could probably optimize quite a bit for batch throughput if you're ok with the latency though.
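
    A quick back-of-the-envelope check of the "[size of model]/[speed of SSD reads]" figure using the numbers quoted above, plus the batching point:

        model_size_gb = 125        # largest LLaMA checkpoint, as quoted above
        ssd_read_gb_per_s = 2      # sequential read speed of the commenter's SSD

        seconds_per_token = model_size_gb / ssd_read_gb_per_s   # whole model read once per token
        tokens_per_day = 86_400 / seconds_per_token
        print(f"{seconds_per_token:.1f} s/token, ~{tokens_per_day:.0f} tokens/day")
        # -> 62.5 s/token, ~1382 tokens/day (the comment rounds to 60 s and 1440)

        # Batching: one pass over the weights can serve B sequences at once, so
        # aggregate throughput scales roughly with B while per-sequence latency stays the same.
        batch_size = 8             # hypothetical
        print(f"batch of {batch_size}: ~{tokens_per_day * batch_size:.0f} tokens/day total")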

  • Llama-CPU: Fork of Facebook's LLaMA model to run on CPU
    8 projects | news.ycombinator.com | 7 Mar 2023
    I don't know about this fork specifically, but in general yes absolutely.

    Even without enough RAM, you can stream model weights from disk and run at [size of model / disk read speed] seconds per token.

    I'm doing that on a small GPU with this code, but it should be easy to get it working with the CPU as compute instead (and at least with my disk/CPU, I'm not sure it would even run slower; disk reads would probably still be the bottleneck):

    https://github.com/gmorenz/llama/tree/ssd
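
    A minimal sketch of the same disk-streaming idea with the CPU doing the compute, assuming each decoder layer's weights were saved to a separate file beforehand (the file layout and names are hypothetical, not the ones in the linked branch):

        import glob
        import torch
        import torch.nn as nn

        @torch.no_grad()
        def forward_from_disk(layer: nn.Module, layer_files: list[str], hidden: torch.Tensor) -> torch.Tensor:
            """Run a stack of identical decoder layers on CPU, loading each layer's
            weights from the SSD right before it is used; only one layer's weights
            are ever resident in RAM."""
            for path in layer_files:
                state = torch.load(path, map_location="cpu")   # disk -> RAM for this layer only
                layer.load_state_dict(state)
                hidden = layer(hidden)
            return hidden

        # layer_files = sorted(glob.glob("llama-65b/layer_*.pt"))   # hypothetical layout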

GPTQ-for-LLaMa

Posts with mentions or reviews of GPTQ-for-LLaMa. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-10.
  • [P] Early in 2023 I put in a lot of work on a new machine learning project. Now I'm not sure what to do with it.
    1 project | /r/MachineLearning | 3 Dec 2023
    First I want to make it clear this is not a self-promotion post. I hope many machine learning people come at me with questions or comments about this project. A little background about myself: I did work on the 4-bit quantization of LLaMA using GPTQ (https://github.com/qwopqwop200/GPTQ-for-LLaMa). I've been studying AI in-depth for many years now.
  • GPT-4 Details Leaked
    3 projects | news.ycombinator.com | 10 Jul 2023
    Deploying the 60B version is a challenge though and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa . Then you can improve the inference speed by using https://github.com/turboderp/exllama .

    If you prefer to use an "instruct" model à la ChatGPT (i.e. one that does not need few-shot prompting to output good results), you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
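
    A minimal sketch of the AutoGPTQ route mentioned above, assuming a pre-quantized 4-bit GPTQ checkpoint is already available (the model ID below is a placeholder, not a real repo):

        from transformers import AutoTokenizer
        from auto_gptq import AutoGPTQForCausalLM

        model_id = "someuser/llama-30b-gptq-4bit"   # placeholder 4-bit GPTQ checkpoint

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")  # loads 4-bit weights

        inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda:0")
        print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))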

  • Rambling
    1 project | /r/PygmalionAI | 30 Jun 2023
    I use GPTQ-for-LLaMa (from https://github.com/qwopqwop200/GPTQ-for-LLaMa) and Pygmalion 7B.
  • Now that ExLlama is out with reduced VRAM usage, are there any GPTQ models bigger than 7b which can fit onto an 8GB card?
    2 projects | /r/LocalLLaMA | 29 Jun 2023
    ExLlama is an optimized implementation of GPTQ-for-LLaMa that lets you run 4-bit quantized language models on the GPU at great speeds.
  • GGML – AI at the Edge
    11 projects | news.ycombinator.com | 6 Jun 2023
    With a single NVIDIA 3090 and the fastest-inference branch of GPTQ-for-LLaMa https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-i..., I get a healthy 10-15 tokens per second on the 30B models. IMO GGML is great (and I totally use it), but it's still not as fast as running the models on GPU for now.
  • New quantization method AWQ outperforms GPTQ in 4-bit and 3-bit with 1.45x speedup and works with multimodal LLMs
    4 projects | /r/LocalLLaMA | 2 Jun 2023
    And exactly what Triton version are they comparing against? I just tried the latest version of this, and on my 4090/12900K I get 77 tokens per second for Llama 7B-128g. My own GPTQ CUDA implementation gets 151 tokens/second on the same model, same hardware. That makes it 96% faster, whereas AWQ is only 79% faster. For 30B-128g I'm currently only getting a 110% speedup over Triton compared to their 178%, but it still seems a little disingenuous to compare against their own CUDA implementation only, when they're trying to present the quantization method as being faster for inference.
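
    For clarity, the percentages above are relative to the 77 tokens/s Triton baseline; a quick check of the 7B-128g numbers:

        triton_tps = 77    # GPTQ-for-LLaMa Triton branch, Llama 7B-128g on a 4090 (quoted above)
        cuda_tps = 151     # the commenter's own GPTQ CUDA implementation, same model and hardware

        print(f"{(cuda_tps / triton_tps - 1) * 100:.0f}% faster")   # -> 96% faster
        # The 79% figure for AWQ is, per the comment, measured against the same Triton baseline.
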
  • Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
    9 projects | /r/LocalLLaMA | 1 Jun 2023
    Thanks for the explanation. I think some repos, like text-generation-webui, used GPTQ-for-LLaMa (I don't know if it's this repo or another one); anyway, most repos that I saw use external tools (like GPTQ-for-LLaMa).
  • How to use AMD GPU?
    4 projects | /r/LocalLLaMA | 1 Jun 2023
    cd ../..
    git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b triton
    cd GPTQ-for-LLaMa
    pip install -r requirements.txt
    mkdir -p ../text-generation-webui/repositories
    ln -s ../../GPTQ-for-LLaMa ../text-generation-webui/repositories/GPTQ-for-LLaMa
  • Help needed with installing quant_cuda for the WebUI
    2 projects | /r/LocalLLaMA | 31 May 2023
    cd repositories
    git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
    pip install -r requirements.txt
  • The installed version of bitsandbytes was compiled without GPU support
    2 projects | /r/Oobabooga | 29 May 2023
    # To use the GPTQ models I need to install GPTQ-for-LLaMa and the monkey patch
    mkdir repositories
    cd repositories
    git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b triton
    cd GPTQ-for-LLaMa
    pip install ninja
    pip install -r requirements.txt
    cd
    cd text-generation-webui
    # download random model
    python download-model.py xxx/yyy
    # try to start the gui
    python server.py
    # It returns this warning but it runs
    bin /home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
    /home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
    warn("The installed version of bitsandbytes was compiled without GPU support. ")
    /home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32

What are some alternatives?

When comparing llama and GPTQ-for-LLaMa you can also consider the following projects:

llama.cpp - LLM inference in C/C++

ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

llama-mps - Experimental fork of Facebook's LLaMA model which runs with GPU acceleration on Apple Silicon M1/M2

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.

qlora - QLoRA: Efficient Finetuning of Quantized LLMs

tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]

private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks

llama - Inference code for Llama models

stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI