amx VS GPTQ-for-LLaMa

Compare amx vs GPTQ-for-LLaMa and see what their differences are.

amx

Apple AMX Instruction Set (by corsix)

GPTQ-for-LLaMa

4 bits quantization of LLaMA using GPTQ (by qwopqwop200)

                 amx                  GPTQ-for-LLaMa
Mentions         18                   75
Stars            859                  2,916
Growth           -                    -
Activity         4.1                  8.6
Last commit      2 months ago         9 months ago
Language         C                    Python
License          MIT License          Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

amx

Posts with mentions or reviews of amx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-28.
  • Optimize sgemm on RISC-V platform
    6 projects | news.ycombinator.com | 28 Feb 2024
    I am talking about the matrix/vector coprocessor (AMX). You can find some reverse-engineered documentation here: https://github.com/corsix/amx

    On M3 a single matrix block can achieve ~1 TFLOPS on DGEMM; I assume it will be closer to 4 TFLOPS for SGEMM. The Max variants have two such blocks. I didn't do precise benchmarking myself, but switching Python/R matrix libraries to use Apple's BLAS results in a 5-6x perf improvement on matrix-heavy code for me.
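
    As a rough illustration of where numbers like these come from, here is a minimal Python sketch that checks which BLAS NumPy is linked against and estimates DGEMM throughput; the matrix size is arbitrary and an Accelerate-backed NumPy build is assumed, so results will vary by machine:

    import time
    import numpy as np

    # Look for "accelerate" in the BLAS section of the build info to confirm
    # that matrix products go through Apple's BLAS (and hence the AMX blocks).
    np.show_config()

    n = 4096
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    a @ b  # warm-up
    t0 = time.perf_counter()
    a @ b
    dt = time.perf_counter() - t0

    # A dense n x n matmul performs roughly 2*n^3 floating-point operations.
    print(f"DGEMM: {2 * n**3 / dt / 1e9:.1f} GFLOPS (float64)")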

  • Intel AMX
    4 projects | news.ycombinator.com | 19 Jan 2024
    It's really cool. I hope it becomes more common for training/inference/numerics capable accelerators to be included in consumer hardware.

    Apple's AMX is really under-documented; while the instructions have been reverse engineered, virtually no benchmarks are available comparing current chip generations, models, and variants.

    https://github.com/corsix/amx

  • Why do x86 processors take up so much energy when compared to ARM?
    1 project | /r/hardware | 8 Dec 2023
  • Bfloat16 support coming to Apple's Metal and PyTorch [video]
    1 project | news.ycombinator.com | 3 Jul 2023
    Visible in the unofficial documentation for the AMX instructions too - the bf16 functionality is M2-only - https://github.com/corsix/amx/blob/main/matfp.md
  • LLaMA-7B in Pure C++ with full Apple Silicon support
    19 projects | news.ycombinator.com | 10 Mar 2023
    Confusingly, there are two mechanisms for doing matrix operations on the new Apple hardware - AMX (https://github.com/corsix/amx) - and the ANE (Apple Neural Engine), which is enabled by CoreML. This code does not run on the Neural Engine, but the author has a branch of his whisper.cpp project which uses it here: https://github.com/ggerganov/whisper.cpp/pull/566 - so it may not be long before we see it applied here as well. All of this is to say that it could actually get significantly faster if some of this work were handed to the ANE with CoreML.
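
    For context, handing work to the ANE generally goes through Core ML; here is a minimal, hedged sketch with coremltools, using a small placeholder PyTorch model (the real whisper.cpp Core ML integration is considerably more involved):

    import torch
    import coremltools as ct

    # Placeholder model; in practice this would be the network you want
    # to offload to the Neural Engine.
    model = torch.nn.Linear(512, 512).eval()
    example = torch.randn(1, 512)
    traced = torch.jit.trace(model, example)

    # Ask Core ML to schedule the model on CPU + Neural Engine where possible.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="x", shape=example.shape)],
        compute_units=ct.ComputeUnit.CPU_AND_NE,
    )
    print(mlmodel.predict({"x": example.numpy()}))
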
  • Linux 6.2: The first mainstream Linux kernel for Apple M1 chips arrives
    7 projects | news.ycombinator.com | 20 Feb 2023
    really? seems pretty well documented here: https://github.com/corsix/amx
  • AMX: The Secret Apple M1 Coprocessor
    1 project | /r/apple | 14 Dec 2022
    Article is almost two years old and has a huge correction at the bottom. It's just a proprietary ISA extension; there's even a repo documenting what's been reverse engineered.
  • corsix/amx: Apple AMX Instruction Set
    1 project | /r/programming | 9 Dec 2022
  • Show HN: Port of OpenAI's Whisper model in C/C++
    9 projects | news.ycombinator.com | 6 Dec 2022
    You are correct, in that those are the four

    My understanding is that the AMX is more tightly coupled with the CPU, ultimately being accessible via an instruction set (https://github.com/corsix/amx), and it is useful if you need to do matrix multiplications interleaved with other CPU tasks. A common example would be a VIO loop or something where you want that data in the CPU caches.

    The GPU and Neural Engine are not that – they take some time to set up and initialize. They also can parallelize tasks to a much higher degree. The GPU is more generalizable, because you can write compute shaders to do anything in parallel, but it uses a lot of resources. I'll have to check out the PR to see how exactly the MPS shaders match up with the task at hand, because you could also consider writing Metal compute shaders by hand.

    I know the least about the ANE, but it has specific hardware for running ML models, and you have to process the weights ahead of time to make sure they are in the right format. It can run ML models very efficiently and is the most battery friendly.

  • Ask HN: Are there any undocumented ISA extensions used in Linux systems?
    1 project | news.ycombinator.com | 19 Oct 2022
    If someone were to build a Linux system with proprietary ISA extensions, how would they do it given Linux is open source? Are there any examples of this being done? Would it be possible at all?

    I got inspiration from this (https://github.com/corsix/amx) and I wondered if someone has done it before on a Linux-based system. I understand a userspace library could be created to access those instructions from userspace, but how would they then be implemented in the kernel? Through a proprietary kernel module built using a custom compiler? Or is that not needed at all, and the library could just run on the processor taking advantage of the proprietary extensions?

GPTQ-for-LLaMa

Posts with mentions or reviews of GPTQ-for-LLaMa. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-10.
  • [P] Early in 2023 I put in a lot of work on a new machine learning project. Now I'm not sure what to do with it.
    1 project | /r/MachineLearning | 3 Dec 2023
    First I want to make it clear this is not a self promotion post. I hope many machine learning people come at me with questions or comments about this project. A little background about myself. I did work on the 4 bits quantization of LLaMA using GPTQ. (https://github.com/qwopqwop200/GPTQ-for-LLaMa). I've been studying AI in-depth for many years now.
  • GPT-4 Details Leaked
    3 projects | news.ycombinator.com | 10 Jul 2023
    Deploying the 60B version is a challenge though and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa . Then you can improve the inference speed by using https://github.com/turboderp/exllama .

    If you prefer to use an "instruct" model à la ChatGPT (i.e. that does not need few-shot learning to output good results) you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
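
    As a sketch of what "apply 4-bit quantization" looks like in practice, here is a minimal example of loading an already-quantized GPTQ checkpoint with AutoGPTQ; the model directory is a placeholder and a CUDA GPU is assumed:

    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    model_dir = "path/to/wizard-vicuna-30b-gptq"  # placeholder local checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
    # Loads the 4-bit GPTQ weights and sets up the quantized CUDA kernels.
    model = AutoGPTQForCausalLM.from_quantized(model_dir, device="cuda:0")

    inputs = tokenizer("Explain GPTQ in one sentence.", return_tensors="pt").to("cuda:0")
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))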

  • Rambling
    1 project | /r/PygmalionAI | 30 Jun 2023
    I use gptq-for-llama - from this https://github.com/qwopqwop200/GPTQ-for-LLaMa and Pygmalion 7B.
  • Now that ExLlama is out with reduced VRAM usage, are there any GPTQ models bigger than 7b which can fit onto an 8GB card?
    2 projects | /r/LocalLLaMA | 29 Jun 2023
    exllama is an optimized implementation of GPTQ-for-LLaMa, allowing you to run 4-bit quantized language models with GPU at great speeds.
  • GGML – AI at the Edge
    11 projects | news.ycombinator.com | 6 Jun 2023
    With a single NVIDIA 3090 and the fastest inference branch of GPTQ-for-LLaMa https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-i..., I get a healthy 10-15 tokens per second on the 30B models. IMO GGML is great (and I totally use it), but it's still not as fast as running the models on GPU for now.
  • New quantization method AWQ outperforms GPTQ in 4-bit and 3-bit with 1.45x speedup and works with multimodal LLMs
    4 projects | /r/LocalLLaMA | 2 Jun 2023
    And exactly what Triton version are they comparing against? I just tried the latest version of this, and on my 4090/12900K I get 77 tokens per second for Llama 7B-128g. My own GPTQ CUDA implementation gets 151 tokens/second on the same model, same hardware. That makes it 96% faster, whereas AWQ is only 79% faster. For 30B-128g I'm currently only getting a 110% speedup over Triton compared to their 178%, but it still seems a little disingenuous to compare against their own CUDA implementation only, when they're trying to present the quantization method as being faster for inference.
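
    For reference, tokens-per-second figures like these are usually obtained by timing a fixed-length generation and dividing new tokens by wall-clock time; a generic sketch with Hugging Face transformers (model name and settings are placeholders, not the poster's setup):

    import time
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "huggyllama/llama-7b"  # placeholder; a quantized backend plugs in here
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda:0")

    inputs = tokenizer("The quick brown fox", return_tensors="pt").to("cuda:0")

    torch.cuda.synchronize()
    t0 = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    torch.cuda.synchronize()
    dt = time.perf_counter() - t0

    new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
    print(f"{new_tokens / dt:.1f} tokens/s")
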
  • Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
    9 projects | /r/LocalLLaMA | 1 Jun 2023
    Thanks for the explanation. I think some repos, like text-generation-webui, used GPTQ-for-LLaMa (I don't know if it's this repo or another one); anyway, most repos that I saw use external things (like GPTQ-for-LLaMa).
  • How to use AMD GPU?
    4 projects | /r/LocalLLaMA | 1 Jun 2023
    cd ../..
    git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b triton
    cd GPTQ-for-LLaMa
    pip install -r requirements.txt
    mkdir -p ../text-generation-webui/repositories
    ln -s ../../GPTQ-for-LLaMa ../text-generation-webui/repositories/GPTQ-for-LLaMa
  • Help needed with installing quant_cuda for the WebUI
    2 projects | /r/LocalLLaMA | 31 May 2023
    cd repositories
    git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
    pip install -r requirements.txt
  • The installed version of bitsandbytes was compiled without GPU support
    2 projects | /r/Oobabooga | 29 May 2023
    # To use the GPTQ models I need to Install GPTQ-for-LLaMa and the monkey patch
    mkdir repositories
    cd repositories
    git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b triton
    cd GPTQ-for-LLaMa
    pip install ninja
    pip install -r requirements.txt
    cd
    cd text-generation-webui
    # download random model
    python download-model.py xxx/yyy
    # try to start the gui
    python server.py
    # It returns this warning but it runs
    bin /home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
    /home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
      warn("The installed version of bitsandbytes was compiled without GPU support. "
    /home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
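
    The warning above usually means the environment has no visible CUDA device (or a CPU-only bitsandbytes build); a quick, generic sanity check, assuming PyTorch is installed in the same environment:

    import torch

    # If this prints False, bitsandbytes falls back to its CPU-only library and
    # emits the "compiled without GPU support" warning shown above.
    print(torch.cuda.is_available())
    print(torch.version.cuda)  # CUDA version this PyTorch build targets
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))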

What are some alternatives?

When comparing amx and GPTQ-for-LLaMa you can also consider the following projects:

emacs-pure

llama.cpp - LLM inference in C/C++

whisper.cpp - Port of OpenAI's Whisper model in C/C++

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

qlora - QLoRA: Efficient Finetuning of Quantized LLMs

llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2

private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks

amx-rs - Rust wrapper for Apple Matrix Coprocessor (AMX) instructions

stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI