GPTQ-triton VS llama.cpp

Compare GPTQ-triton vs llama.cpp and see what their differences are.

GPTQ-triton

GPTQ inference Triton kernel (by fpgaminer)

llama.cpp

LLM inference in C/C++ (by ggerganov)
                  GPTQ-triton          llama.cpp
Mentions          1                    782
Stars             262                  58,425
Growth            -                    -
Activity          4.3                  10.0
Latest commit     about 1 year ago     5 days ago
Language          Jupyter Notebook     C++
License           Apache License 2.0   MIT License
Mentions - the total number of mentions we've tracked, plus user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

GPTQ-triton

Posts with mentions or reviews of GPTQ-triton. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-09.
  • The LLaMA Effect: Leak Sparked a Series of Open Source Alternatives to ChatGPT
    9 projects | news.ycombinator.com | 9 Apr 2023
    Slightly tangential, but I had intended to start playing around with LLaMA and building some agents. I got the 4-bit versions up and running on my 3090 before I was quickly nerd sniped by a performance problem...

    The popular repo for quantizing and running LLaMA is the GPTQ-for-LLaMa repo on GitHub, which mostly copies code from the GPTQ authors. The CUDA kernels are needed to support the specific kind of quantization that GPTQ does.
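
    (For concreteness, here is roughly what that weight layout involves; a minimal sketch, not the repo's actual code, with GPTQ's group-wise scales and zero-points collapsed to single per-tensor values for brevity:)

        import numpy as np

        # Eight 4-bit quantized weights are packed into one 32-bit word.
        # A stock matmul can't read this layout, hence the custom kernels.
        def pack_int4(q):  # q: uint8 values in [0, 15], length divisible by 8
            words = np.zeros(len(q) // 8, dtype=np.uint32)
            for i, v in enumerate(q.astype(np.uint32)):
                words[i // 8] |= v << (4 * (i % 8))
            return words

        # Unpack with shifts and masks, then dequantize: w = scale * (q - zero).
        def unpack_dequant(words, scale, zero):
            idx = np.arange(len(words) * 8)
            q = (words[idx // 8] >> (4 * (idx % 8))) & 0xF
            return scale * (q.astype(np.float32) - zero)

        q = np.random.randint(0, 16, size=16, dtype=np.uint8)
        w = unpack_dequant(pack_int4(q), scale=0.05, zero=8)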

    Problem is, while those CUDA kernels are great at short prompt lengths, they fall apart at long prompt lengths. You could see people complaining about this, seeing their inference speeds slowly tanking as their chats/prompts/etc got longer.

    So off I went, spending the last week or so re-writing the kernels in Triton. I've now got my kernels running faster than the CUDA kernels at all sizes [0]. And I'm busily optimizing and fusing other areas. The latest MLP fusion kernels gave another couple-percent boost in performance.
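
    (Again for illustration only: a stripped-down Triton dequantization kernel in the spirit of that rewrite. The real GPTQ-triton kernels fuse this into the matmul and use group-wise scales/zeros; the kernel name and per-tensor scale/zero here are simplifications. Note the shift-and-mask emulation of int4 unpacking, which is exactly the workaround mentioned below:)

        import torch
        import triton
        import triton.language as tl

        @triton.jit
        def dequant4_kernel(qw_ptr, out_ptr, scale, zero, n, BLOCK: tl.constexpr):
            pid = tl.program_id(axis=0)
            offs = pid * BLOCK + tl.arange(0, BLOCK)  # output element indices
            mask = offs < n
            # No native int4 unpack in Triton: each int32 word holds 8 nibbles,
            # and element i is nibble (i % 8) of word (i // 8), so neighbouring
            # elements re-load the same 32-bit word (the extra-bandwidth hack).
            word = tl.load(qw_ptr + offs // 8, mask=mask, other=0)
            q = (word >> ((offs % 8) * 4)) & 0xF
            tl.store(out_ptr + offs, (q.to(tl.float32) - zero) * scale, mask=mask)

        n = 4096
        qw = torch.randint(0, 2**31, (n // 8,), dtype=torch.int32, device="cuda")
        out = torch.empty(n, dtype=torch.float32, device="cuda")
        dequant4_kernel[(triton.cdiv(n, 1024),)](qw, out, 0.05, 8.0, n, BLOCK=1024)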

    Yet I still haven't actually played with LLaMA and made those agents I wanted... sigh. And now I'm debating diving into the Triton source code, because they removed integer unpacking instructions during one of their recent rewrites. So I had to use a hack in my kernels which causes them to use more bandwidth than they otherwise should. Think of the performance the kernels could have with those instructions back! ... (someone please stop me...)

    [0] https://github.com/fpgaminer/GPTQ-triton/

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-07.

What are some alternatives?

When comparing GPTQ-triton and llama.cpp you can also consider the following projects:

llama - Inference code for Llama models

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

dmca - Repository with text of DMCA takedown notices as received. GitHub does not endorse or adopt any assertion contained in the following notices. Users identified in the notices are presumed innocent until proven guilty. Additional information about our DMCA policy can be found at

gpt4all - gpt4all: run open-source LLMs anywhere

alpaca-lora - Instruct-tune LLaMA on consumer hardware

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

ggml - Tensor library for machine learning

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM

rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧