GPTQ-triton

GPTQ inference Triton kernel (by fpgaminer)

GPTQ-triton Alternatives

Similar projects and alternatives to GPTQ-triton

  • llama.cpp

    LLM inference in C/C++

  • llama

    Inference code for Llama models

  • dmca

    184 GPTQ-triton VS dmca

    Repository with text of DMCA takedown notices as received. GitHub does not endorse or adopt any assertion contained in the following notices. Users identified in the notices are presumed innocent until proven guilty.

  • alpaca-lora

    107 GPTQ-triton VS alpaca-lora

    Instruct-tune LLaMA on consumer hardware

  • FastChat

    An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives, so a higher number indicates a better or more similar GPTQ-triton alternative.

GPTQ-triton reviews and mentions

Posts with mentions or reviews of GPTQ-triton. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-09.
  • The LLaMA Effect: Leak Sparked a Series of Open Source Alternatives to ChatGPT
    9 projects | news.ycombinator.com | 9 Apr 2023
    Slightly tangential, but I had intended to start playing around with LLaMA and building some agents. I got the 4-bit versions up and running on my 3090 before I was quickly nerd-sniped by a performance problem...

    The popular repo for quantizing and running LLaMA is the GPTQ-for-LLaMa repo on GitHub, which mostly copies from the GPTQ authors. The CUDA kernels are needed to support the specific kind of 4-bit quantization that GPTQ does (a sketch of that unpacking follows this quote).

    Problem is, while those CUDA kernels are great at short prompt lengths, they fall apart at long prompt lengths. You could see people complaining about this, seeing their inference speeds slowly tanking as their chats/prompts/etc got longer.

    So off I went, spending the last week or so rewriting the kernels in Triton. I've now got my kernels running faster than the CUDA kernels at all sizes [0]. And I'm busily optimizing and fusing other areas. The latest MLP fusion kernels gave another couple of percentage points of performance (a simplified fusion sketch also follows this quote).

    Yet I still haven't actually played with LLaMA and made those agents I wanted... *sigh*. And now I'm debating diving into the Triton source code, because they removed integer unpacking instructions during one of their recent rewrites. So I had to use a hack in my kernels which causes them to use more bandwidth than they otherwise should. Think of the performance they could have with those! ... (someone please stop me...)

    [0] https://github.com/fpgaminer/GPTQ-triton/
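
The comment above hinges on two kernel-level details. First, the quantization format: GPTQ-style 4-bit weights are typically packed eight to an int32, with group-wise scales and zero points, and a kernel must shift and mask the nibbles back out before applying the affine dequantization. The following is a minimal, illustrative Triton sketch of just that unpack-and-dequantize step, under assumed simplifications: a flat weight layout, a hypothetical kernel name (dequant4_kernel), and a fixed group size. It is not the repo's actual code, which fuses this work into its quantized matmul kernels.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def dequant4_kernel(qweight_ptr, scales_ptr, zeros_ptr, out_ptr,
                    n_elements, GROUP: tl.constexpr, BLOCK: tl.constexpr):
    # Assumed simplified layout: eight 4-bit weights per int32,
    # one (scale, zero) pair per GROUP consecutive weights.
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements

    # Fetch the int32 word holding this weight, then shift/mask out the nibble.
    word = tl.load(qweight_ptr + offs // 8, mask=mask, other=0)
    q = (word >> ((offs % 8) * 4)) & 0xF

    # Group-wise affine dequantization: w = (q - zero) * scale.
    g = offs // GROUP
    scale = tl.load(scales_ptr + g, mask=mask, other=1.0)
    zero = tl.load(zeros_ptr + g, mask=mask, other=0.0)
    tl.store(out_ptr + offs, (q.to(tl.float32) - zero) * scale, mask=mask)


# Usage sketch on random data.
n = 4096 * 4096
qweight = torch.randint(0, 2**31 - 1, (n // 8,), dtype=torch.int32, device="cuda")
scales = torch.rand(n // 128, device="cuda")
zeros = torch.full((n // 128,), 8.0, device="cuda")
out = torch.empty(n, device="cuda")
dequant4_kernel[(triton.cdiv(n, 1024),)](qweight, scales, zeros, out, n,
                                         GROUP=128, BLOCK=1024)
```

This shift-and-mask recovery is the "integer unpacking" the comment laments: without dedicated unpack instructions, kernels fall back on workarounds that can cost extra bandwidth.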
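Second, the MLP fusion: LLaMA's feed-forward computes down(SiLU(gate(x)) * up(x)), and the elementwise SiLU-and-multiply between the matmuls is memory-bound, so fusing it into one pass saves round-trips through global memory. Here is a hedged sketch of that elementwise fusion alone (the repo's actual fused kernels fold in the quantized matmuls as well):

```python
import triton
import triton.language as tl

@triton.jit
def silu_mul_kernel(gate_ptr, up_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    # Fused SiLU(gate) * up: one read of each input and one write, rather than
    # materializing SiLU(gate) in global memory and re-reading it to multiply.
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    g = tl.load(gate_ptr + offs, mask=mask, other=0.0)
    u = tl.load(up_ptr + offs, mask=mask, other=0.0)
    tl.store(out_ptr + offs, g * tl.sigmoid(g) * u, mask=mask)
```

Since each element is used exactly once, the fused version is limited only by the bandwidth of its inputs and output, which is where the "couple of percentage points" the comment mentions can come from.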

Stats

Basic GPTQ-triton repo stats
  • Mentions: 1
  • Stars: 258
  • Activity: 4.3
  • Last commit: 12 months ago

fpgaminer/GPTQ-triton is an open-source project licensed under the Apache License 2.0, an OSI-approved license.

The primary programming language of GPTQ-triton is Jupyter Notebook.

