GPTQ-triton vs llama.cpp

| | GPTQ-triton | llama.cpp |
|---|---|---|
| Mentions | 1 | 782 |
| Stars | 262 | 58,425 |
| Growth | - | - |
| Activity | 4.3 | 10.0 |
| Latest commit | about 1 year ago | 5 days ago |
| Language | Jupyter Notebook | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
GPTQ-triton
- The LLaMA Effect: Leak Sparked a Series of Open Source Alternatives to ChatGPT
Slightly tangential, but I had intended to start playing around with LLaMA and building some agents. I got the 4-bit versions up and running on my 3090 before I was quickly nerd sniped by a performance problem...
The popular repo for quantizing and running LLaMA is GPTQ-for-LLaMa on GitHub, which is mostly copied from the GPTQ authors' code. Custom CUDA kernels are needed to support the specific kind of quantization that GPTQ does.
The problem is that while those CUDA kernels are great at short prompt lengths, they fall apart at long ones. You could see people complaining about this, watching their inference speeds slowly tank as their chats and prompts got longer.
So off I went, spending the last week or so rewriting the kernels in Triton. I've now got my kernels running faster than the CUDA kernels at all sizes [0], and I'm busily optimizing and fusing other areas; the latest MLP fusion kernels gave another couple-percent boost in performance.
Yet I still haven't actually played with LLaMA and made those agents I wanted... *sigh* And now I'm debating diving into the Triton source code, because they removed integer unpacking instructions during one of their recent rewrites, so I had to use a hack in my kernels that makes them use more bandwidth than they should. Think of the performance they could have with those! ... (someone please stop me...)
[0] https://github.com/fpgaminer/GPTQ-triton/
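For context on the "integer unpacking" the comment alludes to: GPTQ-style kernels keep weights as 4-bit integers packed eight-to-an-int32 and must unpack and dequantize them on the fly inside the matmul. Below is a minimal numpy sketch of that pack/unpack step; it is illustrative only, not the repo's actual Triton or CUDA code, and the function names are made up for the example.

```python
import numpy as np

# Illustrative sketch (not GPTQ-triton's actual kernels): GPTQ stores eight
# 4-bit weights packed into each 32-bit word. A matmul kernel must unpack
# them and apply the per-group scale/zero-point before multiplying.

def pack_int4(q):
    """Pack an array of 4-bit values (uint8, length divisible by 8) into uint32."""
    q = q.reshape(-1, 8).astype(np.uint32)
    packed = np.zeros(q.shape[0], dtype=np.uint32)
    for i in range(8):
        packed |= q[:, i] << (4 * i)   # slot value i into bits [4i, 4i+4)
    return packed

def unpack_int4(packed):
    """Recover the eight 4-bit values from each uint32 word."""
    out = np.empty((packed.shape[0], 8), dtype=np.uint8)
    for i in range(8):
        out[:, i] = (packed >> (4 * i)) & 0xF
    return out.reshape(-1)

def dequantize(packed, scale, zero):
    # w = scale * (q - zero): the affine mapping GPTQ-style kernels invert.
    return scale * (unpack_int4(packed).astype(np.float32) - zero)

q = np.random.randint(0, 16, size=16, dtype=np.uint8)
assert np.array_equal(unpack_int4(pack_int4(q)), q)  # round-trip check
```

The packing is what makes the format bandwidth-friendly; if the compiler offers no native unpack instructions, a kernel has to emulate the shifts and masks, which is presumably where the extra bandwidth mentioned above comes from.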
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile things yourself, then looking at llama.cpp (what Ollama uses under the hood) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face.
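As a quick illustration, here is a minimal way to call the example server once it is running (e.g. started against a local GGUF on its default port). The `/completion` endpoint and field names follow the server's documented JSON API, but treat the details as assumptions to check against your version:

```python
import json
import urllib.request

# Minimal sketch of querying a locally running llama.cpp example server.
# Assumes the server was started with defaults, listening on port 8080.
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps({
        "prompt": "Building a website can be done in",
        "n_predict": 64,            # cap on the number of generated tokens
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])  # the generated completion
```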
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for a single process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to lift the limit, but that comes with the usual system-integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
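To make those caps concrete, a back-of-the-envelope fit check; the 1/2 and 3/4 fractions come from the comment above, while the machine and model sizes are hypothetical examples:

```python
# Rough check: does a GGUF fit under the Metal caps described above?
ram_gb = 64                        # hypothetical Mac with 64 GB unified memory
per_process_cap = ram_gb * 1 / 2   # ~1/2 of RAM for a single process -> 32 GB
gpu_overall_cap = ram_gb * 3 / 4   # ~3/4 of RAM for the GPU overall -> 48 GB
model_gb = 39                      # e.g. a 70B model at ~4.5 bits per weight

fits = model_gb <= min(per_process_cap, gpu_overall_cap)
print(fits)  # False: 39 GB exceeds the 32 GB per-process cap
```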
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
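The core trick in LoRA is small enough to sketch. Here is a hedged numpy illustration (not the tutorial's code; the shapes and zero-init of B are typical choices so that training starts from the unmodified base model):

```python
import numpy as np

# Minimal LoRA sketch: instead of updating the full weight W, train a
# low-rank pair (A, B) so the adapted layer computes x @ (W + B @ A),
# with rank r much smaller than the layer dimensions.
d_in, d_out, r = 4096, 4096, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_in, d_out)) * 0.02  # frozen base weight
A = rng.standard_normal((r, d_out)) * 0.01     # trainable, small random init
B = np.zeros((d_in, r))                        # trainable, zero init: no change at step 0

x = rng.standard_normal((1, d_in))
y = x @ W + (x @ B) @ A  # base output plus the low-rank correction

# Trainable parameters drop from d_in*d_out (~16.8M) to r*(d_in+d_out) (~65K),
# which is why LoRA fine-tuning fits on consumer hardware.
```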
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, see https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
llama - Inference code for Llama models
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
dmca - Repository with text of DMCA takedown notices as received. GitHub does not endorse or adopt any assertion contained in the notices, and users identified in them are presumed innocent until proven guilty.
gpt4all - gpt4all: run open-source LLMs anywhere
alpaca-lora - Instruct-tune LLaMA on consumer hardware
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧