triton vs gptq
| | triton | gptq |
| --- | --- | --- |
| Mentions | 30 | 8 |
| Stars | 10,981 | 1,692 |
| Stars growth | 7.9% | 5.7% |
| Activity | 9.9 | 4.4 |
| Latest commit | 3 days ago | about 1 month ago |
| Language | C++ | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
triton
- OpenAI Triton: language and compiler for highly efficient Deep-Learning primitives
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
There's a ton of cool opportunity in the runtime layer. I've been keeping my eye on the compiler-based approaches. From what I've gathered, many of the larger "production" inference tools use compilers:
- https://github.com/openai/triton
- Core Functionality for AMD #1983
- Project name easily confused with Nvidia triton
- Nvidia's CUDA Monopoly
Does anyone have more inside knowledge from OpenAI or AMD on AMDGPU support for Triton?
I see this:
https://github.com/openai/triton/issues/1073
But it's not clear to me whether we will see AMD GPUs as first-class citizens for PyTorch in the future.
- @soumithchintala (co-founded and leads @PyTorch at Meta) on Twitter: I'm fairly puzzled by $NVDA skyrocketing... (cont.)
- The tiny corp raised $5.1M
I thought this was a good overview of the idea that Triton can circumvent the CUDA moat: https://www.semianalysis.com/p/nvidiaopenaitritonpytorch
It also looks like they added an MLIR backend to Triton, though I wonder if Mojo has advantages since it was built on MLIR? https://github.com/openai/triton/pull/1004
- Anyone hosting a local LLM server
I'm pretty happy with the setup, because it allows me to keep all the AI stuff and its dozens of conda envs and repos etc. separate from my normal setup and "portable". It may have some performance impact (although I don't personally notice any significant difference to running it "natively" on Windows), and it may enable some extra functionality, such as access to OpenAI's Triton etc., but that's currently neither here nor there.
- Triton: Runtime for highly efficient custom Deep-Learning primitives
- Mojo – a new programming language for all AI developers
Very cool development. There is too much busy work going from development to test to production. This will help to unify everything. OpenAI Triton https://github.com/openai/triton/ is going for a similar goal. But this is a more fundamental approach.
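For a sense of what writing against the triton project compared here actually looks like, below is roughly the vector-add example from its tutorials: the kernel is written in Python, and the compiler lowers it to GPU code. Treat it as a sketch; exact API details vary between Triton versions, and it needs a CUDA-capable GPU to run.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(98432, device="cuda")
y = torch.rand(98432, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)   # number of program instances to launch
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```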
gptq
- Do large language models need all those layers?
I think it's not that LLMs have redundant layers in general - it's a specific problem with OPT-66B, not anything else.
A 2022 paper, "Scaling Language Models: Methods, Analysis & Insights from Training Gopher" (http://arxiv.org/abs/2112.11446), captured it well on page 103, Appendix G:
> The general finding is that whilst compressing models for a particular application has seen success, it is difficult to compress them for the objective of language modelling over a diverse corpus.
Appendix G explores various techniques like pruning and distillation, but found that neither method was an efficient way to obtain better loss at a lower number of parameters.
So why does pruning work for OPT-66B in particular? I'm not sure, but there is evidence that OPT-66B is an outlier: one piece of evidence is a footnote on page 7 of the GPTQ paper ("GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", https://arxiv.org/abs/2210.17323):
> [2] Upon closer inspection of the OPT-66B model, it appears that this is correlated with the fact that this trained
- 70B Llama 2 at 35 tokens/second on 4090
Can anyone provide any additional details on the EXL2[0]/GPTQ[1] quantisation, which seems to be the main reason for a speedup in this model?
I had a quick look at the paper, which is _reasonably_ clear, but if anyone else has any other sources that are easy to understand, or a quick explanation to give more insight into it, I'd appreciate it.
[0] https://github.com/turboderp/exllamav2#exl2-quantization
[1] https://arxiv.org/abs/2210.17323
- OpenAssistant's RLHF Models
GPTQ is better than GGML quantization because it re-optimizes the weights to compensate for the lost accuracy. With 4-bit and group size 128 it can approximate the FP16 performance pretty well. GGML just does round-to-nearest (RTN) without re-optimizing the weights against some dataset (generally the C4 dataset, as per the default GPTQ-for-LLaMA configuration). But llama.cpp could probably implement such a method themselves; the paper is freely available: https://arxiv.org/abs/2210.17323
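For intuition, here is a minimal sketch of the round-to-nearest baseline described above. The helper name is made up for illustration; real GGML/GPTQ code stores the integers and per-group scales in packed form, and GPTQ additionally re-optimizes the not-yet-quantized weights against a calibration set such as C4 instead of rounding each group in isolation.

```python
# Minimal round-to-nearest (RTN) 4-bit quantization with group size 128.
# GPTQ goes further: it adjusts the remaining weights after each column is
# quantized to compensate for the rounding error (https://arxiv.org/abs/2210.17323).
import numpy as np

def rtn_quantize(weights: np.ndarray, bits: int = 4, group_size: int = 128) -> np.ndarray:
    """Quantize a 1-D weight vector group by group with asymmetric min/max scaling."""
    levels = 2 ** bits - 1
    quantized = np.empty_like(weights)
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        lo, hi = group.min(), group.max()
        scale = (hi - lo) / levels if hi > lo else 1.0
        # Round each weight to the nearest of the 16 representable levels.
        q = np.clip(np.round((group - lo) / scale), 0, levels)
        quantized[start:start + group_size] = q * scale + lo   # dequantize back to float
    return quantized

w = np.random.randn(4096).astype(np.float32)
w_q = rtn_quantize(w)
print("mean abs rounding error:", np.abs(w - w_q).mean())
```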
- The tiny corp raised $5.1M
When you click on the Stripe link to preorder the tinybox, it is advertised as a box running LLaMA 65B FP16 for $15,000.
I can run LLaMA 65B GPTQ 4-bit on my $2,300 PC (used parts, dual RTX 3090), and according to the GPTQ paper (§) the quality of the model will not suffer much at all from the quantization.
(§) https://arxiv.org/abs/2210.17323
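The weight-memory arithmetic behind that comparison is simple enough to sanity-check (rough numbers, weights only, ignoring activations, KV cache, and per-group quantization metadata):

```python
# Rough weight-memory estimate for a 65B-parameter model.
params = 65e9
gib = 1024 ** 3
print(f"FP16 : {params * 2 / gib:.0f} GiB")    # ~121 GiB -> far beyond two consumer GPUs
print(f"4-bit: {params * 0.5 / gib:.0f} GiB")  # ~30 GiB  -> fits across 2x 24 GB RTX 3090
```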
- Newbie doesn't know what he's doing...
- Seeking clarification about LLMs, tools, etc. for developers
GPTQ is another quantization method that works only for transformer model architectures. It quantizes the stored model weights in a non-linear fashion and ends up with better quality than plain linear quantization into a smaller data type. GPTQ has a Triton and a CUDA branch, which was tricky initially, as it led to a lot of confusion and incompatibility, especially on Windows.
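To make the storage side of that concrete, here is a rough sketch of how 4-bit GPTQ-style checkpoints are commonly laid out: eight 4-bit integers packed into one 32-bit word, plus a per-group scale and zero point used to dequantize on the fly. The exact layout differs between the Triton and CUDA branches and across GPTQ-for-LLaMA/AutoGPTQ versions, so take this purely as an illustration, not any repo's exact format.

```python
# Illustrative 4-bit packing and dequantization (assumed layout, not a real checkpoint format).
import numpy as np

def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack an array of 4-bit values (0..15) into uint32 words, 8 values per word."""
    q = q.reshape(-1, 8).astype(np.uint32)
    packed = np.zeros(q.shape[0], dtype=np.uint32)
    for i in range(8):
        packed |= q[:, i] << np.uint32(4 * i)
    return packed

def unpack_and_dequantize(packed: np.ndarray, scale: float, zero: int) -> np.ndarray:
    """Recover floats for one group: w ≈ scale * (q - zero)."""
    q = np.stack([(packed >> np.uint32(4 * i)) & np.uint32(0xF) for i in range(8)], axis=1)
    return scale * (q.reshape(-1).astype(np.float32) - zero)

q = np.random.randint(0, 16, size=128)          # one group of already-quantized weights
w = unpack_and_dequantize(pack_int4(q), scale=0.01, zero=8)
```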
- How to run Llama 13B with a 6GB graphics card
Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, GPTQ (https://arxiv.org/abs/2210.17323) showed that you can cut down the precision quite a bit without losing a lot of accuracy. It seems you can cut down further for larger models: for the 13B Llama-based ones, going below 5 bits per parameter is noticeably worse, but for 30B models you can do 4 bits.
The same group did another paper, SparseGPT (https://arxiv.org/abs/2301.00774), which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely, as sketched below. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.
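For intuition on the pruning side, the sketch below does plain magnitude pruning; SparseGPT itself chooses which weights to drop and updates the survivors using second-order information, but the storage caveat in the comment applies either way: the zeros only help if the matrices are actually stored and multiplied in a sparse format.

```python
# Crude magnitude pruning for intuition: zero out the smallest-magnitude weights.
# SparseGPT (https://arxiv.org/abs/2301.00774) uses a Hessian-aware procedure instead.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Return a copy of `weights` with the smallest `sparsity` fraction set to zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

w = np.random.randn(4096, 4096).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.5)
print("fraction zeroed:", (w_sparse == 0).mean())
```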
- #StandAgainstFloats
This is the one everybody's using to quantize language models. It includes a link to the paper explaining their algorithm.
What are some alternatives?
cuda-python - CUDA Python Low-level Bindings
OmniQuant - [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
Halide - a language for fast, portable data-parallel computation
coriander - Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices
GPU-Puzzles - Solve puzzles. Learn CUDA.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
llama.cpp - LLM inference in C/C++
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
cutlass - CUDA Templates for Linear Algebra Subroutines
HIPIFY - HIPIFY: Convert CUDA to Portable C++ Code [Moved to: https://github.com/ROCm/HIPIFY]