triton vs cuda-python

| | triton | cuda-python |
|---|---|---|
| Mentions | 30 | 2 |
| Stars | 10,575 | 731 |
| Stars growth | 8.0% | 4.8% |
| Activity | 9.9 | 5.1 |
| Latest commit | about 13 hours ago | 18 days ago |
| Language | C++ | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
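The site does not publish the exact activity formula, only that recent commits weigh more than older ones. A minimal sketch of one way such a score could work, assuming simple exponential-decay weighting (the function name, `half_life` parameter, and decay model are all hypothetical):

```python
def activity_score(commit_ages_days, half_life=30.0):
    """Sum exponentially decayed weights over commits.

    Each commit contributes a weight that halves every `half_life`
    days, so a commit from today counts more than one from last month.
    This is an illustrative model, not the site's actual formula.
    """
    return sum(0.5 ** (age / half_life) for age in commit_ages_days)
```

Under this model, a commit made today contributes 1.0, a commit 30 days old contributes 0.5, and very old commits contribute almost nothing, which matches the stated behavior of weighting recent work more heavily.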
triton
-
Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
There's a ton of cool opportunity in the runtime layer. I've been keeping my eye on the compiler-based approaches. From what I've gathered, many of the larger "production" inference tools use compilers:
-
Nvidia's CUDA Monopoly
Does anyone have more inside knowledge from OpenAI or AMD on AMDGPU support for Triton?
I see this:
https://github.com/openai/triton/issues/1073
But it's not clear to me whether we will see AMD GPUs become first-class citizens for PyTorch in the future.
-
The tiny corp raised $5.1M
I thought this was a good overview of the idea Triton can circumvent the CUDA moat: https://www.semianalysis.com/p/nvidiaopenaitritonpytorch
It also looks like they added MLIR backend to Triton though I wonder if Mojo has advantages since it was built on MLIR? https://github.com/openai/triton/pull/1004
Yeah, also see AMD engineers working on Triton support here: https://github.com/openai/triton/issues/46
-
Anyone hosting a local LLM server
I'm pretty happy with the setup, because it allows me to keep all the AI stuff and its dozens of conda envs, repos, etc. separate from my normal setup and "portable". It may have some performance impact (although I don't personally notice any significant difference from running it "natively" on Windows), and it may enable some extra functionality, such as access to OpenAI's Triton, etc., but that's currently neither here nor there.
-
Mojo – a new programming language for all AI developers
Very cool development. There is too much busy work going from development to test to production. This will help to unify everything. OpenAI Triton https://github.com/openai/triton/ is going for a similar goal. But this is a more fundamental approach.
-
ChatGDB, the GPT-Powered GDB Assistant
https://github.com/openai/triton/pull/1358#issue-1628393794
>One fun thing - after tracking down the code to the block of C++ code, ChatGPT-4 is what actually found the memory leak for me :)
although I can't imagine what the prompt was.
-
Pytorch 2.0 released
Doesn't look like it, not yet. From their GitHub:
-
AMD's AI Chief: Why Now Is The 'Perfect Time' To Lean Into Artificial Intelligence | IBD
-
dfdx v0.9.0 - nightly convs & transformers, broadcasting/reducing/selecting from any axis, and more!
In my opinion, one of the most promising paths to replicating the Python/C++/CUDA deep learning ecosystem is the Triton compiler, which makes writing efficient kernels much simpler than CUDA does and which can be embedded in other languages. It currently only supports Python, but at one point there was some activity around integrating Triton with Rust.
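Part of why Triton kernels are simpler than CUDA is its blocked programming model: each kernel instance handles one block of elements, and a mask guards the ragged tail, so there is no per-thread index arithmetic. A minimal plain-Python sketch of that execution model (this is not the real Triton API; `add_kernel`, `launch`, and `BLOCK_SIZE` are illustrative names, and a real kernel would use `@triton.jit` with masked `tl.load`/`tl.store`):

```python
BLOCK_SIZE = 4  # elements handled by one "program instance"

def add_kernel(x, y, out, pid, n):
    # Each program instance processes one contiguous block.
    start = pid * BLOCK_SIZE
    for i in range(start, start + BLOCK_SIZE):
        if i < n:  # mask: skip out-of-bounds lanes in the last block
            out[i] = x[i] + y[i]

def launch(x, y):
    n = len(x)
    out = [0.0] * n
    # Grid size: enough program instances to cover all n elements.
    grid = (n + BLOCK_SIZE - 1) // BLOCK_SIZE
    for pid in range(grid):
        add_kernel(x, y, out, pid, n)
    return out
```

In real Triton the loop over `pid` is what the GPU parallelizes, and the per-block loop becomes a vectorized block operation; the kernel author only writes the block-level logic plus the mask.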
cuda-python
What are some alternatives?
Halide - a language for fast, portable data-parallel computation
GPU-Puzzles - Solve puzzles. Learn CUDA.
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
cutlass - CUDA Templates for Linear Algebra Subroutines
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
maxas - Assembler for NVIDIA Maxwell architecture
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
gptq - Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
julia - The Julia Programming Language
flexible-vectors - Vector operations for WebAssembly
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
ollama - Get up and running with Llama 2, Mistral, Gemma, and other large language models.