Halide vs. triton

| | Halide | triton |
|---|---|---|
| Mentions | 44 | 33 |
| Stars | 5,916 | 13,527 |
| Growth | 0.5% | 2.0% |
| Activity | 9.3 | 9.9 |
| Latest commit | 6 days ago | 5 days ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Halide
- Halide: A language for fast, portable computation on images and tensors
- Show HN: Flash Attention in ~100 lines of CUDA
If CPU/GPU execution speed is the goal while simultaneously code golfing the source size, https://halide-lang.org/ might have come in handy.
- Halide v17.0.0
- From slow to SIMD: A Go optimization story
This is a task where Halide https://halide-lang.org/ could really shine! It disconnects the logic from the scheduling (unrolling, vectorizing, tiling, caching intermediates, etc.), so every optimization step the author describes in the article is a tunable in Halide. Halide doesn't appear to have bindings for Go, so calling C++ from Go might be the only viable option.
- Implementing Mario's Stack Blur 15 times in C++ (with tests and benchmarks)
Probably would have been much easier to do 15 times in https://halide-lang.org/
The idea behind Halide is that scheduling memory access patterns is critical to performance, but access patterns are usually interwoven with the arithmetic of an algorithm, which makes them difficult to modify separately.
So in Halide you specify the arithmetic and the schedule separately, and you can rapidly iterate on either.
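A minimal sketch of that split, using Halide's official Python bindings (the toy gradient pipeline and the particular schedule below are illustrative choices, not anything from the thread):

```python
# Assumes the official Halide Python bindings: pip install halide
import halide as hl

x, y = hl.Var("x"), hl.Var("y")

# Algorithm: *what* is computed -- a toy gradient, chosen for illustration.
gradient = hl.Func("gradient")
gradient[x, y] = x + y

# Schedule: *how* it is computed -- tile, vectorize, parallelize.
# Editing these lines changes performance, never the computed values.
xo, yo, xi, yi = hl.Var("xo"), hl.Var("yo"), hl.Var("xi"), hl.Var("yi")
gradient.tile(x, y, xo, yo, xi, yi, 64, 64).vectorize(xi, 8).parallel(yo)

out = gradient.realize([512, 512])  # JIT-compile and run
```

Because the schedule is separate, trying a different tile size or vectorization width is a one-line change that cannot alter the output values.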
- Making Hard Things Easy
- Deepmind Alphadev: Faster sorting algorithms discovered using deep RL
It is not sorting per se that was improved here, but sorting (particularly of short sequences) on modern CPUs, where the real complexity lies in predicting what will run quickly on those CPUs.
Doing an empirical algorithm search to find which algorithms fit well on modern CPUs/memory systems is pretty common, see e.g. FFTW, ATLAS, https://halide-lang.org/
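As a toy sketch of what such an empirical search looks like (the candidate functions here are hypothetical stand-ins, purely for illustration): benchmark each candidate on the target machine and keep the fastest, which is essentially what FFTW's planner does at much larger scale.

```python
# Toy empirical algorithm search: time each candidate, keep the fastest.
import timeit

def sum_loop(xs):
    total = 0
    for v in xs:
        total += v
    return total

def sum_builtin(xs):
    return sum(xs)

candidates = {"python loop": sum_loop, "builtin sum": sum_builtin}
data = list(range(100_000))

timings = {name: timeit.timeit(lambda f=f: f(data), number=20)
           for name, f in candidates.items()}
best = min(timings, key=timings.get)
print(f"fastest on this machine: {best} ({timings[best]:.4f}s for 20 runs)")
```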
- Two-tier programming language
Halide https://halide-lang.org/
- Best book on writing an optimizing compiler (inlining, types, abstract interpretation)?
- Blog Post: Can You Trust a Compiler to Optimize Your Code?
It doesn’t apply in this case, but in general if you really want the best vectorization I would suggest using https://halide-lang.org instead of trying to coerce your compiler.
triton
- Triton Fork for Windows Support
Things might have changed since then, but back in 2021 I contributed a few Windows support-related changes as an independent contributor: https://github.com/triton-lang/triton/pulls?q=is%3Apr+is%3Ac...
- An Interview with AMD CEO Lisa Su About Solving Hard Problems
- OpenAI Triton: language and compiler for highly efficient Deep-Learning
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
There's a ton of cool opportunity in the runtime layer. I've been keeping my eye on the compiler-based approaches. From what I've gathered, many of the larger "production" inference tools use compilers (a minimal kernel sketch follows the link below):
- https://github.com/openai/triton
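To make the compiler angle concrete, here is a minimal Triton kernel sketch (an elementwise vector add in the style of the official Triton tutorials; the block size and names are illustrative assumptions, not from the comment):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)  # one program per block of 1024
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```

Triton compiles this Python-level tile program down to efficient GPU code, which is the appeal for inference runtimes that don't want to hand-write CUDA.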
- Core Functionality for AMD #1983
- Project name easily confused with Nvidia triton
- Nvidia's CUDA Monopoly
Does anyone have more inside knowledge from OpenAI or AMD on AMDGPU support for Triton?
I see this:
https://github.com/openai/triton/issues/1073
But it's not clear to me whether we will see AMD GPUs as first-class citizens for PyTorch in the future.
- @soumithchintala (Cofounded and lead @PyTorch at Meta) on Twitter: I'm fairly puzzled by $NVDA skyrocketing... (cont.)
- The tiny corp raised $5.1M
I thought this was a good overview of the idea that Triton can circumvent the CUDA moat: https://www.semianalysis.com/p/nvidiaopenaitritonpytorch
It also looks like they added an MLIR backend to Triton, though I wonder if Mojo has advantages since it was built on MLIR from the start: https://github.com/openai/triton/pull/1004
- Anyone hosting a local LLM server
I'm pretty happy with the setup, because it allows me to keep all the AI stuff and its dozens of conda envs and repos etc. separate from my normal setup and "portable". It may have some performance impact (although I don't personally notice any significant difference compared to running it "natively" on Windows), and it may enable some extra functionality, such as access to OpenAI's Triton etc., but that's currently neither here nor there.
What are some alternatives?
taichi - Productive, portable, and performant GPU programming in Python.
cuda-python - CUDA Python Low-level Bindings
futhark - :boom::computer::boom: A data-parallel functional programming language
cutlass - CUDA Templates for Linear Algebra Subroutines
Image-Convolutaion-OpenCL
GPU-Puzzles - Solve puzzles. Learn CUDA.
ponyc - Pony is an open-source, actor-model, capabilities-secure, high performance programming language
web-llm - High-performance In-browser LLM Inference Engine
TensorOperations.jl - Julia package for tensor contractions and related operations
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
qoi - The “Quite OK Image Format” for fast, lossless image compression
maxas - Assembler for NVIDIA Maxwell architecture