cutlass vs iree
| | cutlass | iree |
|---|---|---|
| Mentions | 16 | 10 |
| Stars | 4,401 | 2,337 |
| Stars growth | 13.5% | 4.1% |
| Activity | 8.8 | 10.0 |
| Latest commit | 3 days ago | about 19 hours ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cutlass
- AI’s compute fragmentation: what matrix multiplication teaches us
> we used tensor cores and managed to get back fp32 accuracy with 3 rounds of the things
Hey, are you referring to 3xTF32 (https://github.com/NVIDIA/cutlass/tree/master/examples/28_am...)? IMO this is a perfect example where proper abstraction could save engineers a non-trivial amount of time - imagine a compiler stack that treats 3xTF32 as a normal dtype, with all subsequent analysis compatible with this special dtype :-)
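For readers who haven't seen the trick: 3xTF32 splits each fp32 operand into a TF32-representable "big" part plus a residual "small" part, and three TF32 products then recover most of the fp32 mantissa. Below is a minimal host-side C++ sketch of the idea; the `to_tf32` helper is mine (it emulates TF32's 10 explicit mantissa bits by masking), while the real CUTLASS example runs the three products on tensor cores with fp32 accumulation.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Round a float to TF32 precision by dropping the low 13 mantissa bits,
// keeping sign, exponent, and the top 10 mantissa bits. Hardware uses
// round-to-nearest; truncation is enough to illustrate the idea.
static float to_tf32(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));
    bits &= 0xFFFFE000u;
    float y;
    std::memcpy(&y, &bits, sizeof(y));
    return y;
}

int main() {
    float a = 1.2345678f, b = 7.6543210f;

    // Split each operand into a TF32 "big" part plus a residual "small" part.
    float a_big = to_tf32(a), a_small = to_tf32(a - a_big);
    float b_big = to_tf32(b), b_small = to_tf32(b - b_big);

    // One TF32 product: loses roughly 13 bits of mantissa.
    float one_pass = a_big * b_big;

    // Three TF32 products recover most of the fp32 mantissa; the
    // a_small*b_small term is below fp32 precision and is dropped.
    float three_pass = a_big * b_big + a_big * b_small + a_small * b_big;

    std::printf("fp32 reference: %.9g\n", a * b);
    std::printf("1xTF32:         %.9g\n", one_pass);
    std::printf("3xTF32:         %.9g\n", three_pass);
}
```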
- With LLVM and MLIR, is manual CUDA optimization still important?
- How to Optimize a CUDA Matmul Kernel for CuBLAS-Like Performance: A Worklog
This is a great post for people who are new to optimizing GPU code.
It is interesting to see that the author got this far without interchanging the innermost loop over k with the outermost loop, as is done in CUTLASS (https://github.com/NVIDIA/cutlass).
As you can see in this blog post, the code ends up with a lot of compile-time constants (e.g. BLOCKSIZE, BM, BN, BK, TM, TN). One way to optimize it further is to use an auto-tuner to find the optimal values of these parameters for your GPU and problem size, for example Kernel Tuner (https://github.com/KernelTuner/kernel_tuner) - see the sketch below.
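As a rough sketch of what that parameterization and search look like (the kernel and the brute-force loop below are illustrative, not taken from the post or from Kernel Tuner): making the tile size a template parameter turns each candidate into its own fully compiled kernel, and the host simply times each instantiation.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Shared-memory-tiled SGEMM with the tile size as a template parameter,
// so every candidate value is a separately compiled kernel. Assumes N is
// a multiple of BS, for brevity.
template <int BS>
__global__ void sgemm_tiled(int N, const float* A, const float* B, float* C) {
    __shared__ float As[BS][BS];
    __shared__ float Bs[BS][BS];
    const int row = blockIdx.y * BS + threadIdx.y;
    const int col = blockIdx.x * BS + threadIdx.x;
    float acc = 0.0f;
    for (int k0 = 0; k0 < N; k0 += BS) {
        As[threadIdx.y][threadIdx.x] = A[row * N + k0 + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(k0 + threadIdx.y) * N + col];
        __syncthreads();
        for (int k = 0; k < BS; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}

// Time one instantiation with CUDA events (warm-up runs omitted for brevity).
template <int BS>
float bench(int N, const float* A, const float* B, float* C) {
    dim3 block(BS, BS), grid(N / BS, N / BS);
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    sgemm_tiled<BS><<<grid, block>>>(N, A, B, C);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    return ms;
}

int main() {
    const int N = 1024;
    float *A, *B, *C;  // left uninitialized: we only care about timing here
    cudaMalloc(&A, N * N * sizeof(float));
    cudaMalloc(&B, N * N * sizeof(float));
    cudaMalloc(&C, N * N * sizeof(float));
    // Brute-force "auto-tuning" over a few hand-picked tile sizes.
    std::printf("BS= 8: %.3f ms\n", bench< 8>(N, A, B, C));
    std::printf("BS=16: %.3f ms\n", bench<16>(N, A, B, C));
    std::printf("BS=32: %.3f ms\n", bench<32>(N, A, B, C));
    cudaFree(A); cudaFree(B); cudaFree(C);
}
```

A tool like Kernel Tuner automates exactly this search, generating and benchmarking instantiations across the full parameter space instead of the three hand-picked values here.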
- pytorch example to actually see anything near 83 TFLOP/s on a RTX 4090?
Some examples here have a benchmark: https://github.com/NVIDIA/cutlass/blob/master/examples/24_gemm_grouped/gemm_grouped.cu
- [D] What are some good resources to learn CUDA programming?
If you already know some C++, the Nvidia devblog is a great resource. Going further, CUB and CUTLASS provide examples of efficient implementations for key operations at all hardware levels. Finally, this is more anecdotal, but I always start my lectures on CUDA programming with the pictures in this doc page, to provide some intuition on the different memory layers that you can leverage to speed up a program. In any case, good luck :-)
- PyTorch on Apple M1 Faster Than TensorFlow-Metal
So with tensor cores you use TF32, which is really more like FP19, and the marketing makes you think you get 8x the performance. But if you want actual FP32 precision you will need something like [1], and then the tensor-core path is _only_ 2x faster than the SIMT path.
I'll leave the prefix sum for other devs who know more :D
https://github.com/NVIDIA/cutlass/blob/master/examples/27_am...
// part of the nod.ai/SHARK team
- 100% Accurate Binary Neural Networks
iree
- Calyx, a Compiler Infrastructure for Accelerator Generators
How is this different from the MLIR infrastructure of LLVM and XLA implemented in https://iree.dev/?
- Running pre-trained ML models in Godot
So I have been developing this GDExtension called iree.gd. Its mission is to embed IREE, another cool project that compiles and runs ML models, into Godot. It took me quite a while, but it has finally reached alpha. I hope you'll check out the sample.
- Nvidia H200 Tensor Core GPU
I am going to paste a cousin comment:
StableHLO[1] is an interesting project that might help AMD here:
> Our goal is to simplify and accelerate ML development by creating more interoperability between various ML frameworks (such as TensorFlow, JAX and PyTorch) and ML compilers (such as XLA and IREE).
From there, their goal would most likely be to work with the XLA/OpenXLA teams on XLA[3] and IREE[2] to make ROCm a better backend.
[1] https://github.com/openxla/stablehlo
- Nvidia reveals new A.I. chip, says costs of running LLMs will drop significantly
I want to promote the fact that the Google project https://github.com/openxla/iree exists: IREE acts as a way to turn TensorFlow, PyTorch, and MLIR workflows into compute on CPU, Vulkan compute, CUDA, ROCm, Metal, and others.
https://github.com/RechieKho/IREE.gd -- RechieKho and I collaborate on making this work for Godot Engine, but IREE.gd is at a proof of concept stage.
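To make the workflow described in the comment above concrete, here is a minimal command-line sketch. The file and function names (`model.mlir`, `predict`) are placeholders, and flag spellings have shifted across IREE releases, so treat this as illustrative rather than authoritative:

```sh
# Compile an imported MLIR model to an IREE module for the CPU backend;
# swapping the target (e.g. vulkan-spirv, cuda, rocm, metal) retargets
# the same input without touching the model.
iree-compile --iree-hal-target-backends=llvm-cpu model.mlir -o model.vmfb

# Invoke an exported function from the compiled module with a sample input.
iree-run-module --module=model.vmfb --function=predict --input="1x4xf32=0,1,2,3"
```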
- VkFFT: Vulkan/CUDA/Hip/OpenCL/Level Zero/Metal Fast Fourier Transform Library
To a first approximation, Kompute[1] is that. It doesn't seem to be catching on, though; I'm seeing more buzz around WebGPU solutions, including wonnx[2] and more hand-rolled approaches, and around IREE[3], the latter of which has a Vulkan back-end.
[1]: https://kompute.cc/
- Requiem for Piet-GPU-Hal
In the ML section you mentioned Kompute and MediaPipe. Have you seen IREE? It has a Vulkan-like compute-only HAL. https://github.com/iree-org/iree
- PyTorch on Apple M1 Faster Than TensorFlow-Metal
Exactly the kind of things we've been talking about! A fun and challenging tradeoff space and it's always great to connect with others!
Ahh linebender - I hadn't connected the name with your github account - piet-gpu is great, as is your blog! Also, for anyone skimming the comments this talk is fantastic and I share it with anyone new to the GPGPU space: https://www.youtube.com/watch?v=DZRn_jNZjbw
We waffled a bit with the API granularity in the beginning, and it's taken building out most of the rest of the project to nail it down (the big refactor is still pending). The biggest issue is that in simple models we'll end up emitting a single command buffer, but anything with control flow (that we can't predicate), data dependencies (sparsity, thresholding, etc.), or CPU work in the middle (IO, custom user code, etc.) can break that up. We also hit cases where we need to flush work - such as when we run out of usable memory and need to defragment or resize our pools.
We want to be able to (but aren't yet) reuse command buffers (CUDA graphs, etc.), and that requires being able to both cache them and recreate them on demand (if we resize a pool we have to invalidate all cached command buffers using those resources, as update-after-bind is not universally available, and if shapes change there are big ripples). Since most models beyond simple vision ones are ~thousands of dispatches, it also lets us better integrate into multithreaded applications like you mention, as apps can record commands for themselves in parallel without synchronization.
It still would be nice to have certain operations inlined, though, and for that we want to allow custom hooks that we call into to add commands to the command buffers, turning things inside-out to make small amounts of work like image transformations in-between model layers possible (I'm really hoping we can avoid modeling the entire graphics pipeline in the compiler, and this would be a way around that :). We haven't yet started on scheduling across queues, but that's also very interesting, especially in multi-GPU cases (with x4/x8 GPUs being common in datacenters, or NUMA CPU clusters that can be scheduled similarly).
We're fully open source (https://github.com/google/iree) but have been operating quietly while we get the groundwork in place - it's taken some time but now we're finally starting to stumble into success on certain problem categories (like transformers as in the post). Right now it's mostly just organized as a systems/compiler nerd honeypot for people looking for an ML/number crunching framework that (purposefully) doesn't look like any of the existing ones :)
Would love to chat more - even if just to commiserate over GPU APIs and such. Everyone is welcome on the Discord where a bunch of us nerds have gathered, or we could grab virtual coffee (realized just now that this HN acct is ancient - I'm [email protected] :)
- WONNX: Deep Learning on WebGPU using the ONNX format
If you're interested in really pushing yourself, perhaps you can look at https://github.com/google/iree?
- GPU computing on Apple Silicon
This doesn't answer your question, but it would be cool if we had something based on MLIR for GPU compute, e.g. ONNX-MLIR, PlaidML, or IREE. From what I've read, it closes the gap between NVIDIA and other GPU vendors a lot more than pure compute shaders do.
What are some alternatives?
onnx-mlir - Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure
torch-mlir - The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem.
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
onnx - Open standard for machine learning interoperability
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
plaidml - PlaidML is a framework for making deep learning work everywhere.
Emu - The write-once-run-anywhere GPGPU library for Rust
rust-objc - Objective-C Runtime bindings and wrapper for Rust.
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
GPU-Puzzles - Solve puzzles. Learn CUDA.
triton - Development repository for the Triton language and compiler
shark-samples