compute-shader-101 vs rust-gpu

| | compute-shader-101 | rust-gpu |
|---|---|---|
| Mentions | 8 | 82 |
| Stars | 489 | 6,952 |
| Growth | 2.7% | 0.8% |
| Activity | 0.0 | 7.7 |
| Latest commit | 3 months ago | 14 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
compute-shader-101
-
wgpu-rs resources for computing purposes only
You might find compute shader 101 useful.
- Vulkan terms vs. Direct3D 12 (aka DirectX 12) terms
-
WGPU setup and compute shader feedback - and Tutorial.
Compute Shader 101 - GitHub, video, slideshow; additional resources are at the end of the slideshow.
-
Compute Shaders and Rust - looking for some guidance.
Yes, compute-shader-101 is sample code + video + slides.
-
Prefix sum on portable compute shaders
Workgroup in Vulkan/WebGPU lingo is equivalent to "thread block" in CUDA speak; see [1] for a decoder ring.
> Using atomics to solve this is rarely a good idea, atomics will make things go slowly, and there is often a way to restructure the problem so that you can let threads read data from a previous dispatch, and break your pipeline into more dispatches if necessary.
This depends on the exact workload, but I disagree. A multiple dispatch solution to prefix sum requires reading the input at least twice, while decoupled look-back is single pass. That's a 1.5x difference if you're memory saturated, which is a good assumption here.
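The 1.5x figure can be sanity-checked with a back-of-the-envelope memory-traffic count (a sketch assuming a bandwidth-bound kernel where each pass reads and writes every element once; the small partial-sum traffic of the two-pass version is ignored):

```rust
// Rough memory traffic for prefix sum over n elements, assuming the
// kernel is bandwidth-saturated so runtime scales with bytes moved.
fn main() {
    let n = 1_000_000f64;

    // Decoupled look-back: a single pass that reads the input once
    // and writes the output once.
    let single_pass = n /* reads */ + n /* writes */;

    // Multi-dispatch (reduce-then-scan): the input is read in the first
    // dispatch and again in the second, plus one write of the output.
    let multi_dispatch = 2.0 * n /* reads */ + n /* writes */;

    // 3n / 2n = 1.5, matching the claimed difference.
    println!("traffic ratio: {}", multi_dispatch / single_pass);
}
```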
The Nanite talk (which I linked) showed a very similar result, for very similar reasons. They have a multi-dispatch approach to their adaptive LOD resolver, and it's about 25% slower than the one that uses atomics to manage the job queue.
Thus, I think we can solidly conclude that atomics are an essential part of the toolkit for GPU compute.
You do make an important distinction between runtime and development environment, and I should fix that, but there's still a point to be made. Most people doing machine learning work need a dev environment (or use Colab), even if they're theoretically just consuming GPU code that other people wrote. And if you do distribute a CUDA binary, it only runs on Nvidia. By contrast, my stuff is a 20-second "cargo build" and you can write your own GPU code with very minimal additional setup.
[1]: https://github.com/googlefonts/compute-shader-101/blob/main/...
-
Compute shaders - where to learn more outside of unity
googlefonts/compute-shader-101: Sample code for compute shader 101 training (github.com)
-
Vulkan Memory Allocator
I agree strongly with you about the need for good resources. Here are a few I've found that are useful.
* A trip through the Graphics Pipeline[1] is slightly dated (10 years old) but still very relevant.
* If you're interested in compute shaders specifically, I've put together "compute shader 101"[2].
* Alyssa Rosenzweig's posts[3] on reverse engineering GPUs cast a lot of light on how they work at a low level. It helps to have a big-picture understanding first.
I think there is demand for a good book on this topic.
[1]: https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...
[2]: https://github.com/googlefonts/compute-shader-101
[3]: https://rosenzweig.io/
-
Compute shader 101 (video and slides)
This is a talk I've been working on for a while. It starts off motivating why you might want to write compute shaders (tl;dr you can exploit the impressive compute power of GPUs but portably), then explains the basics of how, including some sample code to help get people started.
Slides: https://docs.google.com/presentation/d/1dVSXORW6JurLUcx5UhE1...
Sample code: https://github.com/googlefonts/compute-shader-101
Feedback is welcome (please file issues against the open source repo), and AMA in this thread.
rust-gpu
-
Vcc – The Vulkan Clang Compiler
Sounds cool, but this requires yet another language to learn[0]. As someone with only limited knowledge in this space, could someone tell me how its compute functionality compares to that of rust-gpu[1], where I can just write Rust?
[0] https://github.com/Hugobros3/shady#language-syntax
[1] https://github.com/EmbarkStudios/rust-gpu
-
Candle: Torch Replacement in Rust
I don't do anything related to data science, but I feel like doing it in Rust would be nice.
You get operator overloading, so you can have ergonomic matrix operations that are typed also. Processing data on the CPU is fast, and crates like https://github.com/EmbarkStudios/rust-gpu make it very ergonomic to leverage the GPU.
I like this library for creating typed coordinate spaces for graphics programming (https://github.com/servo/euclid); I imagine something similar could be done to create refined types for matrices so you can't multiply matrices of invalid sizes.
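The "refined types for matrices" idea can be sketched with const generics: dimensions live in the type, so a mismatched multiplication is a compile error rather than a runtime bug. This is a minimal illustrative sketch (the `Matrix` type and its methods are hypothetical, not from euclid or any specific crate):

```rust
// A matrix whose row/column counts are part of its type.
#[derive(Clone, Debug, PartialEq)]
struct Matrix<const R: usize, const C: usize> {
    data: [[f32; C]; R],
}

impl<const R: usize, const C: usize> Matrix<R, C> {
    fn zeros() -> Self {
        Matrix { data: [[0.0; C]; R] }
    }
}

// Multiplication is only defined when the inner dimensions agree:
// (R x K) * (K x C) -> (R x C). Anything else fails to type-check.
impl<const R: usize, const K: usize, const C: usize>
    std::ops::Mul<Matrix<K, C>> for Matrix<R, K>
{
    type Output = Matrix<R, C>;
    fn mul(self, rhs: Matrix<K, C>) -> Matrix<R, C> {
        let mut out = Matrix::<R, C>::zeros();
        for i in 0..R {
            for j in 0..C {
                for k in 0..K {
                    out.data[i][j] += self.data[i][k] * rhs.data[k][j];
                }
            }
        }
        out
    }
}

fn main() {
    let a = Matrix::<2, 3> { data: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] };
    let b = Matrix::<3, 2> { data: [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]] };
    let c = a * b; // inferred as Matrix<2, 2>
    assert_eq!(c.data, [[58.0, 64.0], [139.0, 154.0]]);
    // `a * a` (2x3 times 2x3) would be rejected at compile time.
}
```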
-
What's the coolest Rust project you've seen that made you go, 'Wow, I didn't know Rust could do that!'?
Do you mean rust-gpu?
-
How a Nerdsnipe Led to a Fast Implementation of Game of Life
And https://github.com/EmbarkStudios/rust-gpu/tree/main/examples with the wgpu runner (here it runs the compute shader)
-
What is Rust's potential in game development?
I don't know how major they are considered, but Embark Studios is doing quite a bit of Rust in the open source space, most notably (IMO) rust-gpu and kajiya
-
[rust-gpu] How do I run/build my own shaders locally?
The examples in the rust-gpu repository are a good place to start
-
Posh: Type-Safe Graphics Programming in Rust
There's another project that's similar that's being used by an actual game company: https://github.com/EmbarkStudios/rust-gpu
They see specific advantages here that would outweigh that negative. It's not my space (I play games, but know next to nothing about graphics programming), but there's at least one argument in the other direction.
-
Introducing posh: Type-Safe Graphics Programming in Rust
Could this approach work for compute shaders (GPGPU) as well? So far, I think https://github.com/EmbarkStudios/rust-gpu is the state of the art in that area, but it adds a specific Rust compiler backend for generating SPIR-V rather than leaving that up to the driver. That seems more complicated than it needs to be... but maybe it has advantages too? Thoughts?
-
Looking for high level GPU computing crate
https://github.com/embarkstudios/rust-gpu allows you to create shaders (kernels) in Rust.
-
With what languages are video games like League of Legends (most likely) programmed?
Also, Embark Studios (former DICE people) is doing a lot of work with Rust, all open source, like Rust GPU https://github.com/EmbarkStudios/rust-gpu
What are some alternatives?
raylib - A simple and easy-to-use library to enjoy videogames programming
llama.cpp - LLM inference in C/C++
emscripten - Emscripten: An LLVM-to-WebAssembly Compiler
wgpu - Cross-platform, safe, pure-rust graphics api.
strange-attractors
Rust-CUDA - Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.
vello - An experimental GPU compute-centric 2D renderer.
onnxruntime-rs - Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
gpgpu-rs - Simple experimental async GPGPU framework for Rust
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
Vulkan-Guide - One stop shop for getting started with the Vulkan API
DiligentEngine - A modern cross-platform low-level graphics library and rendering framework