Compute-shader-101 Alternatives
Similar projects and alternatives to compute-shader-101
- rust-gpu: 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
- raylib: A simple and easy-to-use library to enjoy videogames programming
- Vulkan-Guide: One stop shop for getting started with the Vulkan API
compute-shader-101 reviews and mentions
- wgpu-rs resources for computing purposes only
You might find compute shader 101 useful.
- Vulkan terms vs. Direct3D 12 (aka DirectX 12) terms
- WGPU setup and compute shader feedback - and Tutorial.
Compute Shader 101 - GitHub, Video, Slideshow. Additional resources are at the end of the slideshow.
- Compute Shaders and Rust - looking for some guidance.
Yes, compute-shader-101 is sample code + video + slides.
- Prefix sum on portable compute shaders
Workgroup in Vulkan/WebGPU lingo is equivalent to "thread block" in CUDA speak; see [1] for a decoder ring.
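To make the mapping concrete, here is a minimal WGSL sketch (illustrative only, not taken from the linked repo; the buffer name and entry point are placeholders) with the CUDA-side names noted in comments:

```wgsl
// Rough terminology mapping:
//   workgroup              ~ thread block
//   @workgroup_size        ~ blockDim
//   workgroup_id           ~ blockIdx
//   local_invocation_id    ~ threadIdx
//   global_invocation_id   ~ blockIdx * blockDim + threadIdx

@group(0) @binding(0) var<storage, read_write> data: array<u32>;

@compute @workgroup_size(256)
fn main(
    @builtin(workgroup_id)         wg_id: vec3<u32>,    // which "block"
    @builtin(local_invocation_id)  local_id: vec3<u32>, // lane within the workgroup
    @builtin(global_invocation_id) global_id: vec3<u32> // index across the whole dispatch
) {
    // One invocation ("thread") handles one element; guard against overshoot.
    if (global_id.x < arrayLength(&data)) {
        data[global_id.x] = data[global_id.x] * 2u;
    }
}
```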
> Using atomics to solve this is rarely a good idea; atomics will make things go slowly, and there is often a way to restructure the problem so that you can let threads read data from a previous dispatch, and break your pipeline into more dispatches if necessary.
This depends on the exact workload, but I disagree. A multiple dispatch solution to prefix sum requires reading the input at least twice, while decoupled look-back is single pass. That's a 1.5x difference if you're memory saturated, which is a good assumption here.
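(Back-of-envelope, assuming a classic reduce-then-scan split and counting only main-array traffic: the multi-dispatch version reads the N inputs once to compute per-workgroup partials, then reads them again to produce the N outputs, roughly 2N reads + N writes; decoupled look-back touches each element once each way, roughly N reads + N writes, hence the ~1.5x.)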
The Nanite talk (which I linked) showed a very similar result, for very similar reasons. They have a multi-dispatch approach to their adaptive LOD resolver, and it's about 25% slower than the one that uses atomics to manage the job queue.
Thus, I think we can solidly conclude that atomics are an essential part of the toolkit for GPU compute.
You do make an important distinction between runtime and development environment, and I should fix that, but there's still a point to be made. Most people doing machine learning work need a dev environment (or use Colab), even if they're theoretically just consuming GPU code that other people wrote. And if you do distribute a CUDA binary, it only runs on Nvidia. By contrast, my stuff is a 20-second "cargo build" and you can write your own GPU code with very minimal additional setup.
[1]: https://github.com/googlefonts/compute-shader-101/blob/main/...
- Compute shaders - where to learn more outside of Unity
googlefonts/compute-shader-101: Sample code for compute shader 101 training (github.com)
- Vulkan Memory Allocator
I agree strongly with you about the need for good resources. Here are a few I've found that are useful.
* A trip through the Graphics Pipeline[1] is slightly dated (10 years old) but still very relevant.
* If you're interested in compute shaders specifically, I've put together "compute shader 101"
* Alyssa Rosenzweig's posts[3] on reverse engineering GPUs cast a lot of light on how they work at a low level. It helps to have a big-picture understanding first.
I think there is demand for a good book on this topic.
[1]: https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...
- Compute shader 101 (video and slides)
This is a talk I've been working on for a while. It starts off motivating why you might want to write compute shaders (tl;dr: you can exploit the impressive compute power of GPUs, and do it portably), then explains the basics of how, including some sample code to help get people started.
Slides: https://docs.google.com/presentation/d/1dVSXORW6JurLUcx5UhE1...
Sample code: https://github.com/googlefonts/compute-shader-101
Feedback is welcome (please file issues against the open source repo), and AMA in this thread.
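For a taste of the "basics of how" described above (a generic illustration, not the talk's actual sample code; the buffer names are placeholders), the core moves in most compute kernels are mapping invocations onto data, staging values in workgroup-shared memory, and synchronizing with barriers. A per-workgroup sum reduction shows all three, assuming the input length is a multiple of the workgroup size:

```wgsl
// Each workgroup of 256 invocations reduces 256 input elements to one partial sum.
var<workgroup> scratch: array<u32, 256>;

@group(0) @binding(0) var<storage, read> src: array<u32>;
@group(0) @binding(1) var<storage, read_write> partials: array<u32>;

@compute @workgroup_size(256)
fn main(
    @builtin(workgroup_id) wg_id: vec3<u32>,
    @builtin(local_invocation_id) local_id: vec3<u32>
) {
    // Stage one element per invocation into shared memory.
    scratch[local_id.x] = src[wg_id.x * 256u + local_id.x];
    workgroupBarrier();

    // Tree reduction within the workgroup.
    for (var stride = 128u; stride > 0u; stride = stride / 2u) {
        if (local_id.x < stride) {
            scratch[local_id.x] = scratch[local_id.x] + scratch[local_id.x + stride];
        }
        workgroupBarrier();
    }

    // Invocation 0 writes this workgroup's partial sum.
    if (local_id.x == 0u) {
        partials[wg_id.x] = scratch[0u];
    }
}
```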
Stats
googlefonts/compute-shader-101 is an open source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of compute-shader-101 is Rust.