Nice and simple. It's quite portable too. But simplicity and ease of use come with some limitations. Ash is much more complex but can extract every bit of power from your card if needed. The wgpu-rs GitHub repository comes with many examples, and you can find a really nice tutorial here.
-
The book guide from rust-gpu was important to bootstrap my experience with rust-gpu, but as you progress you'll soon realise that it is very, very limited. The source code repository of EmbarkStudios/rust-gpu is full of information, specifically this more or less up-to-date support page.
-
wgpu also supports compute shaders, though I haven't used them yet.
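For reference, here is what a minimal compute dispatch with wgpu looks like. This is only a sketch, assuming wgpu ~0.14 together with the `pollster` and `bytemuck` crates; the doubling shader, buffer sizes, and names are illustrative and not taken from this thread, and field/method names shift between wgpu releases.

```rust
use wgpu::util::DeviceExt;

async fn run() {
    // Acquire an adapter and a device/queue pair.
    let instance = wgpu::Instance::new(wgpu::Backends::all());
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no suitable GPU adapter");
    let (device, queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::default(),
            },
            None,
        )
        .await
        .expect("failed to create device");

    // A WGSL compute shader that doubles every element of a storage buffer.
    let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("double"),
        source: wgpu::ShaderSource::Wgsl(
            r#"
            @group(0) @binding(0) var<storage, read_write> data: array<f32>;

            @compute @workgroup_size(64)
            fn main(@builtin(global_invocation_id) id: vec3<u32>) {
                data[id.x] = data[id.x] * 2.0;
            }
            "#
            .into(),
        ),
    });

    // Input lives in a STORAGE buffer; a second buffer is used to read results back.
    let input: Vec<f32> = (0..64).map(|i| i as f32).collect();
    let byte_len = (input.len() * std::mem::size_of::<f32>()) as u64;
    let storage = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: Some("storage"),
        contents: bytemuck::cast_slice(&input),
        usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
    });
    let readback = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("readback"),
        size: byte_len,
        usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });

    // Pipeline and bind group; the bind group layout is inferred from the shader.
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: Some("double-pipeline"),
        layout: None,
        module: &shader,
        entry_point: "main",
    });
    let layout = pipeline.get_bind_group_layout(0);
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &layout,
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: storage.as_entire_binding(),
        }],
    });

    // Record the dispatch and a copy into the readback buffer, then submit.
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });
    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor { label: None });
        pass.set_pipeline(&pipeline);
        pass.set_bind_group(0, &bind_group, &[]);
        pass.dispatch_workgroups(1, 1, 1); // 64 elements, workgroup_size(64)
    }
    encoder.copy_buffer_to_buffer(&storage, 0, &readback, 0, byte_len);
    queue.submit(Some(encoder.finish()));

    // Map the readback buffer on the CPU and print the doubled values.
    let slice = readback.slice(..);
    slice.map_async(wgpu::MapMode::Read, |r| r.expect("buffer map failed"));
    device.poll(wgpu::Maintain::Wait);
    let data = slice.get_mapped_range();
    let doubled: Vec<f32> = bytemuck::cast_slice(&data).to_vec();
    println!("{:?}", doubled);
}

fn main() {
    pollster::block_on(run());
}
```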
-
Albeit on a rather small-scale case, I have been pretty happy interfacing Rust with Futhark kernels through the C ABI. I don't know how far you can go with that, but it worked nicely for me.
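To make the shape of that concrete, here is a minimal sketch of calling a Futhark-generated C library from Rust. It is only a sketch under assumptions: a Futhark program whose entry point `main` computes an `f32` dot product, compiled with something like `futhark opencl --library dotprod.fut` and linked via a build script; the generated function signatures depend on your program, so treat these declarations as illustrative.

```rust
#![allow(non_camel_case_types)]

use std::os::raw::c_int;

// Opaque handles declared in the generated header (dotprod.h).
#[repr(C)]
struct futhark_context_config {
    _private: [u8; 0],
}
#[repr(C)]
struct futhark_context {
    _private: [u8; 0],
}
#[repr(C)]
struct futhark_f32_1d {
    _private: [u8; 0],
}

extern "C" {
    fn futhark_context_config_new() -> *mut futhark_context_config;
    fn futhark_context_new(cfg: *mut futhark_context_config) -> *mut futhark_context;
    fn futhark_new_f32_1d(
        ctx: *mut futhark_context,
        data: *const f32,
        dim0: i64,
    ) -> *mut futhark_f32_1d;
    // Entry points take output pointers first, then inputs.
    fn futhark_entry_main(
        ctx: *mut futhark_context,
        out: *mut f32,
        xs: *const futhark_f32_1d,
        ys: *const futhark_f32_1d,
    ) -> c_int;
    fn futhark_context_sync(ctx: *mut futhark_context) -> c_int;
    fn futhark_free_f32_1d(ctx: *mut futhark_context, arr: *mut futhark_f32_1d) -> c_int;
    fn futhark_context_free(ctx: *mut futhark_context);
    fn futhark_context_config_free(cfg: *mut futhark_context_config);
}

fn main() {
    let xs = [1.0f32, 2.0, 3.0];
    let ys = [4.0f32, 5.0, 6.0];
    unsafe {
        let cfg = futhark_context_config_new();
        let ctx = futhark_context_new(cfg);

        // Copy host data into Futhark-managed (device) arrays.
        let xs_d = futhark_new_f32_1d(ctx, xs.as_ptr(), xs.len() as i64);
        let ys_d = futhark_new_f32_1d(ctx, ys.as_ptr(), ys.len() as i64);

        // Run the entry point and wait for the GPU to finish.
        let mut out = 0.0f32;
        futhark_entry_main(ctx, &mut out, xs_d, ys_d);
        futhark_context_sync(ctx);
        println!("dot product = {}", out);

        // Free device arrays and tear down the context.
        futhark_free_f32_1d(ctx, xs_d);
        futhark_free_f32_1d(ctx, ys_d);
        futhark_context_free(ctx);
        futhark_context_config_free(cfg);
    }
}
```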
-
SPIRV-LLVM-Translator
A tool and a library for bi-directional translation between SPIR-V and LLVM IR
Of course, CUDA is a field-tested framework for getting the job done. But it is interesting to look at other promising technologies. Are there many differences between NVIDIA PTX and SPIR-V? Isn't it possible to compile (Rust, C++, CUDA, etc.) to LLVM IR, then use a tool like the LLVM-to-SPIR-V translator, and end up with GPU code that runs on any vendor's card?
-
kompute
General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
Other API things, like command buffer recording and memory buffer allocation/copying, could be abstracted away from Vulkan. There have been attempts by a few people, most notably Kompute, but they use GLSL.
-
Wgpu has performance close to Vulkan in some cases, but there are still no compute performance benchmarks. Just curious, why couldn't wgpu simply add an extension to use advanced atomics, multiple queues, or other high-performance features on hardware that supports them? One of the goals of both Vulkan and wgpu is to be highly customizable through extensions. edit: actually, there is already a wgpu extension proposal that supports [subgroup operations](https://github.com/gpuweb/gpuweb/pull/1459).
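For what it's worth, wgpu already has a mechanism for opting into optional hardware capabilities: `wgpu::Features`, requested at device creation. Below is a small sketch of that mechanism, assuming wgpu ~0.14 and using `TIMESTAMP_QUERY` purely as an example flag; whether subgroup operations are exposed this way depends on the wgpu version and the proposal linked above.

```rust
// Sketch: request an optional wgpu feature only if the adapter supports it
// (assuming wgpu ~0.14; TIMESTAMP_QUERY is just an example of an optional feature).
async fn device_with_optional_features(adapter: &wgpu::Adapter) -> (wgpu::Device, wgpu::Queue) {
    // Features actually supported by this adapter (hardware + driver + backend).
    let available = adapter.features();

    // Only ask for the optional feature when it is advertised.
    let wanted = wgpu::Features::TIMESTAMP_QUERY;
    let features = available & wanted;

    adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: Some("device with optional features"),
                features,
                limits: wgpu::Limits::default(),
            },
            None,
        )
        .await
        .expect("failed to create device")
}
```

At runtime you would then check `device.features()` before taking the code path that relies on the optional capability.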
-
rust-gpu-compute-example
Minimal example of using rust-gpu and wgpu to dispatch compute shaders written in rust.
Basic rust-gpu compute example: One very simple example to set up a compute shader with wgpu and rust-gpu.
-
DJMcNab compute shader examples: A great set of examples for rust-gpu and compute shaders.