rust-gpu
gpuweb
|  | rust-gpu | gpuweb |
|---|---|---|
| Mentions | 82 | 56 |
| Stars | 6,876 | 4,533 |
| Growth | 2.7% | 2.1% |
| Activity | 8.2 | 9.0 |
| Latest commit | 8 days ago | 8 days ago |
| Language | Rust | Bikeshed |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rust-gpu
-
Vcc – The Vulkan Clang Compiler
Sounds cool, but this requires yet another language to learn[0]. As someone with only limited knowledge in this space, could someone tell me how the compute functionality of rust-gpu[1], where I can just write Rust, compares?
-
Candle: Torch Replacement in Rust
I don't do anything related to data science, but I feel like doing it in Rust would be nice.
You get operator overloading, so you can have ergonomic matrix operations that are typed also. Processing data on the CPU is fast, and crates like https://github.com/EmbarkStudios/rust-gpu make it very ergonomic to leverage the GPU.
I like this library for creating typed coordinate spaces for graphics programming (https://github.com/servo/euclid). I imagine something similar could be done to create refined types for matrices, so you can't multiply matrices of invalid sizes.
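As a sketch of that idea (this is not the euclid API, just a minimal illustration using const generics): matrix dimensions can be carried in the type, so multiplying matrices of incompatible sizes fails to compile rather than panicking at runtime, and operator overloading keeps it ergonomic.

```rust
use std::ops::Mul;

// Dimensions live in the type: a Matrix<2, 3> is statically 2x3.
#[derive(Clone, Debug, PartialEq)]
struct Matrix<const R: usize, const C: usize> {
    data: [[f32; C]; R],
}

// Multiplication is only defined when the inner dimensions agree:
// (R x K) * (K x C) -> (R x C). A mismatched pair is a type error.
impl<const R: usize, const K: usize, const C: usize> Mul<Matrix<K, C>> for Matrix<R, K> {
    type Output = Matrix<R, C>;
    fn mul(self, rhs: Matrix<K, C>) -> Matrix<R, C> {
        let mut out = [[0.0; C]; R];
        for r in 0..R {
            for c in 0..C {
                for k in 0..K {
                    out[r][c] += self.data[r][k] * rhs.data[k][c];
                }
            }
        }
        Matrix { data: out }
    }
}

fn main() {
    let a = Matrix::<2, 3> { data: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] };
    let b = Matrix::<3, 2> { data: [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]] };
    let c = a * b; // compiles: 2x3 * 3x2 -> 2x2
    println!("{:?}", c.data);
    // A 2x3 times a 2x2 would be rejected at compile time, not at runtime.
}
```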
-
What's the coolest Rust project you've seen that made you go, 'Wow, I didn't know Rust could do that!'?
Do you mean rust-gpu?
-
How a Nerdsnipe Led to a Fast Implementation of Game of Life
And https://github.com/EmbarkStudios/rust-gpu/tree/main/examples with the wgpu runner (here it runs the compute shader)
-
What is Rust's potential in game development?
I don't know how major they are considered, but Embark Studios is doing quite a bit of Rust in the open source space, most notably (IMO) rust-gpu and kajiya
-
[rust-gpu] How do I run/build my own shaders locally?
Long story short, I watched this video about shader programs, and got inspired to try writing shader programs in Rust. I came across rust-gpu which seems to fit the bill. My problem is that I don't really understand its documentation for this particular part.
The examples in the rust-gpu repository are a good place to start
-
Introducing posh: Type-Safe Graphics Programming in Rust
Could this approach work for compute shaders (GPGPU) as well? So far, I think https://github.com/EmbarkStudios/rust-gpu is the state of the art in that area, but it adds a specific Rust compiler backend for generating SPIR-V rather than leaving that up to the driver. That seems more complicated than it needs to be... but maybe it has advantages too? Thoughts?
-
Looking for high level GPU computing crate
https://github.com/embarkstudios/rust-gpu Allows you to create shaders (kernels) in Rust.
-
What's everyone working on this week (19/2023)?
My focus this week has been on planning and assisting with architecture changes that help us fully integrate the node graph compositing engine we built over the past year. Upcoming tech we're working towards includes full utilization of the node graph to power all our layers; adopting WebGPU (now out in Chrome) into our editor using WGPU and rust-gpu; setting up a Tauri build and deployment; and building a backend web service to handle user accounts for document storage, hosting rustc to compile custom graphics effect nodes written by users in Rust, and Stable Diffusion image generation on AWS EC2 GPU instances.
gpuweb
-
WebGPU now available for testing in Safari Technology Preview
People keep spreading this incredibly misleading statement, and yours is even more misleading (suggesting Apple opposed a 'GPU WASM')
By all accounts, Apple's /only/ stance was that if WebGPU used SPIR-V it would be a non-starter for them, due to ongoing legal issues between Apple and Khronos.
Apple actually proposed WebHLSL in collaboration with Microsoft, to have HLSL be the standard.
Mozilla employee's stance[0] was that SPIRV was too low level, did not fit with the goals of WebGPU portability and security, and expressed concern that Khronos may add functionality to SPIRV they cannot support in WebGPU like raytracing instructions .. 'So we'd always be on the verge of forking SPIR-V in some way.'
It was also noted by many people that even if a bytecode format was used, it would still have to be translated to the target (HLSL/DXIL, MSL, etc.) in almost the same way a text format would.
Nobody proposed a 'GPU WASM equivalent' or an alternative bytecode format.
The hard truth is that shader compilation is a fucking nightmare, people do not realize how bad it is across the different native APIs. SPIR-V is good, but it doesn't solve that - and presents other challenges if you are a web browser API. Vulkan and SPIRV are not the golden goose many make them out to be.
[0] https://github.com/gpuweb/gpuweb/issues/847#issuecomment-642...
-
Show HN: WebGPU Particles Simulation
Yes it is still a bit new. WebGPU is not finished and is still being worked on: https://webgpu.io/
-
Capturing the WebGPU Ecosystem
There's a proposal for a "WebGPU compatibility mode" which also works on older devices:
WebGPU currently doesn't support the "bindless" resource access model (see: https://github.com/gpuweb/gpuweb/issues/380).
The "max number of sampled texture per shader stage" is a runtime device limit, and the minimal value for that seems to be 16. So texture atlasses are still a thing in WebGPU.
-
Why aren't we using highly efficient int8 calculations in quants? (maybe eli14?)
There's even an implementation under discussion to have the dp4a instruction added to WebGPU (https://github.com/gpuweb/gpuweb/issues/2677)
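For readers unfamiliar with dp4a: it computes a dot product of two vectors of four packed 8-bit integers and accumulates into a 32-bit result, in a single hardware instruction. A CPU-side sketch of the signed variant's semantics (spelled out loop-by-loop, purely for illustration):

```rust
// dp4a semantics (signed variant): treat each u32 as four packed i8
// lanes, multiply lane-wise, and add the four products to an i32
// accumulator. GPUs do this in one instruction.
fn dp4a(a: u32, b: u32, acc: i32) -> i32 {
    let mut sum = acc;
    for i in 0..4 {
        // Unpack byte i of each operand as a signed 8-bit lane.
        let ai = ((a >> (8 * i)) & 0xff) as i8 as i32;
        let bi = ((b >> (8 * i)) & 0xff) as i8 as i32;
        sum += ai * bi;
    }
    sum
}

fn main() {
    // Lanes of a: [1, 2, 3, 4]; lanes of b: [5, 6, 7, 8].
    // 1*5 + 2*6 + 3*7 + 4*8 = 70
    let a = u32::from_le_bytes([1, 2, 3, 4]);
    let b = u32::from_le_bytes([5, 6, 7, 8]);
    println!("{}", dp4a(a, b, 0));
}
```

This is exactly the inner loop of int8 quantized matrix multiplication, which is why exposing it in WebGPU matters for inference workloads.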
- WebGPU – All of the cores, none of the canvas
- [Rust_Gamedev] Is WGSL a good choice?
-
I want to talk about WebGPU
Shared memory, yes, with the goodies: atomics and barriers. We rely on that heavily in Vello, so we've pushed very hard on it. For example, WebGPU introduces the "workgroupUniformLoad" built-in, which lets you broadcast a value to all threads in the workgroup while not introducing potential unsafety.
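A rough CPU-side analogy of what that broadcast guarantees (using `std::sync::Barrier` as a stand-in for a workgroup barrier; this is an illustration of the semantics, not how GPU workgroups are implemented): one invocation writes shared memory, the whole group synchronizes, then every invocation reads back the same value.

```rust
use std::sync::{Arc, Barrier, Mutex};
use std::thread;

// One "invocation" (thread 0) writes a value to shared memory; the
// barrier makes the write visible to the whole group; then every
// thread performs the same load and observes the same value.
fn broadcast_to_workgroup(group_size: usize) -> Vec<u32> {
    let shared = Arc::new(Mutex::new(0u32)); // stand-in for workgroup memory
    let barrier = Arc::new(Barrier::new(group_size));
    let mut handles = Vec::new();
    for id in 0..group_size {
        let shared = Arc::clone(&shared);
        let barrier = Arc::clone(&barrier);
        handles.push(thread::spawn(move || {
            if id == 0 {
                *shared.lock().unwrap() = 42; // thread 0 produces the value
            }
            barrier.wait(); // like workgroupBarrier(): write now visible to all
            *shared.lock().unwrap() // uniform load: every thread sees 42
        }));
    }
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let results = broadcast_to_workgroup(8);
    println!("all {} threads observed {}", results.len(), results[0]);
}
```

The point of the built-in is that the compiler can then treat the loaded value as uniform across the workgroup, which a plain load after a barrier would not establish.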
Tensor cores: I can't say there are plans to add it, but it's certainly something I would like to see. You need subgroups in place first, and there's been quite a bit of discussion[1] on that as a likely extension post-1.0.
- Chrome ships WebGPU (available by default in Chrome 113)
What are some alternatives?
llama.cpp - LLM inference in C/C++
wgpu - Cross-platform, safe, pure-rust graphics api.
Rust-CUDA - Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.
onnxruntime-rs - Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
DiligentEngine - A modern cross-platform low-level graphics library and rendering framework
wgsl.vim - WGSL syntax highlight for vim
compute-shader-101 - Sample code for compute shader 101 training
bevy - A refreshingly simple data-driven game engine built in Rust
hash-shader - SHA256 WebGPU Compute Shader (Kernel) Written in Rust
pyodide - Pyodide is a Python distribution for the browser and Node.js based on WebAssembly
makepad - Makepad is a creative software development platform for Rust that compiles to wasm/webGL, osx/metal, windows/dx11, linux/opengl