autograph vs gda_compute
| | autograph | gda_compute |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 299 | 4 |
| Growth | - | - |
| Activity | 9.2 | 0.0 |
| Last commit | 27 days ago | about 3 years ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autograph
-
Where to Learn Vulkan for parallel computation (with references to porting from CUDA)
I'm working on a machine learning library, https://github.com/charles-r-earp/autograph, implemented in Rust. It uses rust-gpu to compile Rust compute shaders to SPIR-V, and then gfx_hal to target Metal and DX12. Training performance is currently about 2x slower than PyTorch (CUDA) on my laptop, but I've made significant progress recently and I'm targeting 1.5x. While rust-gpu has its own restrictions, it does support inline SPIR-V assembly, which gives direct access to operations not provided in its std lib, so it's lower level than GLSL. For example, it should be possible to target CUDA tensor cores via cooperative matrix operations (I believe Metal supports these as well, but this may not be implemented in spirv-cross and certainly isn't in naga). Once I have things a bit more stabilized I'd like to provide more examples, like porting from CUDA / OpenCL, but I'm still figuring out patterns like how to work with 16- and 8-bit types in a nice and portable way.
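As a rough illustration of the approach described above, here is a minimal sketch of a rust-gpu compute shader. It assumes the `spirv-std` crate and the rust-gpu build toolchain (it is compiled to SPIR-V, not run as host Rust, and the exact import path of the `spirv` attribute varies by rust-gpu version); the entry-point name and buffer bindings are illustrative, not taken from autograph.

```rust
// Illustrative rust-gpu compute shader: doubles each element of a
// storage buffer in place. Built with the rust-gpu codegen backend.
use spirv_std::spirv;
use spirv_std::glam::UVec3;

#[spirv(compute(threads(64)))]
pub fn scale(
    #[spirv(global_invocation_id)] id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] data: &mut [f32],
) {
    let i = id.x as usize;
    // Guard against out-of-range threads in the final workgroup.
    if i < data.len() {
        data[i] *= 2.0;
    }
}
```

The shader body is ordinary Rust, which is what allows lower-level extensions (such as inline SPIR-V assembly) to be mixed in where the std lib has no corresponding operation.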
-
autograph v0.1.0
-
What's the current state of GPU compute in rust?
Working on autograph, for machine learning and neural networks. Unlike CUDA / HIP it's thread-safe, but it doesn't expose low-level features like multiple streams. Most of the shaders are GLSL, but I'm now using rust-gpu for pure Rust GPU code.
-
Announcing neuronika 0.1.0, a deep learning framework in Rust
Maybe not for learning, but as inspiration I have to plug this amazing effort for ML with (Vulkan) shaders: https://github.com/charles-r-earp/autograph
-
What do you think about a library that helps reduce the overhead of GPU programming for n-dimensional arrays?
Maybe you'd be interested in checking out my library, https://github.com/charles-r-earp/autograph?
gda_compute
-
What do you think about a library that helps reduce the overhead of GPU programming for n-dimensional arrays?
As promised: here's the link to an early version of the project: https://github.com/nattube/gda_compute
What are some alternatives?
neuronika - Tensors and dynamic neural networks in pure Rust.
rust - Empowering everyone to build reliable and efficient software.
RustaCUDA - Rusty wrapper for the CUDA Driver API
rfcs - RFCs for changes to Rust
petgraph - Graph data structure library for Rust.
heim - Cross-platform async library for system information fetching 🦀
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
riscv-rust - RISC-V processor emulator written in Rust+WASM
VkFFT - Vulkan/CUDA/HIP/OpenCL/Level Zero/Metal Fast Fourier Transform library
tblis - TBLIS is a library and framework for performing tensor operations, especially tensor contraction, using efficient native algorithms.
juice - The Hacker's Machine Learning Engine
rbspy - Sampling CPU profiler for Ruby