smarty_pants vs autograph
| | smarty_pants | autograph |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 3 | 299 |
| Growth | - | - |
| Activity | 0.0 | 9.2 |
| Latest commit | about 2 years ago | 27 days ago |
| Language | Rust | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
smarty_pants
-
I wrote a Neural Network library.
The idea is that you can simply add this crate to your project and easily train a neural network for it. The library supports creating, training, parsing, and running networks, and it may gain more functionality in the future. As it stands, it's quite small and fast: in the example program, five `NeuralNetwork`s take only nanoseconds to train for 1000 generations. I've tried to make sure that it is "complete": I've documented nearly every function, method, and struct, written an example project, and tried to make it relatively easy to use.
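The post describes generation-based training of small networks. The sketch below is NOT the smarty_pants API (all names here are made up for illustration); it is a self-contained, dependency-free example of the general idea: mutate a tiny network's weights each generation and keep the candidate with the lowest loss.

```rust
// Illustrative only: these types are hypothetical, not the smarty_pants API.
struct TinyNet {
    weights: [f64; 2],
    bias: f64,
}

impl TinyNet {
    // Run the network on one input (a single linear unit, for simplicity).
    fn run(&self, input: [f64; 2]) -> f64 {
        self.weights[0] * input[0] + self.weights[1] * input[1] + self.bias
    }
}

// Mean of squared errors over a small training set.
fn loss(net: &TinyNet, data: &[([f64; 2], f64)]) -> f64 {
    data.iter().map(|(x, y)| (net.run(*x) - y).powi(2)).sum()
}

fn main() {
    // Target function: output 1.0 when either input is 1.0.
    let data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0)];
    let mut best = TinyNet { weights: [0.0, 0.0], bias: 0.0 };
    let mut best_loss = loss(&best, &data);

    // Tiny deterministic pseudo-random generator so no crates are needed.
    let mut seed: u64 = 42;
    let mut rand = move || {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((seed >> 33) as f64 / (1u64 << 31) as f64) - 1.0 // roughly [-1, 1)
    };

    // "Train" for 1000 generations by keeping the best mutated candidate.
    for _generation in 0..1000 {
        let candidate = TinyNet {
            weights: [
                best.weights[0] + 0.1 * rand(),
                best.weights[1] + 0.1 * rand(),
            ],
            bias: best.bias + 0.1 * rand(),
        };
        let l = loss(&candidate, &data);
        if l < best_loss {
            best = candidate;
            best_loss = l;
        }
    }
    println!("final loss: {best_loss:.4}");
}
```

This hill-climbing loop is one simple way to train without backpropagation; a real library would expose configurable layer sizes, activation functions, and serialization on top of a loop like this.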
autograph
-
Where to Learn Vulkan for parallel computation (with references to porting from CUDA)
I'm working on a machine learning library, https://github.com/charles-r-earp/autograph, implemented in Rust. It uses rust-gpu to compile Rust compute shaders to SPIR-V, and then gfx-hal to target Metal and DX12. Training performance is currently about 2x slower than PyTorch (CUDA) on my laptop, but I've made significant progress recently and I'm targeting 1.5x. While rust-gpu has its own restrictions, it does support inline SPIR-V assembly, which provides direct access to operations not exposed in its standard library, so it's lower level than GLSL. For example, it should be possible to target CUDA tensor cores via cooperative matrix operations (I believe Metal supports these as well, but this may not be implemented in spirv-cross and certainly isn't in naga). Once I have things a bit more stabilized I'd like to provide more examples, like porting from CUDA / OpenCL, but I'm still figuring out patterns like how to work with 16- and 8-bit types in a nice, portable way.
-
autograph v0.1.0
-
What's the current state of GPU compute in rust?
Working on autograph, for machine learning and neural networks. Unlike CUDA / HIP it's thread-safe, but it doesn't expose low-level features like multiple streams. Most of the shaders are GLSL, but I'm now using rust-gpu for pure-Rust GPU code.
-
Announcing neuronika 0.1.0, a deep learning framework in Rust
Maybe not for learning but as inspiration I have to plug this amazing effort for ML with (vulkan) shaders: https://github.com/charles-r-earp/autograph
-
What do you think about a library that helps reduce the overhead of GPU programming with n-dimensional arrays?
Maybe you'd be interested in checking out my library, https://github.com/charles-r-earp/autograph?
What are some alternatives?
crates.io - The Rust package registry
neuronika - Tensors and dynamic neural networks in pure Rust.
RustaCUDA - Rusty wrapper for the CUDA Driver API
petgraph - Graph data structure library for Rust.
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
VkFFT - Vulkan/CUDA/HIP/OpenCL/Level Zero/Metal Fast Fourier Transform library
juice - The Hacker's Machine Learning Engine
ocl - OpenCL for Rust
tractjs - Run ONNX and TensorFlow inference in the browser.
vuda - VUDA is a header-only library based on Vulkan that provides a CUDA Runtime API interface for writing GPU-accelerated applications.
are-we-learning-yet - How ready is Rust for Machine Learning?
blub - 3D fluid simulation experiments in Rust, using WebGPU-rs (WIP)