| | wgpu-mm | burn |
|---|---|---|
| Mentions | 1 | 9 |
| Stars | 50 | 7,169 |
| Growth | - | 6.3% |
| Activity | 8.7 | 9.8 |
| Last commit | about 2 months ago | 4 days ago |
| Language | WGSL | Rust |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wgpu-mm
Chrome Ships WebGPU
This is very exciting! (I had suspected it would slip to 114)
WebGPU implementations are still pretty immature, but certainly enough to get started with. I've been implementing a Rust + WebGPU ML runtime for the past few months and have enjoyed writing WGSL.
I recently got a 250M parameter LLM running in the browser without much optimisation and it performs pretty well! (https://twitter.com/fleetwood___/status/1638469392794091520)
That said, matmuls are still pretty handicapped in the browser. From my benchmarking I've struggled to hit 50% of theoretical FLOPS, and that drops to around 30% once the browser's enforced bounds checking kicks in. (Benchmarks here: https://github.com/FL33TW00D/wgpu-mm)
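For readers unfamiliar with how "percent of theoretical FLOPS" figures like these are computed, here is a minimal sketch of the arithmetic in Rust. The dimensions, timing, and peak-throughput figure are made up for illustration; they are not taken from the wgpu-mm benchmarks.

```rust
// Rough sketch of matmul throughput accounting. A dense M×N×K matmul
// performs one multiply and one add per (m, n, k) triple, i.e. 2·M·N·K
// floating-point operations.
fn achieved_gflops(m: u64, n: u64, k: u64, elapsed_secs: f64) -> f64 {
    let flops = 2 * m * n * k;
    flops as f64 / elapsed_secs / 1e9
}

fn main() {
    // Hypothetical run: a 1024×1024×1024 matmul finishing in 1 ms.
    let gflops = achieved_gflops(1024, 1024, 1024, 1e-3);
    println!("{gflops:.1} GFLOPS");

    // Fraction of a hypothetical 10 TFLOPS device peak.
    println!("{:.0}% of peak", gflops / 10_000.0 * 100.0);
}
```

Utilisation is then just achieved GFLOPS divided by the device's advertised peak, which is why extra per-access bounds checks show up directly as a lower percentage.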
I look forward to accessing shader cores as they mentioned in the post.
burn
3 years of fulltime Rust game development, and why we're leaving Rust behind
You can use libtorch directly via `tch-rs`, and at present I'm porting over to Burn (see https://burn.dev), which looks incredibly promising. My impression is that it's in a good place, though of course nowhere near the ecosystem of Python/C++. At the very least I've gotten my NN models training and running without too much difficulty. (I'm moving to Burn for the thread safety: their `Tensor` impl is `Sync`, a guarantee libtorch doesn't offer.)
Burn has Candle as one of its backends, which I understand is also quite popular.
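The thread-safety point above can be sketched in plain Rust. `FakeTensor` below is a stand-in, not Burn's actual API; the point is only what a `Sync` bound buys you: shared references can cross thread boundaries, and the compiler rejects the program if they can't.

```rust
use std::sync::Arc;
use std::thread;

// Stand-in for a tensor type (hypothetical; Burn's real `Tensor` is far
// more involved, but it implements `Sync`, which is the guarantee at issue).
#[derive(Debug)]
struct FakeTensor {
    data: Vec<f32>,
}

// Compile-time check: only accepts `Sync` types. If `FakeTensor` held an
// `Rc` or a raw pointer, this call would fail to compile -- exactly the
// guarantee a C++ handle type like libtorch's cannot give you.
fn assert_sync<T: Sync>(_: &T) {}

fn main() {
    let t = Arc::new(FakeTensor { data: vec![1.0, 2.0, 3.0] });
    assert_sync(&t);

    // Two threads reading the same tensor concurrently.
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let t = Arc::clone(&t);
            thread::spawn(move || t.data.iter().sum::<f32>())
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 6.0);
    }
}
```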
- Burn: Deep Learning Framework built using Rust
Transitioning From PyTorch to Burn
```toml
[package]
name = "resnet_burn"
version = "0.1.0"
edition = "2021"

[dependencies]
burn = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11", features = ["ndarray"] }
burn-import = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11" }
image = { version = "0.24.7", features = ["png", "jpeg"] }
```
- Burn Deep Learning Framework Release 0.12.0 Improved API and PyTorch Integration
Supercharge Web AI Model Testing: WebGPU, WebGL, and Headless Chrome
Great!
For the Burn project, we have a WebGPU example, and I was looking into how we could add automated tests in the browser. Now it seems possible.
Here is the image classification example if you'd like to check out:
https://github.com/tracel-ai/burn/tree/main/examples/image-c...
Burn Deep Learning Framework 0.11.0 Released: Just-in-Time Automatic Kernel Fusion & Founding Announcement
Full Release Note: https://github.com/tracel-ai/burn/releases/tag/v0.11.0
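The "kernel fusion" in the release title can be illustrated with a CPU sketch (conceptual only; Burn's JIT fusion builds fused GPU kernels at runtime): instead of executing each elementwise op as its own pass over memory, a fused kernel does all the ops in a single pass.

```rust
// Unfused: each elementwise op is its own pass, materialising an
// intermediate buffer between them.
fn unfused(x: &[f32]) -> Vec<f32> {
    let doubled: Vec<f32> = x.iter().map(|v| v * 2.0).collect(); // pass 1
    doubled.iter().map(|v| v + 1.0).collect()                    // pass 2
}

// Fused: one pass, no intermediate buffer. The win is memory traffic
// and kernel-launch overhead, not arithmetic -- the same ops run either way.
fn fused(x: &[f32]) -> Vec<f32> {
    x.iter().map(|v| v * 2.0 + 1.0).collect()
}

fn main() {
    let x = [1.0, 2.0, 3.0];
    assert_eq!(unfused(&x), fused(&x));
    println!("{:?}", fused(&x)); // [3.0, 5.0, 7.0]
}
```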
- Burn Deep Learning Framework v0.11.0 Released: Just-in-Time Kernel Fusion
- Burn – comprehensive dynamic Deep Learning Framework built using Rust
- Burn: Deep Learning Framework in Rust
What are some alternatives?
SHA256-WebGPU - Implementation of sha256 in WGSL
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
stablehlo - Backward compatible ML compute opset inspired by HLO/MHLO
candle - Minimalist ML framework for Rust
wgpu-py - Next generation GPU API for Python
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
pygfx - A python render engine running on wgpu.
tch-rs - Rust bindings for the C++ API of PyTorch.
tfjs - A WebGL accelerated JavaScript library for training and deploying ML models.
rust-mlops-template - A work in progress to build out solutions in Rust for MLOPs
webgpu-blas - Fast matrix-matrix multiplication on web browser using WebGPU
llama2.rs - A fast llama2 decoder in pure Rust.