| | onnxruntime-rs | tract |
|---|---|---|
| Mentions | 2 | 20 |
| Stars | 284 | 2,291 |
| Growth | 0.4% | 1.9% |
| Activity | 0.0 | 9.9 |
| Latest Commit | 11 months ago | 7 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache 2.0/MIT |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
onnxruntime-rs
-
Deep Learning in Rust on GPU with onnxruntime-rs
I opened https://github.com/nbigaouette/onnxruntime-rs/pull/87, but the maintainer seems to be inactive. I sent an email.
-
Interesting results comparing TF and Rust
I have used onnxruntime-rs (https://github.com/nbigaouette/onnxruntime-rs), a wrapper around the ONNX Runtime C++ library, on a PyTorch model, and did not see any difference in GPU compute time between ONNX in Python and ONNX in Rust.
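For context on what using onnxruntime-rs looks like, here is a minimal sketch along the lines of its README: build an environment, load a model, run a dummy input. The `model.onnx` path and the 1x3x224x224 input shape are illustrative placeholders, not from the posts above.

```rust
// Sketch based on the shape of the onnxruntime-rs README example.
// "model.onnx" and the input dimensions are hypothetical placeholders.
use onnxruntime::{
    environment::Environment, ndarray::Array4, tensor::OrtOwnedTensor,
    GraphOptimizationLevel, LoggingLevel,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One Environment per process; sessions are created from it.
    let environment = Environment::builder()
        .with_name("demo")
        .with_log_level(LoggingLevel::Warning)
        .build()?;

    let mut session = environment
        .new_session_builder()?
        .with_optimization_level(GraphOptimizationLevel::Basic)?
        .with_model_from_file("model.onnx")?;

    // All-zeros dummy input just to exercise the forward pass.
    let input = Array4::<f32>::zeros((1, 3, 224, 224));
    let outputs: Vec<OrtOwnedTensor<f32, _>> = session.run(vec![input])?;
    println!("first output shape: {:?}", outputs[0].shape());
    Ok(())
}
```

Running this on GPU additionally requires building the crate against a CUDA-enabled ONNX Runtime, which is a build-time concern rather than an API change.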
tract
-
Are there any ML crates that would compile to WASM?
Tract is the most well known ML crate in Rust, which I believe can compile to WASM - https://github.com/sonos/tract/. Burn may also be useful - https://github.com/burn-rs/burn.
-
[Discussion] What crates would you like to see?
tract!!
-
tract VS burn - a user-suggested alternative
2 projects | 25 Mar 2023
-
Machine Learning Inference Server in Rust?
we use tract for inference, integrated into our runtime and services.
- onnxruntime
- Rust Native ML Frameworks?
-
Neural networks - what crates to use?
Not for training, but for inference this looks nice: https://github.com/sonos/tract
-
Brain.js: GPU Accelerated Neural Networks in JavaScript
There's also tract, from Sonos[0]. 100% Rust.
I'm currently trying to use it to do speech recognition with a variant of the Conformer architecture (exported to ONNX).
The final goal is to do it in WASM client-side.
[0] https://github.com/sonos/tract
-
Serving ML at the Speed of Rust
As the article notes, there isn't any official Rust-native support for any common frameworks.
tract (https://github.com/sonos/tract) seems like the most mature for ONNX (for which TF/PT export is good nowadays), and recently it successfully implemented BERT.
-
Run deep neural network models from scratch
There are some DL libraries written in Rust: https://github.com/sonos/tract , https://docs.rs/neuronika/latest/neuronika/index.html . The second one could be used for training, I think.
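The tract mentions above all boil down to the same inference pattern: load an ONNX model, declare the input shape, optimize, then run. A minimal sketch following the tract README, assuming a hypothetical `model.onnx` with a single 1x3x224x224 f32 input:

```rust
// Sketch following the tract README's ONNX example.
// "model.onnx" and the input shape are illustrative placeholders.
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    let model = tract_onnx::onnx()
        // Load the ONNX graph from disk.
        .model_for_path("model.onnx")?
        // Declare input 0's type and shape so tract can optimize the graph.
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        // Turn the optimized plan into a runnable model.
        .into_runnable()?;

    // Run on an all-zeros dummy input.
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let result = model.run(tvec!(input.into()))?;
    println!("first output: {:?}", result[0]);
    Ok(())
}
```

Because tract is pure Rust with no C++ runtime dependency, this same code path is what makes the WASM deployments discussed above feasible.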
What are some alternatives?
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
ort - Fast ML inference & training for ONNX models in Rust
tractjs - Run ONNX and TensorFlow inference in the browser.
MTuner - MTuner is a C/C++ memory profiler and memory leak finder for Windows, PlayStation 3/4/5, Nintendo Switch, Android and other platforms
blindai - Confidential AI deployment with secure enclaves :lock:
bevy_webgl2 - WebGL2 renderer plugin for Bevy game engine
steelix - Your one stop CLI for ONNX model analysis.
linfa - A Rust machine learning framework.
deno - A modern runtime for JavaScript and TypeScript.
gamma - Computational graphs with reverse automatic differentiation on the GPU