onnxruntime-rs
tract
| | onnxruntime-rs | tract |
|---|---|---|
| Mentions | 2 | 20 |
| Stars | 261 | 2,046 |
| Growth | - | 2.7% |
| Activity | 0.0 | 9.8 |
| Latest commit | about 1 month ago | 7 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache 2.0/MIT |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
onnxruntime-rs
-
Deep Learning in Rust on GPU with onnxruntime-rs
I opened https://github.com/nbigaouette/onnxruntime-rs/pull/87, but the maintainer seems to be inactive. I sent an email.
-
Interesting results comparing TF and Rust
I have used the https://github.com/nbigaouette/onnxruntime-rs wrapper around the ONNX Runtime C++ library on a PyTorch model, and did not see any difference in GPU compute time between ONNX Python and ONNX Rust.
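For reference, a minimal sketch of running an exported ONNX model with the onnxruntime-rs crate, closely following its README. The file name `model.onnx` and the 1x3x224x224 input shape are placeholder assumptions; adjust both to your exported model.

```rust
use onnxruntime::{
    environment::Environment, ndarray::Array4, tensor::OrtOwnedTensor,
    GraphOptimizationLevel, LoggingLevel,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One Environment per process; sessions are created from it.
    let environment = Environment::builder()
        .with_name("demo")
        .with_log_level(LoggingLevel::Warning)
        .build()?;

    let mut session = environment
        .new_session_builder()?
        .with_optimization_level(GraphOptimizationLevel::Basic)?
        .with_model_from_file("model.onnx")?; // placeholder path

    // Dummy zero-filled input with an assumed image-like shape.
    let input = Array4::<f32>::zeros((1, 3, 224, 224));
    let outputs: Vec<OrtOwnedTensor<f32, _>> = session.run(vec![input])?;
    println!("output shape: {:?}", outputs[0].shape());
    Ok(())
}
```

GPU execution additionally requires building the crate against a CUDA-enabled ONNX Runtime; the session API itself is unchanged.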
tract
-
Are there any ML crates that would compile to WASM?
tract is the best-known ML inference crate in Rust, and I believe it can compile to WASM - https://github.com/sonos/tract/. Burn may also be useful - https://github.com/burn-rs/burn.
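A minimal tract sketch, adapted from the project's README: load an ONNX model, pin the input shape, optimize, and run. The `model.onnx` path and the 1x3x224x224 shape are placeholder assumptions.

```rust
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Load the ONNX graph, declare the input fact, optimize, make runnable.
    let model = tract_onnx::onnx()
        .model_for_path("model.onnx")? // placeholder path
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        .into_runnable()?;

    // Dummy zero-filled input of the declared shape.
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let result = model.run(tvec!(input.into()))?;
    println!("output shape: {:?}", result[0].shape());
    Ok(())
}
```

Because tract is pure Rust with no C++ runtime dependency, this same code path is what makes `wasm32` targets feasible.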
-
[Discussion] What crates would you like to see?
tract!!
-
tract VS burn - a user suggested alternative
2 projects | 25 Mar 2023
-
Machine Learning Inference Server in Rust?
We use tract for inference, integrated into our runtime and services.
-
Rust Native ML Frameworks?
onnxruntime
-
Neural networks - what crates to use?
Not for training, but for inference this looks nice: https://github.com/sonos/tract
-
Brain.js: GPU Accelerated Neural Networks in JavaScript
There's also tract, from Sonos[0]. 100% Rust.
I'm currently trying to use it to do speech recognition with a variant of the Conformer architecture (exported to ONNX).
The final goal is to do it in WASM client-side.
-
Serving ML at the Speed of Rust
As the article notes, there isn't any official Rust-native support for any common frameworks.
tract (https://github.com/sonos/tract) seems like the most mature for ONNX (for which TF/PT export is good nowadays), and recently it successfully implemented BERT.
-
Run deep neural network models from scratch
There are some DL libraries written in Rust: https://github.com/sonos/tract , https://docs.rs/neuronika/latest/neuronika/index.html . The second one could be used for training, I think.
What are some alternatives?
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
MTuner - MTuner is a C/C++ memory profiler and memory leak finder for Windows, PlayStation 4 and 3, Android and other platforms
deno - A modern runtime for JavaScript and TypeScript.
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
ort - A Rust wrapper for ONNX Runtime
ncurses-rs - A low-level ncurses wrapper for Rust
csl-mobile-bridge - React-native bindings for Emurgo's cardano-serialization-lib (Cardano haskell Shelley)
linfa - A Rust machine learning framework.
tractjs - Run ONNX and TensorFlow inference in the browser.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
blindai - Confidential AI deployment with secure enclaves :lock:
tangram - Tangram makes it easy for programmers to train, deploy, and monitor machine learning models.