tractjs vs onnxruntime-rs

| | tractjs | onnxruntime-rs |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 75 | 287 |
| Growth | - | 0.7% |
| Activity | 0.0 | 0.0 |
| Latest commit | about 2 years ago | about 1 year ago |
| Language | Rust | Rust |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tractjs
- Run WASM, a client-side Python runtime
TensorFlow (and by extension Keras) offloads most of the actual work to C++ or C, so getting those to compile to WebAssembly would (I imagine) be a herculean effort.
Instead, the TF team maintains TFJS, which can run on WebAssembly [0].
There are also tractjs [1] and onnxjs [2], both of which let you run (most) ONNX models (ONNX is an open standard for specifying ML models) using WebAssembly and WebGL (only onnxjs supports WebGL); a sketch of the underlying Rust API follows the links below. A bunch of frameworks (Caffe, PyTorch, TF) support exporting to/importing from ONNX.
[0] https://blog.tensorflow.org/2020/03/introducing-webassembly-...
[1] https://github.com/bminixhofer/tractjs
[2] https://github.com/microsoft/onnxjs
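tractjs is a JavaScript/WebAssembly wrapper around the Rust tract crate, so the underlying inference pipeline looks roughly like the sketch below. This is a minimal sketch modeled on the tract README; the "model.onnx" path and the 1x3x224x224 input shape are placeholder assumptions, and exact method names may differ between tract versions.

```rust
// Minimal ONNX inference with the tract crate (the engine behind tractjs).
// "model.onnx" and the 1x3x224x224 input shape are placeholder assumptions.
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    let model = tract_onnx::onnx()
        // Load the ONNX model from disk.
        .model_for_path("model.onnx")?
        // Declare the input type and shape so tract can optimize the graph.
        .with_input_fact(0, InferenceFact::dt_shape(f32::datum_type(), tvec!(1, 3, 224, 224)))?
        .into_optimized()?
        .into_runnable()?;

    // Dummy all-zero input tensor with the declared shape.
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();

    // Run inference; `result` holds one tensor per model output.
    let result = model.run(tvec!(input.into()))?;
    println!("output shape: {:?}", result[0].shape());
    Ok(())
}
```

tractjs exposes this same load/optimize/run pipeline from JavaScript, with the compiled WASM module doing the numeric work.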
onnxruntime-rs
- Deep Learning in Rust on GPU with onnxruntime-rs
I did: https://github.com/nbigaouette/onnxruntime-rs/pull/87, but the maintainer seems to be inactive. I sent an email.
- Interesting results comparing TF and Rust
I have used https://github.com/nbigaouette/onnxruntime-rs, a Rust wrapper around the ONNX Runtime C++ library, with a PyTorch model, and did not see any difference in GPU compute time between ONNX in Python and ONNX in Rust.
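For context, basic inference through onnxruntime-rs looks roughly like the sketch below, adapted from the crate's README. The "model.onnx" path, the 1x3x224x224 input shape, and the builder options are assumptions; GPU execution providers were what the pull request mentioned above aimed to add, so this sketch only shows the CPU path.

```rust
// Minimal CPU inference with onnxruntime-rs, adapted from the crate README.
// "model.onnx" and the 1x3x224x224 input shape are placeholder assumptions.
use onnxruntime::{
    environment::Environment, ndarray::Array4, tensor::OrtOwnedTensor,
    GraphOptimizationLevel, LoggingLevel,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One Environment per process; sessions are created from it.
    let environment = Environment::builder()
        .with_name("onnx-example")
        .with_log_level(LoggingLevel::Warning)
        .build()?;

    let mut session = environment
        .new_session_builder()?
        .with_optimization_level(GraphOptimizationLevel::Basic)?
        .with_number_threads(1)?
        .with_model_from_file("model.onnx")?;

    // Dummy all-zero input matching the model's expected shape.
    let input = Array4::<f32>::zeros((1, 3, 224, 224));

    // Run the model; outputs are tensors owned by the session.
    let outputs: Vec<OrtOwnedTensor<f32, _>> = session.run(vec![input])?;
    println!("output shape: {:?}", outputs[0].shape());
    Ok(())
}
```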
What are some alternatives?
neuronika - Tensors and dynamic neural networks in pure Rust.
tract - Tiny, no-nonsense, self-contained, Tensorflow and ONNX inference
kosmonaut - A web browser engine for the space age 🚀
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
ort - Fast ML inference & training for ONNX models in Rust