wonnx vs onnxruntime

| | wonnx | onnxruntime |
|---|---|---|
| Mentions | 18 | 55 |
| Stars | 1,503 | 12,960 |
| Stars growth (month over month) | 5.3% | 4.4% |
| Activity | 6.3 | 10.0 |
| Latest commit | about 2 months ago | about 9 hours ago |
| Language | Rust | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wonnx
-
Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
The two I know of are IREE and Kompute[1]. I'm not sure how much momentum the latter has; I don't see it referenced much. There's also a growing body of work that uses Vulkan indirectly through WebGPU. This currently lags in performance due to the lack of subgroups and cooperative matrix multiply, but I see that gap closing. There I think wonnx[2] has the most momentum, but I am aware of other efforts.
[1]: https://kompute.cc/
[2]: https://github.com/webonnx/wonnx
-
VkFFT: Vulkan/CUDA/Hip/OpenCL/Level Zero/Metal Fast Fourier Transform Library
To a first approximation, Kompute[1] is that. It doesn't seem to be catching on; I'm seeing more buzz around WebGPU solutions, including wonnx[2] and more hand-rolled approaches, and IREE[3], the latter of which has a Vulkan back-end.
[1]: https://kompute.cc/
[2]: https://github.com/webonnx/wonnx
[3]: https://github.com/openxla/iree
-
Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
There's also a third-party WebGPU implementation: https://github.com/webonnx/wonnx
-
Are there any ML crates that would compile to WASM?
By experimental I meant e.g. using WGPU to run compute shaders like wonnx, which is working fine but only on a very restricted set of devices and browsers.
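To make the compute-shader path concrete, here is a minimal native sketch of the kind of WGPU dispatch wonnx builds on. It assumes wgpu ≈0.19 plus the pollster and bytemuck helper crates (none of which the comment above names, and wgpu's API shifts between releases); it doubles a buffer of floats with a WGSL shader:

```rust
use wgpu::util::DeviceExt;

// WGSL shader: double each element of a storage buffer.
const SHADER: &str = r#"
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    data[id.x] = data[id.x] * 2.0;
}
"#;

async fn run() {
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no WebGPU adapter");
    let (device, queue) = adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .expect("failed to create device");

    let module = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("double"),
        source: wgpu::ShaderSource::Wgsl(SHADER.into()),
    });

    let input: Vec<f32> = (0..64).map(|i| i as f32).collect();
    let buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: Some("data"),
        contents: bytemuck::cast_slice(&input),
        // COPY_SRC so the result can later be copied to a mappable
        // staging buffer and read back with map_async.
        usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
    });

    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: Some("double"),
        layout: None, // infer the pipeline layout from the shader
        module: &module,
        entry_point: "main",
    });
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &pipeline.get_bind_group_layout(0),
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: buffer.as_entire_binding(),
        }],
    });

    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor {
            label: Some("double"),
            timestamp_writes: None,
        });
        pass.set_pipeline(&pipeline);
        pass.set_bind_group(0, &bind_group, &[]);
        pass.dispatch_workgroups(1, 1, 1); // 64 invocations in one workgroup
    }
    queue.submit(Some(encoder.finish()));
}

fn main() {
    pollster::block_on(run());
}
```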
- WebGPU ONNX inference runtime written in Rust
-
PyTorch Primitives in WebGPU for the Browser
https://news.ycombinator.com/item?id=35696031 ... TIL about wonnx: https://github.com/webonnx/wonnx#in-the-browser-using-webgpu...
microsoft/onnxruntime: https://github.com/microsoft/onnxruntime
Apache/arrow has language-portable Tensors for cpp: https://arrow.apache.org/docs/cpp/api/tensor.html and rust: https://docs.rs/arrow/latest/arrow/tensor/struct.Tensor.html and Python: https://arrow.apache.org/docs/python/api/tables.html#tensors https://arrow.apache.org/docs/python/generated/pyarrow.Tenso...
FWIW, it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with Emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
Are there recommendable ways to cast e.g. Arrow Tensors to PyTorch/TensorFlow?
FWIU, Rust has a better compilation story for WASM; and that's probably faster than already-compiled-to-JS/ES TensorFlow + WebGPU.
What's a fair benchmark?
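On the Arrow side, creating such a tensor from Rust looks roughly like the minimal sketch below, written against the arrow crate's Tensor API linked above (method names per its docs.rs page; treat the details as assumptions, since the API evolves). Handing the result to PyTorch/TensorFlow is typically done by passing the underlying buffer zero-copy (e.g. via Arrow's C Data Interface) rather than by a direct cast:

```rust
use arrow::buffer::Buffer;
use arrow::error::ArrowError;
use arrow::tensor::Float32Tensor;

fn main() -> Result<(), ArrowError> {
    // Six f32 values backing a 2x3 row-major tensor.
    let data: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let buffer = Buffer::from_slice_ref(&data);

    let tensor = Float32Tensor::try_new(
        buffer,
        Some(vec![2, 3]),         // shape
        None,                     // strides: None => row-major
        Some(vec!["row", "col"]), // dimension names
    )?;

    assert_eq!(tensor.shape(), Some(&vec![2, 3]));
    Ok(())
}
```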
-
rustformers/llm: Run inference for Large Language Models on CPU, with Rust 🦀🚀🦙
wonnx has done some fantastic work in this regard, so that's where we plan to start once we get there. In terms of general discussion of alternate backends, see this issue.
-
I want to talk about WebGPU
> GPU in other ways, such as training ML models and then using them via an inference engine all powered by your local GPU?
Have a look at wonnx: https://github.com/webonnx/wonnx
A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
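As an illustration of what that looks like from Rust, here is a minimal sketch along the lines of wonnx's README example. The model path and the input name "x" are placeholders, and blocking on the async API with the pollster crate is an assumption of this sketch:

```rust
use std::collections::HashMap;

// Sketch of wonnx's async Session API (per its README); input and
// output names depend on the ONNX model you load.
async fn run_model() -> Result<(), Box<dyn std::error::Error>> {
    let session = wonnx::Session::from_path("model.onnx").await?;

    let data: Vec<f32> = vec![-1.0, 0.0, 1.0, 2.0];
    let mut inputs = HashMap::new();
    inputs.insert("x".to_string(), data.as_slice().into());

    let outputs = session.run(&inputs).await?; // map of output tensors
    println!("outputs: {:?}", outputs.keys());
    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    pollster::block_on(run_model())
}
```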
-
Chrome Ships WebGPU
Looking forward to your WebGPU ML runtime! Also, why not contribute back to WONNX? (https://github.com/webonnx/wonnx)
-
OpenXLA Is Available Now
You can indeed perform inference using WebGPU (see e.g. [1] for GPU-accelerated inference of ONNX models on WebGPU; I am one of the authors).
The point made above is that WebGPU can only be used for GPUs and not really for other types of 'neural accelerators' (like e.g. the ANE on Apple devices).
[1] https://github.com/webonnx/wonnx
onnxruntime
-
Machine Learning with PHP
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
-
AI Inference now available in Supabase Edge Functions
Embedding generation uses the ONNX runtime under the hood. This is a cross-platform inferencing library that supports multiple execution providers from CPU to specialized GPUs.
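For a sense of how that execution-provider selection looks in code, here is a hedged sketch using the unofficial `ort` Rust bindings to ONNX Runtime; the `ort` crate is an assumption of this example rather than something the comment above mentions, and method names track its 2.x builder API, which may differ between versions:

```rust
use ort::execution_providers::{CPUExecutionProvider, CUDAExecutionProvider};
use ort::session::Session;

fn main() -> ort::Result<()> {
    // Try CUDA first and fall back to the CPU provider; ONNX Runtime
    // registers the first execution provider that is actually available.
    let session = Session::builder()?
        .with_execution_providers([
            CUDAExecutionProvider::default().build(),
            CPUExecutionProvider::default().build(),
        ])?
        .commit_from_file("model.onnx")?;

    for input in &session.inputs {
        println!("model input: {}", input.name);
    }
    Ok(())
}
```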
-
Deep Learning in JavaScript
tfjs is dead, looking at the commit history. The standard now is to convert PyTorch to ONNX, then use onnxruntime (https://github.com/microsoft/onnxruntime/tree/main/js/web) to run the model in the browser.
- FLaNK Stack 05 Feb 2024
-
Vcc – The Vulkan Clang Compiler
- slang[2] has the potential, but its metaprogramming is not as strong as C++'s, and existing libraries cannot be used.
The above conclusion is drawn from my work on https://github.com/microsoft/onnxruntime/tree/dev/opencl; it was a pure nightmare to work with those drivers and JIT compilers. Hopefully Vcc can take compute shaders more seriously.
[1]: https://www.circle-lang.org/
-
Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
I did. It depends what you want: for an overview of how ONNX Runtime works, Microsoft has a bunch of things on https://onnxruntime.ai, but the Java content there is a bit lacking, as I've not had time to write much. Eventually I'll probably write something similar to the C# SD tutorial they have on there, but for the Java API.
For writing ONNX models from Java we added an ONNX export system to Tribuo in 2022 which can be used by anything on the JVM to export ONNX models in an easier way than writing a protobuf directly. Tribuo doesn't have full coverage of the ONNX spec, but we're happy to accept PRs to expand it, otherwise it'll fill out as we need it.
- Mamba-Chat: A Chat LLM based on State Space Models
-
VectorDB: Vector Database Built by Kagi Search
What about models besides GPT? Most of the popular vector encoding models aren't using this architecture.
If you really didn't want PyTorch/Transformers, you could consider exporting your models to ONNX (https://github.com/microsoft/onnxruntime).
- ONNX runtime: Cross-platform accelerated machine learning
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
What are some alternatives?
stablehlo - Backward compatible ML compute opset inspired by HLO/MHLO
onnx - Open standard for machine learning interoperability
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
tract - Tiny, no-nonsense, self-contained, Tensorflow and ONNX inference
onnx-simplifier - Simplify your onnx model
iree - A retargetable MLIR-based machine learning compiler and runtime toolkit.
ONNX-YOLOv7-Object-Detection - Python scripts performing object detection using the YOLOv7 model in ONNX.
burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
onnx-tensorflow - Tensorflow Backend for ONNX
blaze - A Rustified OpenCL Experience
MLflow - Open source platform for the machine learning lifecycle