Wonnx Alternatives
Similar projects and alternatives to wonnx
DirectML
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
UNN.js
Deep learning in JS. An alternative to TensorFlow and ConvNetJS that is 4x faster.
wonnx reviews and mentions
OpenXLA Is Available Now
You can indeed perform inference using WebGPU (see e.g. [1] for GPU-accelerated inference of ONNX models on WebGPU; I am one of the authors).
The point made above is that WebGPU can only be used for GPUs and not really for other types of 'neural accelerators' (like e.g. the ANE on Apple devices).
AMD ROCm: A Wasted Opportunity
I know of this framework https://github.com/webonnx/wonnx, but I've never used it.
How to Optimize a CUDA Matmul Kernel for CuBLAS-Like Performance: A Worklog
I am curious about doing the same kind of thing for compute shaders. I'm aware of Kompute.cc (which is Vulkan based) but haven't looked at their GEMM kernels, and also of wonnx for WebGPU ([1] is their GEMM code).
I'm also curious whether warp shuffle operations might be useful to reduce some of the shared memory traffic.
[1]: https://github.com/webonnx/wonnx/blob/master/wonnx/templates...
GPU to good use
Brain.js: GPU Accelerated Neural Networks in JavaScript
Thanks! It looks like the wonnx CLI itself falls back to tract to do inference on CPU if a GPU is not available[0]. It also sounds like setting up llvmpipe/lavapipe on WASM is much harder (if not impossible?) than just shipping tract, so the approach I'll take is probably a combined wonnx+tract approach.
[0] https://github.com/webonnx/wonnx/issues/116#issuecomment-114...
WebGPU – All of the cores, none of the canvas – surma.dev
Already on it! https://github.com/webonnx/wonnx - runs ONNX neural networks on top of WebGPU (in the browser, or using wgpu on top of Vulkan/DX12/..)
WONNX: Deep Learning on WebGPU using the ONNX format.
Wonnx is my first crate; it aims to do fast deep learning on any GPU. It leverages wgpu for WebGPU computation and is written entirely in Rust. Git: https://github.com/haixuanTao/wonnx
Stats
webonnx/wonnx is an open-source project licensed under the MIT License, an OSI-approved license.