wonnx vs SHARK
| | wonnx | SHARK |
| --- | --- | --- |
| Mentions | 18 | 84 |
| Stars | 1,487 | 1,381 |
| Growth | 6.8% | 4.1% |
| Activity | 6.5 | 9.6 |
| Latest commit | 26 days ago | 7 days ago |
| Language | Rust | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wonnx
-
Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
The two I know of are IREE and Kompute[1]. I'm not sure how much momentum the latter has; I don't see it referenced much. There's also a growing body of work that uses Vulkan indirectly through WebGPU. This currently lags in performance due to the lack of subgroups and cooperative matrix multiply, but I see that gap closing. There I think wonnx[2] has the most momentum, but I am aware of other efforts.
[1]: https://kompute.cc/
[2]: https://github.com/webonnx/wonnx
-
VkFFT: Vulkan/CUDA/Hip/OpenCL/Level Zero/Metal Fast Fourier Transform Library
To a first approximation, Kompute[1] is that. It doesn't seem to be catching on; I'm seeing more buzz around WebGPU solutions, including wonnx[2] and more hand-rolled approaches, and IREE[3], the last of which has a Vulkan back-end.
[1]: https://kompute.cc/
[2]: https://github.com/webonnx/wonnx
[3]: https://github.com/openxla/iree
-
Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
There's also a third-party WebGPU implementation: https://github.com/webonnx/wonnx
-
Are there any ML crates that would compile to WASM?
By experimental I meant e.g. using WGPU to run compute shaders, as wonnx does, which works fine but only on a very restricted set of devices and browsers.
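To make the WGPU approach above concrete, here is a minimal sketch of dispatching a WGSL compute shader from Rust with the wgpu crate (the same crate wonnx builds on). This is a sketch, not wonnx's actual code: it assumes a wgpu 0.19-era API plus the bytemuck crate for byte-casting, and it omits the staging-buffer readback for brevity. Treat exact descriptor fields as version-dependent.

```rust
use wgpu::util::DeviceExt;

// WGSL: double each element of a storage buffer in place.
const SHADER: &str = r#"
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    if (id.x < arrayLength(&data)) {
        data[id.x] = data[id.x] * 2.0;
    }
}
"#;

async fn run() {
    // Adapter/device acquisition is where the "restricted set of devices
    // and browsers" bites: request_adapter yields None without WebGPU support.
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no WebGPU-capable adapter found");
    let (device, queue) = adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .expect("device request failed");

    // Upload input data into a GPU storage buffer.
    let input = [1.0f32, 2.0, 3.0, 4.0];
    let buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: Some("data"),
        contents: bytemuck::cast_slice(&input),
        usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
    });

    // Compile the WGSL and build a compute pipeline around its entry point.
    let module = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: None,
        source: wgpu::ShaderSource::Wgsl(SHADER.into()),
    });
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: None,
        layout: None, // infer the bind group layout from the shader
        module: &module,
        entry_point: "main",
    });
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &pipeline.get_bind_group_layout(0),
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: buffer.as_entire_binding(),
        }],
    });

    // Record and submit a single dispatch; readback is omitted here.
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
        pass.set_pipeline(&pipeline);
        pass.set_bind_group(0, &bind_group, &[]);
        pass.dispatch_workgroups(1, 1, 1);
    }
    queue.submit(Some(encoder.finish()));
}
```

On native this can be driven with something like pollster's block_on; in the browser the same async code runs under wasm-bindgen futures, which is exactly the portability the WebGPU-based projects here exploit.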
- WebGPU ONNX inference runtime written in Rust
-
PyTorch Primitives in WebGPU for the Browser
https://news.ycombinator.com/item?id=35696031 ... TIL about wonnx: https://github.com/webonnx/wonnx#in-the-browser-using-webgpu...
microsoft/onnxruntime: https://github.com/microsoft/onnxruntime
Apache/arrow has language-portable Tensors for cpp: https://arrow.apache.org/docs/cpp/api/tensor.html and rust: https://docs.rs/arrow/latest/arrow/tensor/struct.Tensor.html and Python: https://arrow.apache.org/docs/python/api/tables.html#tensors https://arrow.apache.org/docs/python/generated/pyarrow.Tenso...
Fwiw it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
Are there recommendable ways to cast e.g. arrow Tensors to pytorch/tensorflow?
FWIU, Rust compiles better to WASM; and that's probably faster than TensorFlow already compiled to JS/ES plus WebGPU.
What's a fair benchmark?
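As a hedged illustration of the Arrow tensors mentioned above, here is a small Rust sketch of constructing one over a flat buffer. The constructor signature follows the arrow crate's tensor module docs and may differ between crate versions; the shape and values are arbitrary.

```rust
use arrow::buffer::Buffer;
use arrow::tensor::Float32Tensor;

fn main() -> Result<(), arrow::error::ArrowError> {
    // Eight f32 values viewed as a 2x4 row-major tensor over one flat buffer.
    let data: Vec<f32> = (0..8).map(|i| i as f32).collect();
    let buffer = Buffer::from_slice_ref(&data);

    // Passing None for strides lets arrow derive row-major strides from the shape.
    let tensor = Float32Tensor::try_new(buffer, Some(vec![2, 4]), None, None)?;
    println!("shape: {:?}, strides: {:?}", tensor.shape(), tensor.strides());
    Ok(())
}
```

As for the casting question, a zero-copy hand-off to PyTorch/TensorFlow would presumably go through an exchange layer such as DLPack or the Arrow C data interface rather than a direct cast.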
-
rustformers/llm: Run inference for Large Language Models on CPU, with Rust 🦀🚀🦙
wonnx has done some fantastic work in this regard, so that's where we plan to start once we get there. In terms of general discussion of alternate backends, see this issue.
-
I want to talk about WebGPU
> GPU in other ways, such as training ML models and then using them via an inference engine all powered by your local GPU?
Have a look at wonnx: https://github.com/webonnx/wonnx
A WebGPU-accelerated ONNX inference runtime written 100% in Rust, ready for native and the web
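For a taste of the native side, here is a sketch loosely following the wonnx README: Session::from_path loads the ONNX graph and run executes it on whatever GPU wgpu finds. The model path "model.onnx" and the tensor name "data" are placeholders; they must match the inputs declared in your ONNX file.

```rust
use std::collections::HashMap;

// Sketch after the wonnx README; "model.onnx" and the input name "data"
// are placeholders for your own model file and its declared input tensor.
async fn infer() {
    // Load the ONNX graph; wonnx compiles its ops to WGSL compute shaders.
    let session = wonnx::Session::from_path("model.onnx").await.unwrap();

    // Inputs are bound by the tensor names declared in the ONNX graph.
    let input: Vec<f32> = vec![0.0; 3 * 224 * 224];
    let mut inputs = HashMap::new();
    inputs.insert("data".to_string(), input.as_slice().into());

    // Outputs come back keyed by output tensor name.
    let outputs = session.run(&inputs).await.unwrap();
    println!("outputs: {:?}", outputs.keys());
}
```

The same async code path compiles to WASM for the in-browser case the repository demonstrates.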
-
Chrome Ships WebGPU
Looking forward to your WebGPU ML runtime! Also, why not contribute back to WONNX? (https://github.com/webonnx/wonnx)
-
OpenXLA Is Available Now
You can indeed perform inference using WebGPU (see e.g. [1] for GPU-accelerated inference of ONNX models on WebGPU; I am one of the authors).
The point made above is that WebGPU can only be used for GPUs and not really for other types of 'neural accelerators' (like e.g. the ANE on Apple devices).
[1] https://github.com/webonnx/wonnx
SHARK
- Llama 2 on ONNX runs locally
-
[D] Confusion over AMD GPU Ai benchmarking
https://github.com/AUTOMATIC1111/stable-diffusion-webui and https://github.com/nod-ai/SHARK are the repos for the open-source tools mentioned. u/CeFurkan has really nice tutorial videos on YouTube for Stable Diffusion. Automatic1111 is the most popular open-source Stable Diffusion UI and currently has the biggest open-source plug-in ecosystem. Nvidia's compute stack is separate from the normal driver and is called CUDA; AMD's is called ROCm. Most Windows programs, like games, use APIs such as DirectX, Vulkan, Metal, or WebGPU, not CUDA. Most ML code was originally intended to run on scientific computing systems running Linux, and the traditional Windows GPU APIs are only now getting better at GPU ML support. AMD has no official Windows ML support and is hoping other developers figure it out for them; AMD did open-source its ML driver, but without support for consumer graphics cards. Nvidia's ML driver is proprietary, but support is guaranteed across all cards, including consumer ones.
-
Amd Gpu not utilised
I got it working using SHARK with an AMD RX 480 on Windows 10.
-
New to SD - Slow working
Here's the link for SHARK, which is faster (it uses Vulkan) than Automatic1111 with DirectML but has fewer features: https://github.com/nod-ai/SHARK
-
7900 XTX Stable Diffusion Shark Nod Ai performance on Windows 10. Seem to have gotten a bump with the latest prerelease drivers 23.10.01.41
I would recommend trying out Nod AI's SHARK (that is the link for the most recent 786.exe release) and seeing how it works for you. From what others have written, it does 512x512 pics at around 3 it/s, which I know isn't mind-blowing, but it's good enough to do a pic in about 30 seconds.
-
New here
Problem solved, I got it to work: I simply put the Nod.ai SHARK exe in my Stable Diffusion folder and launched it instead of Webui-user -> Release nod.ai SHARK 20230623.786 · nod-ai/SHARK (github.com)
-
I built the easiest-to-use desktop application for running Stable Diffusion on your PC - and it's free for all of you
How does it compare with Shark SD (I am not affiliated with it in any way)? (https://github.com/nod-ai/SHARK)
-
after changing GPU from RX 470 4gb to RTX 3060 12GB, I decided to make a few cozy houses, and these are a few of them
You should, if you want to run SD on your card: https://github.com/nod-ai/SHARK
-
20 minute load time per image on high end pc?
Forgive me for not reading your whole comment. I suspect your version of the SD web UI doesn't recognize the AMD GPU, so you're using the CPU. AMD GPUs only work with a few web UIs. Try Nod.ai's SHARK variant.
- AMD support for Microsoft® DirectML optimization of Stable Diffusion
What are some alternatives?
stablehlo - Backward compatible ML compute opset inspired by HLO/MHLO
stable-diffusion-webui - Stable Diffusion web UI
onnx - Open standard for machine learning interoperability
stable-diffusion-webui-directml - Stable Diffusion web UI
tract - Tiny, no-nonsense, self-contained, Tensorflow and ONNX inference
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
iree - A retargetable MLIR-based machine learning compiler and runtime toolkit.
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
AMD-Stable-Diffusion-ONNX-FP16 - Example code and documentation on how to get FP16 models running with ONNX on AMD GPUs [Moved to: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16]
blaze - A Rustified OpenCL Experience
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.