wonnx VS kernel_tuner

Compare wonnx vs kernel_tuner and see what their differences are.

                wonnx                                       kernel_tuner
Mentions        18                                          4
Stars           1,487                                       242
Growth          6.8%                                        9.5%
Activity        6.5                                         9.0
Latest commit   25 days ago                                 5 days ago
Language        Rust                                        Python
License         GNU General Public License v3.0 or later    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

wonnx

Posts with mentions or reviews of wonnx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-14.
  • Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
    13 projects | news.ycombinator.com | 14 Dec 2023
    The two I know of are IREE and Kompute[1]. I'm not sure how much momentum the latter has, I don't see it referenced much. There's also a growing body of work that uses Vulkan indirectly through WebGPU. This is currently lagging in performance due to lack of subgroups and cooperative matrix mult, but I see that gap closing. There I think wonnx[2] has the most momentum, but I am aware of other efforts.

    [1]: https://kompute.cc/

    [2]: https://github.com/webonnx/wonnx

  • VkFFT: Vulkan/CUDA/Hip/OpenCL/Level Zero/Metal Fast Fourier Transform Library
    7 projects | news.ycombinator.com | 2 Aug 2023
    To a first approximation, Kompute[1] is that. It doesn't seem to be catching on, I'm seeing more buzz around WebGPU solutions, including wonnx[2] and more hand-rolled approaches, and IREE[3], the latter of which has a Vulkan back-end.

    [1]: https://kompute.cc/

    [2]: https://github.com/webonnx/wonnx

    [3]: https://github.com/openxla/iree

  • Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
    5 projects | news.ycombinator.com | 25 Jul 2023
    There's also a third-party WebGPU implementation: https://github.com/webonnx/wonnx
  • Are there any ML crates that would compile to WASM?
    3 projects | /r/rust | 3 Jul 2023
    By experimental I meant e.g. using WGPU to run compute shaders, as wonnx does, which is working fine but only on a very restricted set of devices and browsers.
  • WebGPU ONNX inference runtime written in Rust
    1 project | news.ycombinator.com | 23 May 2023
  • PyTorch Primitives in WebGPU for the Browser
    12 projects | news.ycombinator.com | 19 May 2023
    https://news.ycombinator.com/item?id=35696031 ... TIL about wonnx: https://github.com/webonnx/wonnx#in-the-browser-using-webgpu...

    microsoft/onnxruntime: https://github.com/microsoft/onnxruntime

    Apache/arrow has language-portable Tensors for cpp: https://arrow.apache.org/docs/cpp/api/tensor.html and rust: https://docs.rs/arrow/latest/arrow/tensor/struct.Tensor.html and Python: https://arrow.apache.org/docs/python/api/tables.html#tensors https://arrow.apache.org/docs/python/generated/pyarrow.Tenso...

    Fwiw it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h

    Are there recommended ways to cast e.g. Arrow Tensors to PyTorch/TensorFlow? (A hedged casting sketch follows this list.)

    FWIU, Rust compiles to WASM better, and that's probably faster than already-compiled-to-JS/ES TensorFlow + WebGPU.

    What's a fair benchmark?

  • rustformers/llm: Run inference for Large Language Models on CPU, with Rust 🦀🚀🦙
    4 projects | /r/rust | 10 May 2023
    wonnx has done some fantastic work in this regard, so that's where we plan to start once we get there. In terms of general discussion of alternate backends, see this issue.
  • I want to talk about WebGPU
    15 projects | news.ycombinator.com | 3 May 2023
    > GPU in other ways, such as training ML models and then using them via an inference engine all powered by your local GPU?

    Have a look at wonnx: https://github.com/webonnx/wonnx

    A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web. (A minimal usage sketch follows this list.)

  • Chrome Ships WebGPU
    17 projects | news.ycombinator.com | 6 Apr 2023
    Looking forward to your WebGPU ML runtime! Also, why not contribute back to WONNX? (https://github.com/webonnx/wonnx)
  • OpenXLA Is Available Now
    5 projects | news.ycombinator.com | 9 Mar 2023
    You can indeed perform inference using WebGPU (see e.g. [1] for GPU-accelerated inference of ONNX models on WebGPU; I am one of the authors).

    The point made above is that WebGPU can only be used for GPUs and not really for other types of 'neural accelerators' (e.g. the ANE on Apple devices).

    [1] https://github.com/webonnx/wonnx
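
The posts above describe wonnx as a WebGPU-accelerated ONNX inference runtime usable from Rust, the browser (via WASM), and Python. As a minimal sketch, here is what inference looks like through the Python bindings, closely following the example in the project README; the model file `single_relu.onnx`, the tensor names `x`/`y`, and the exact `Session` API are assumptions taken from that example and may differ between versions, so check the current docs.

```python
# Minimal sketch of ONNX inference with wonnx's Python bindings
# (`pip install wonnx`). Follows the project README example; the
# Session API and the model/tensor names are assumptions to verify
# against the current documentation.
from wonnx import Session

# wonnx compiles ONNX operators to WebGPU compute shaders and runs
# them on whatever GPU wgpu selects on this machine.
session = Session.from_path("single_relu.onnx")

# Inputs are passed as a dict keyed by the model's input names.
outputs = session.run({"x": [-1.0, 2.0]})

print(outputs)  # for a single-ReLU model: {"y": [0.0, 2.0]}
```

The same runtime can be driven from Rust or compiled to WASM and run in the browser over WebGPU, which is the deployment path several of the comments above refer to.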
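
On the Arrow question in the "PyTorch Primitives" thread above (casting Arrow Tensors to PyTorch/TensorFlow): there is no single blessed bridge, but one well-trodden route goes through NumPy, since `pyarrow.Tensor` converts to an ndarray and `torch.from_numpy` shares that ndarray's memory. A hedged sketch:

```python
# Sketch: moving a pyarrow Tensor into PyTorch via NumPy. This is one
# common route, not an official Arrow<->PyTorch bridge; to_numpy() and
# torch.from_numpy() avoid copies where the memory layout allows it.
import numpy as np
import pyarrow as pa
import torch

arr = np.arange(12, dtype=np.float32).reshape(3, 4)

arrow_tensor = pa.Tensor.from_numpy(arr)        # NumPy -> Arrow Tensor
round_tripped = arrow_tensor.to_numpy()         # Arrow Tensor -> NumPy
torch_tensor = torch.from_numpy(round_tripped)  # shares memory with ndarray

print(torch_tensor.shape)  # torch.Size([3, 4])
```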

kernel_tuner

Posts with mentions or reviews of kernel_tuner. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-12.
  • Ask HN: What apps have you created for your own use?
    212 projects | news.ycombinator.com | 12 Dec 2023
    I've created Kernel Tuner (https://github.com/KernelTuner/kernel_tuner) as a small software development tool, because I was writing a lot of CUDA and OpenCL kernels at the time. I didn't want to manually figure out what the best thread block dimensions and work division among threads were on every GPU, over and over again.

    The tool has evolved quite a bit since the first versions. I'm also using it for testing GPU code and teaching, and it has become one of the main drivers behind a lot of the research that I do. (A minimal tuning sketch follows this list.)

  • PhD'ers, what are you working on? What CS topics excite you?
    2 projects | /r/computerscience | 17 Jan 2023
    We have an open science policy, so you can use our framework yourself to optimize stuff, if you want! The original paper is linked at the bottom of the GitHub page.
  • How to Optimize a CUDA Matmul Kernel for CuBLAS-Like Performance: A Worklog
    5 projects | news.ycombinator.com | 4 Jan 2023
    This is a great post for people who are new to optimizing GPU code.

    It is interesting to see that the author got this far without interchanging the innermost loop over k to the outermost loop, as is done in CUTLASS (https://github.com/NVIDIA/cutlass).

    As you can see in this blog post, the code ends up with a lot of compile-time constants (e.g. BLOCKSIZE, BM, BN, BK, TM, TN). One way to optimize this code further is to use an auto-tuner to find the optimal value of all of these parameters for your GPU and problem size, for example Kernel Tuner (https://github.com/KernelTuner/kernel_tuner). (A sketch of tuning such compile-time constants follows this list.)

  • Kernel Tuner
    1 project | news.ycombinator.com | 30 Apr 2021
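
To make the author's description above concrete: the core workflow is to hand Kernel Tuner a kernel string, its arguments, and the parameter values to sweep, and let it benchmark every configuration. A minimal sketch, adapted from the vector-add example in the project's documentation; the problem size and candidate block sizes are illustrative.

```python
# Minimal Kernel Tuner sketch: auto-tune the thread block size of a CUDA
# vector-add kernel. Adapted from the basic example in the project docs.
import numpy as np
from kernel_tuner import tune_kernel

kernel_string = """
__global__ void vector_add(float *c, float *a, float *b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}
"""

size = 10_000_000
a = np.random.randn(size).astype(np.float32)
b = np.random.randn(size).astype(np.float32)
c = np.zeros_like(a)
args = [c, a, b, np.int32(size)]

# Kernel Tuner compiles and benchmarks one kernel per configuration,
# then reports the fastest block size for this GPU and problem size.
tune_params = {"block_size_x": [64, 128, 256, 512, 1024]}
results, env = tune_kernel("vector_add", kernel_string, size, args, tune_params)
```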
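
The matmul comment above points at Kernel Tuner's central mechanic: every entry in tune_params is injected into the kernel as a compile-time constant (effectively a preprocessor define), which is exactly how BLOCKSIZE/BM/BN/BK/TM/TN-style parameters get swept. A simplified sketch of the idea; the TILE kernel below is an illustrative stand-in, not the matmul from the post.

```python
# Sketch: sweeping compile-time constants. Kernel Tuner injects every
# tune_params entry into the kernel as a preprocessor define, so TILE
# below is a compile-time constant chosen per configuration.
import numpy as np
from kernel_tuner import tune_kernel

kernel_string = """
__global__ void vector_add_tiled(float *c, float *a, float *b, int n) {
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * TILE;
    #pragma unroll
    for (int j = 0; j < TILE; j++) {
        int i = base + j;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }
}
"""

size = 10_000_000
a = np.random.randn(size).astype(np.float32)
b = np.random.randn(size).astype(np.float32)
c = np.zeros_like(a)
args = [c, a, b, np.int32(size)]

tune_params = {
    "block_size_x": [64, 128, 256],  # thread block dimension
    "TILE": [1, 2, 4, 8],            # elements computed per thread
}

# Each thread handles TILE elements, so the grid must shrink by that
# factor: grid_div_x lists the parameters that divide the problem size.
results, env = tune_kernel(
    "vector_add_tiled", kernel_string, size, args, tune_params,
    grid_div_x=["block_size_x", "TILE"],
)
```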

What are some alternatives?

When comparing wonnx and kernel_tuner you can also consider the following projects:

stablehlo - Backward compatible ML compute opset inspired by HLO/MHLO

halutmatmul - Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator

onnx - Open standard for machine learning interoperability

pyopencl - OpenCL integration for Python, plus shiny features

tract - Tiny, no-nonsense, self-contained, Tensorflow and ONNX inference

tf-quant-finance - High-performance TensorFlow library for quantitative finance.

iree - A retargetable MLIR-based machine learning compiler and runtime toolkit.

arrayfire-python - Python bindings for ArrayFire: A general purpose GPU library.

burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.

scikit-cuda - Python interface to GPU-powered libraries

blaze - A Rustified OpenCL Experience

BlendLuxCore - Blender Integration for LuxCore