wasmblr VS gemm-benchmark

Compare wasmblr vs gemm-benchmark and see how they differ.

wasmblr

C++ WebAssembly assembler in a single header file (by bwasti)

gemm-benchmark

Simple [sd]gemm benchmark, similar to ACES dgemm (by danieldk)
               wasmblr            gemm-benchmark
Mentions       5                  6
Stars          159                8
Growth         -                  -
Activity       1.8                3.5
Latest commit  about 2 years ago  6 months ago
Language       C++                Rust
License        MIT License        Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

wasmblr

Posts with mentions or reviews of wasmblr. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-25.

gemm-benchmark

Posts with mentions or reviews of gemm-benchmark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-20.
  • Running Stable Diffusion in 260MB of RAM
    3 projects | news.ycombinator.com | 20 Jul 2023
    And PyTorch on the M1 (without Metal) uses the fast AMX matrix multiplication units (through the Accelerate framework). Matrix multiplication on the M1 is on par with ~10 threads/cores of a Ryzen 5900X.

    [1] https://github.com/danieldk/gemm-benchmark#example-results
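
    For context, that Accelerate/AMX path is just an ordinary BLAS call from C++. A minimal sketch, where the build line and matrix size are illustrative assumptions:

    ```cpp
    // Single sgemm call through Accelerate's BLAS interface; on Apple
    // Silicon this is the route that ends up on the AMX units.
    // Build (assumption): clang++ -O2 sgemm.cpp -framework Accelerate
    #include <Accelerate/Accelerate.h>
    #include <vector>

    int main() {
      const int n = 1024;  // square matrices keep the sketch short
      std::vector<float> a(n * n, 1.0f), b(n * n, 1.0f), c(n * n, 0.0f);
      // C = 1.0 * A * B + 0.0 * C, row-major
      cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                  n, n, n, 1.0f, a.data(), n, b.data(), n, 0.0f, c.data(), n);
      return 0;
    }
    ```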

  • Ask HN: What is a AI chip and how does it work?
    4 projects | news.ycombinator.com | 27 May 2023
    Apple Silicon Macs have special matrix multiplication units (AMX) that can do matrix multiplication fast and with low energy requirements [1]. These AMX units can often beat matrix multiplication on AMD/Intel CPUs (especially those without a very large number of cores). Since a lot of linear algebra code uses matrix multiplication, and using the AMX units is only a matter of linking against Accelerate (for its BLAS interface), a lot of software that uses BLAS is faster on Apple Silicon Macs.

    That said, the GPUs in your M1 Mac are faster than the AMX units, and any reasonably modern NVIDIA GPU will wipe the floor with the AMX units or Apple Silicon GPUs in raw compute. However, a lot of software does not use CUDA by default, and for small problem sets AMX units or CPUs with just AVX can be faster because they don't incur the cost of data transfers from main memory to GPU memory and vice versa.

    [1] Benchmarks:

    https://github.com/danieldk/gemm-benchmark#example-results

    https://explosion.ai/blog/metal-performance-shaders (scroll down a bit for AMX and MPS numbers)
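
    The transfer-cost trade-off is easy to sanity-check with arithmetic. A back-of-envelope sketch, where the 16 GB/s bus bandwidth and 10 TFLOP/s GPU rate are illustrative assumptions rather than measurements:

    ```cpp
    // Compare host<->device transfer time for three n x n float matrices
    // against GPU compute time for the 2*n^3-flop multiplication.
    // For small n the transfer time dominates, so staying on the CPU wins.
    #include <cstdio>

    int main() {
      const double bus_bytes_per_s = 16e9;  // assumed PCIe-class bandwidth
      const double gpu_flops = 10e12;       // assumed sustained sgemm rate
      for (int n : {256, 1024, 4096}) {
        double bytes = 3.0 * n * n * sizeof(float);  // A and B in, C out
        double transfer_s = bytes / bus_bytes_per_s;
        double compute_s = 2.0 * double(n) * n * n / gpu_flops;
        std::printf("n=%5d  transfer %8.1f us  compute %8.1f us\n",
                    n, transfer_s * 1e6, compute_s * 1e6);
      }
      return 0;
    }
    ```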

  • Apple previews Live Speech, Personal Voice, and more new accessibility features
    3 projects | news.ycombinator.com | 16 May 2023
  • How to Get 1.5 TFlops of FP32 Performance on a Single M1 CPU Core
    1 project | news.ycombinator.com | 5 Jan 2023
    Yes, there is one per core cluster. The title is a bit misleading, because it suggests that performance will keep scaling linearly with more cores; going to two or three cores won't be much faster. See here for sgemm benchmarks covering everything from the M1 to the M1 Ultra and 1 to 16 threads:

    https://github.com/danieldk/gemm-benchmark#1-to-16-threads
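
    The kind of measurement behind that link can be sketched in a few lines of C++. The sketch below uses OpenBLAS because it exposes a thread-count knob (Accelerate dispatches to the AMX on its own), so it shows the methodology rather than reproducing the linked numbers:

    ```cpp
    // Time sgemm at several thread counts and report GFLOP/s,
    // in the spirit of gemm-benchmark.
    // Build (assumption): clang++ -O2 scaling.cpp -lopenblas
    #include <cblas.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    extern "C" void openblas_set_num_threads(int);  // OpenBLAS-specific knob

    int main() {
      const int n = 2048;
      std::vector<float> a(n * n, 1.0f), b(n * n, 1.0f), c(n * n, 0.0f);
      for (int threads : {1, 2, 4, 8, 16}) {
        openblas_set_num_threads(threads);
        auto t0 = std::chrono::steady_clock::now();
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0f, a.data(), n, b.data(), n, 0.0f, c.data(), n);
        std::chrono::duration<double> dt =
            std::chrono::steady_clock::now() - t0;
        // An n x n x n gemm does 2 * n^3 floating-point operations.
        double gflops = 2.0 * double(n) * n * n / dt.count() / 1e9;
        std::printf("%2d threads: %6.1f GFLOP/s\n", threads, gflops);
      }
      return 0;
    }
    ```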

  • WebAssembly Techniques to Speed Up Matrix Multiplication by 120x
    4 projects | news.ycombinator.com | 25 Jan 2022
    There's always been a tradeoff in writing code between developer experience and taking full advantage of what the hardware is capable of. That "waste" in execution efficiency is often worth it for the sake of representing helpful abstractions and generally helping developer productivity.

    The GFLOP/s is 1/28th of what you'd get when using the native Accelerate framework on M1 Macs [1]. I am all in for powerful abstractions, but not using native APIs for this (even if it's just the browser calling Accelerate in some way) is just a huge waste of everyone's CPU cycles and electricity.

    [1] https://github.com/danieldk/gemm-benchmark#1-to-16-threads
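
    And since this thread is where wasmblr itself came up: emitting a tiny function with it looks roughly like the sketch below. This is adapted from my reading of the project README, so treat the exact method names as assumptions to verify against https://github.com/bwasti/wasmblr:

    ```cpp
    // Hedged sketch: build a (f32, f32) -> f32 "add" function and emit
    // the raw .wasm bytes with the single-header wasmblr assembler.
    #include "wasmblr.h"

    int main() {
      wasmblr::CodeGenerator cg;
      auto add_func = cg.function({cg.f32, cg.f32}, {cg.f32}, [&]() {
        cg.local.get(0);  // push first argument
        cg.local.get(1);  // push second argument
        cg.f32.add();     // f32.add leaves the sum on the stack
      });
      cg.export_(add_func, "add");
      auto bytes = cg.emit();  // byte vector: a complete .wasm module
      (void)bytes;             // hand these to WebAssembly.instantiate
      return 0;
    }
    ```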

What are some alternatives?

When comparing wasmblr and gemm-benchmark you can also consider the following projects:

XNNPACK - High-efficiency floating-point neural network inference operators for mobile, server, and Web

rknn-toolkit

OnnxStream - Lightweight inference library for ONNX files, written in C++. It can run SDXL on a RPI Zero 2 but also Mistral 7B on desktops and servers.

armnn - Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

piper - A fast, local neural text to speech system

DOOM - DOOM Open Source Release

tensorflow - An Open Source Machine Learning Framework for Everyone