| | XNNPACK | wasmblr |
|---|---|---|
| Mentions | 8 | 5 |
| Stars | 1,700 | 159 |
| Growth | 1.6% | - |
| Activity | 9.9 | 1.8 |
| Last commit | 5 days ago | almost 2 years ago |
| Language | C | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
XNNPACK mentions
- Xnnpack: High-efficiency floating-point neural network inference operators
- Can a NPU be used for vectors?
- Performance critical ML: How viable is Rust as an alternative to C++
Why are you writing your own inference code in C++ or Rust instead of using some kind of established framework like XNNPACK?
- [P] Pure C/C++ port of OpenAI's Whisper
- [Discussion] Is XNNPACK a part of mediapipe? or should be additionally configured with mediapipe?
XNNPACK - https://github.com/google/XNNPACK
- WebAssembly Techniques to Speed Up Matrix Multiplication by 120x
- Prediction: Macs won't see many new games, no matter how powerful their hardware is
Ok, concrete example time! At work, we're going to be using some software which includes XNNPACK, which is a library of highly-optimised operations for doing neural-network inference. This is the sort of thing where people have gone in and specifically tuned for performance, and nope, there's no attempt at all made to have code that differs for Intel vs. AMD, or for Apple vs. other ARM. What they target is elements of the ISA: NEON (i.e. ARM SIMD) on ARM; SSE, AVX, etc. on x86(-64); and Wasm SIMD for Wasm. [A minimal sketch of this ISA-feature dispatch follows this list.]
- Where are Nvidia's DLSS models stored and how big are they?
It's quite simple. https://github.com/google/XNNPACK for example.
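The comment above describes dispatching on ISA features rather than CPU vendors. Below is a minimal C++ sketch of that idea; it is illustrative only, not XNNPACK's actual source. The function name madd_f32 and the assumption that n is a multiple of 4 are mine, and real kernels add runtime dispatch, tail handling, and many more extension tiers.

```cpp
// Sketch of ISA-feature dispatch: the same intrinsic path is taken on
// any chip that exposes the extension, regardless of vendor.
#include <cstddef>

#if defined(__aarch64__) || defined(__ARM_NEON)
  #include <arm_neon.h>
#elif defined(__SSE__)
  #include <xmmintrin.h>
#elif defined(__wasm_simd128__)
  #include <wasm_simd128.h>
#endif

// Multiply-accumulate: out[i] += a[i] * b[i]. Assumes n is a multiple
// of 4 to keep the sketch short.
void madd_f32(float* out, const float* a, const float* b, size_t n) {
#if defined(__aarch64__) || defined(__ARM_NEON)
  // Any NEON-capable ARM core (Apple or otherwise) takes this path.
  for (size_t i = 0; i < n; i += 4) {
    float32x4_t acc = vld1q_f32(out + i);
    acc = vmlaq_f32(acc, vld1q_f32(a + i), vld1q_f32(b + i));
    vst1q_f32(out + i, acc);
  }
#elif defined(__SSE__)
  // Any SSE-capable x86(-64) core (Intel or AMD) takes this path.
  for (size_t i = 0; i < n; i += 4) {
    __m128 acc = _mm_loadu_ps(out + i);
    acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                     _mm_loadu_ps(b + i)));
    _mm_storeu_ps(out + i, acc);
  }
#elif defined(__wasm_simd128__)
  // Wasm SIMD (compiled with -msimd128) takes this path.
  for (size_t i = 0; i < n; i += 4) {
    v128_t acc = wasm_v128_load(out + i);
    acc = wasm_f32x4_add(acc, wasm_f32x4_mul(wasm_v128_load(a + i),
                                             wasm_v128_load(b + i)));
    wasm_v128_store(out + i, acc);
  }
#else
  // Scalar fallback when no known SIMD extension is available.
  for (size_t i = 0; i < n; ++i) out[i] += a[i] * b[i];
#endif
}
```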
wasmblr mentions
- Wasmblr – C++ WebAssembly Assembler in a single header file
- WebAssembly Techniques to Speed Up Matrix Multiplication by 120x
That's a good point: you certainly could. There's some fun exploration to be done with atomic operations.
The issue is that threaded execution requires cross-origin isolation, which isn't trivial to integrate. (Example server that will serve the required headers: https://github.com/bwasti/wasmblr/blob/main/thread_example/s...)
- Wasmblr: A single header file WebAssembly assembler for C++
- GitHub - wasmblr: C++ WebAssembly assembler
- Show HN: A C++ Web Assembly assembler
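To make concrete what a single-header WebAssembly assembler produces, here is a hand-assembled minimal module written out as raw bytes in plain C++. This follows the public Wasm binary format, not wasmblr's own API (which is not reproduced here): a single function add(i32, i32) -> i32, exported as "add".

```cpp
// Hand-assembled minimal WebAssembly module, per the Wasm binary spec.
// A library like wasmblr builds byte streams of this shape
// programmatically instead of via a full compiler toolchain.
#include <cstdint>
#include <fstream>
#include <vector>

int main() {
  std::vector<uint8_t> wasm = {
      0x00, 0x61, 0x73, 0x6d,  // magic: "\0asm"
      0x01, 0x00, 0x00, 0x00,  // version: 1
      // Type section (id 1): one type, (i32, i32) -> (i32)
      0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
      // Function section (id 3): one function using type 0
      0x03, 0x02, 0x01, 0x00,
      // Export section (id 7): export function 0 as "add"
      0x07, 0x07, 0x01, 0x03, 'a', 'd', 'd', 0x00, 0x00,
      // Code section (id 10): local.get 0, local.get 1, i32.add, end
      0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
  };
  std::ofstream("add.wasm", std::ios::binary)
      .write(reinterpret_cast<const char*>(wasm.data()), wasm.size());
}
```

Loading the resulting add.wasm with WebAssembly.instantiate exposes add, e.g. add(2, 3) == 5. For the threaded example discussed above, the serving page must additionally be cross-origin isolated, i.e. delivered with the Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp response headers.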
What are some alternatives?
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
gemm-benchmark - Simple [sd]gemm benchmark, similar to ACES dgemm
cpuid2cpuflags - Tool to generate CPU_FLAGS_* for your CPU
Genann - Simple neural network library in ANSI C
ruby-fann - Ruby library for interfacing with FANN (Fast Artificial Neural Network)
HIP-CPU - An implementation of HIP that works on CPUs, across OSes.
DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
whisper.cpp - Port of OpenAI's Whisper model in C/C++
doesitarm - 🦾 A list of reported app support for Apple Silicon as well as Apple M2 and M1 Ultra Macs
Awesome-Rust-MachineLearning - This repository is a list of machine learning libraries written in Rust. It's a compilation of GitHub repositories, blogs, books, movies, discussions, papers, etc. 🦀