XNNPACK vs r4cppp

| | XNNPACK | r4cppp |
|---|---|---|
| Mentions | 8 | 10 |
| Stars | 1,700 | 3,519 |
| Growth | 1.6% | - |
| Activity | 9.9 | 4.1 |
| Last commit | 6 days ago | 2 months ago |
| Language | C | Rust |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
XNNPACK
- XNNPACK: High-efficiency floating-point neural network inference operators
- Can an NPU be used for vectors?
- Performance critical ML: How viable is Rust as an alternative to C++
Why are you writing your own inference code in C++ or Rust instead of using some kind of established framework like XNNPACK?
- [P] Pure C/C++ port of OpenAI's Whisper
- [Discussion] Is XNNPACK a part of mediapipe? or should be additionally configured with mediapipe?
XNNPACK - https://github.com/google/XNNPACK
- WebAssembly Techniques to Speed Up Matrix Multiplication by 120x
- Prediction: Macs won't see many new games, no matter how powerful their hardware is
Ok, concrete example time! At work, we're going to be using some software which includes XNNPACK, which is a library of highly-optimised operations for doing neural-network inference. This is the sort of thing where people have gone in and specifically tuned for performance, and nope, there's no attempt at all made to have code which is different for Intel/AMD or Apple/Other ARM. What they target is elements of the ISA, like NEON (i.e. ARM SIMD) and SSE, AVX etc. on x86(-64). And Wasm SIMD for Wasm.
- Where are Nvidia's DLSS models stored and how big are they?
It's quite simple. https://github.com/google/XNNPACK for example.
r4cppp
- C programmer
Rust for systems programmers
- Performance critical ML: How viable is Rust as an alternative to C++
- C++ to Rust Books?
Yes, you can read r4cppp
- Implementing a pointer based mechanism for a half edge
Is this article useful?
- Topics you'd like to see more tutorials on?
Hmm, representing graphs in Rust is somewhat complex (here's a tutorial). Might be worth writing stuff about.
- A tour of Rust <-> C++ interoperability
There's also a rust for the C++ programmer, but I'm not sure it is really very complete at this point.
- Serious question, can I, a C++ intermediate, learn Rust in time to help with development?
Others already provided excellent answers, but I'd still like to share this tutorial, if you're already familiar with C++ - Rust for C++ Programmers on GitHub.
- EnTT v3.7.0 is out: Gaming meets Modern C++
Apart from the official book, I find Rust For Systems Programmers a nice introduction.
- Where to go to learn Rust in 2021
I found r4cppp[1] much more useful than any other resource for learning Rust. I understand not everyone has experience with C++, but I found it frustrating that other resources spent a lot of time on topics that are already intuitive to systems programmers.
[1] https://github.com/nrc/r4cppp
What are some alternatives?
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
nomicon - The Dark Arts of Advanced and Unsafe Rust Programming
gemm-benchmark - Simple [sd]gemm benchmark, similar to ACES dgemm
entt - Gaming meets modern C++ - a fast and reliable entity component system (ECS) and much more
cpuid2cpuflags - Tool to generate CPU_FLAGS_* for your CPU
wasmblr - C++ WebAssembly assembler in a single header file
Genann - simple neural network library in ANSI C
ruby-fann - Ruby library for interfacing with FANN (Fast Artificial Neural Network)
HIP-CPU - An implementation of HIP that works on CPUs, across OSes.
DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
AITemplate - AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
whisper.cpp - Port of OpenAI's Whisper model in C/C++