DeepSpeed-MII vs XNNPACK
| | DeepSpeed-MII | XNNPACK |
|---|---|---|
| Mentions | 6 | 8 |
| Stars | 1,629 | 1,700 |
| Growth | 7.0% | 2.5% |
| Activity | 8.7 | 9.9 |
| Latest commit | 6 days ago | 1 day ago |
| Language | Python | C |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSpeed-MII
- Stable Diffusion plus DeepSpeed
- [D] When chatGPT stops being free: Run SOTA LLM in cloud
Microsoft/DeepSpeed-MII for up to a 40x reduction in inference cost on Azure; it also supports int8 and fp16 BLOOM out of the box, but it fails on Azure due to instance size. (A deployment sketch follows at the end of this list.)
- Image Creation Time for each GPU.
- Anyone tried DeepSpeed-MII with stablediffusion?
Haven't tried it yet but they have some example code here: https://github.com/microsoft/DeepSpeed-MII/blob/main/examples/local/txt2img-example.py
- [P] Pure C/C++ port of OpenAI's Whisper
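The BLOOM comment above refers to DeepSpeed-MII's deployment API. Below is a minimal sketch of a local deployment using the `mii.deploy`/`mii.mii_query_handle` interface; the checkpoint, deployment name, and fp16 dtype setting are illustrative assumptions, not details taken from the posts above.

```python
import mii

# Deploy a text-generation model locally; MII wraps it with DeepSpeed inference kernels.
# The fp16 dtype here is illustrative of the int8/fp16 support mentioned above.
mii.deploy(
    task="text-generation",
    model="bigscience/bloom-560m",
    deployment_name="bloom560m_deployment",
    mii_config={"dtype": "fp16"},
)

# Query the deployment through the generated handle.
generator = mii.mii_query_handle("bloom560m_deployment")
result = generator.query(
    {"query": ["DeepSpeed is", "Seattle is"]},
    do_sample=True,
    max_new_tokens=30,
)
print(result)

# Tear the deployment down when finished.
mii.terminate("bloom560m_deployment")
```

The linked txt2img example follows the same deployment pattern, just with a text-to-image task and a Stable Diffusion checkpoint instead of a text-generation model.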
XNNPACK
- Xnnpack: High-efficiency floating-point neural network inference operators
- Can an NPU be used for vectors?
- Performance critical ML: How viable is Rust as an alternative to C++
Why are you writing your own inference code in C++ or Rust instead of using some kind of established framework like XNNPACK? (A usage sketch follows at the end of this list.)
- [P] Pure C/C++ port of OpenAI's Whisper
- [Discussion] Is XNNPACK a part of MediaPipe, or should it be additionally configured with MediaPipe?
XNNPACK - https://github.com/google/XNNPACK
- WebAssembly Techniques to Speed Up Matrix Multiplication by 120x
- Prediction: Macs won't see many new games, no matter how powerful their hardware is
Ok, concrete example time! At work, we're going to be using some software which includes XNNPACK, which is a library of highly-optimised operations for doing neural-network inference. This is the sort of thing where people have gone in and specifically tuned for performance, and nope, there's no attempt at all made to have code which is different for Intel/AMD or Apple/Other ARM. What they target is elements of the ISA, like NEON (i.e. ARM SIMD) and SSE, AVX etc. on x86(-64). And Wasm SIMD for Wasm.
- Where are Nvidia's DLSS models stored and how big are they?
It's quite simple. https://github.com/google/XNNPACK for example.
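XNNPACK itself is a C library of low-level operators and is usually consumed indirectly, as the CPU backend of a higher-level runtime, rather than called directly. Here is a minimal Python sketch, assuming a recent TensorFlow Lite build in which XNNPACK is the default CPU delegate for float models; the model file name is a placeholder.

```python
import numpy as np
import tensorflow as tf

# Recent TFLite builds route supported float ops through XNNPACK by default,
# so running the interpreter on CPU exercises XNNPACK's SIMD kernels
# (NEON on ARM, SSE/AVX on x86(-64), Wasm SIMD when compiled to WebAssembly).
interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor of the expected shape and dtype.
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```

Higher-level stacks such as MediaPipe typically bundle TensorFlow Lite (and with it XNNPACK) in the same way, which is why it normally does not need to be configured separately.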
What are some alternatives?
whisper.cpp - Port of OpenAI's Whisper model in C/C++
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
gemm-benchmark - Simple [sd]gemm benchmark, similar to ACES dgemm
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
cpuid2cpuflags - Tool to generate CPU_FLAGS_* for your CPU
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
wasmblr - C++ WebAssembly assembler in a single header file
whisper-rs - Rust bindings to https://github.com/ggerganov/whisper.cpp
Genann - simple neural network library in ANSI C
rocm-gfx803
ruby-fann - Ruby library for interfacing with FANN (Fast Artificial Neural Network)