DeepSpeed-MII vs HIP-CPU
| | DeepSpeed-MII | HIP-CPU |
|---|---|---|
| Mentions | 6 | 5 |
| Stars | 1,629 | 104 |
| Growth | 7.0% | 5.8% |
| Activity | 8.7 | 7.2 |
| Latest commit | 6 days ago | about 1 month ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSpeed-MII
- Stable Diffusion plus DeepSpeed
- [D] When chatGPT stops being free: Run SOTA LLM in cloud
Microsoft/DeepSpeed-MII for up to a 40x reduction in inference cost on Azure. It also supports int8 and fp16 BLOOM out of the box, but it fails on Azure due to instance size.
- Image Creation Time for each GPU.
- Anyone tried DeepSpeed-MII with stablediffusion?
Haven't tried it yet, but they have some example code here: https://github.com/microsoft/DeepSpeed-MII/blob/main/examples/local/txt2img-example.py
- [P] Pure C/C++ port of OpenAI's Whisper
HIP-CPU
- HIP CPU
- [P] Pure C/C++ port of OpenAI's Whisper
- AMD publishes GPUFORT as Open Source to address CUDA’s dominance
If I'm reading this right, this is Fortran's equivalent of HIP, i.e. a way to (semi-)automatically convert a CUDA-based solution to a more backend-independent one, so that the same source can run on both CUDA and ROCm GPUs (and potentially more; e.g. they also have an experimental CPU backend).
- Test Coverage with CUDA
So, I know that you asked about CUDA, but this might actually be possible in HIP, and you can convert your code to HIP relatively easily. The path would be to use the CPU implementation (https://github.com/ROCm-Developer-Tools/HIP-CPU) and then run your code coverage on that.
What are some alternatives?
whisper.cpp - Port of OpenAI's Whisper model in C/C++
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
libcudacxx - [ARCHIVED] The C++ Standard Library for your entire system. See https://github.com/NVIDIA/cccl
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
rocFFT - Next generation FFT implementation for ROCm
AITemplate - AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
HIP - HIP: C++ Heterogeneous-Compute Interface for Portability
whisper-rs - Rust bindings to https://github.com/ggerganov/whisper.cpp
stdgpu - Efficient STL-like Data Structures on the GPU
XNNPACK - High-efficiency floating-point neural network inference operators for mobile, server, and Web