AITemplate
rocm-gfx803
| | AITemplate | rocm-gfx803 |
|---|---|---|
| Mentions | 37 | 7 |
| Stars | 4,455 | 167 |
| Stars growth | 1.3% | - |
| Activity | 8.7 | 1.1 |
| Latest commit | about 21 hours ago | about 1 year ago |
| Language | Python | - |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AITemplate
- Show HN: Shortbread, a web app that helps you create AI comics in minutes
VoltaML is a relatively vanilla diffusers-based backend, so it's not a hairy monster to hack like you may have seen with SAI-based UIs.
The AITemplate code is a lightly modified version of Facebook's example code, with small issues like VRAM spikes fixed: https://github.com/facebookincubator/AITemplate/tree/main/ex...
InvokeAI is also diffusers based, but they seem to mess with the pipeline a bit more.
And anyway, all that may be a better reference for interesting features rather than a backend to try and adopt.
- List of all the ways to improve performance for stable diffusion.
Let me know if you discover any more ways to improve SD. I am currently looking into Facebook's AITemplate: https://github.com/facebookincubator/AITemplate
- [R] AITemplate Python to AMD compiler {META}
- Nearly 2x speedup for SD rendering using AITemplate
Link to AITemplate itself: https://github.com/facebookincubator/AITemplate
- Render a neural network into CUDA/HIP code
- AITemplate: a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference. (A minimal usage sketch follows this list.)
- A1111 vs Olive vs AITemplate.
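To make the "renders neural networks into CUDA/HIP C++ code" description concrete, here is a minimal sketch of AITemplate's front end, loosely adapted from the project's getting-started example. Treat the exact module paths, the `_attrs` output-marking convention, and `detect_target()` as assumptions to verify against the repository.

```python
# Minimal AITemplate sketch: define a tiny FP16 model symbolically,
# then compile it to CUDA (NVIDIA) or HIP (AMD) C++ code on disk.
from aitemplate.compiler import compile_model
from aitemplate.frontend import nn, Tensor
from aitemplate.testing import detect_target


class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(64, 32)

    def forward(self, x):
        return self.dense(x)


# Symbolic FP16 input; AITemplate specializes kernels for this shape/dtype.
x = Tensor(shape=[1, 64], dtype="float16", name="input0", is_input=True)
y = SimpleNet()(x)
y._attrs["name"] = "output0"   # mark the graph output (assumed convention)
y._attrs["is_output"] = True

# Codegen + compile; the generated C++ source lands under ./tmp.
module = compile_model(y, detect_target(), "./tmp", "simple_net")
```

The design point is that kernels are generated and fused for the concrete shapes and dtype you declare up front, which is where the FP16 TensorCore/MatrixCore specialization comes from.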
rocm-gfx803
- ROCm gfx803 archlinux
- My brother is giving away a PC he built with 8 AMD Radeon RX Vega 64 GPUs (8 GB RAM). I've only ever done ML on Nvidia cards. Is there anything I can do with these?
That specific card currently has ROCm support, and ROCm is supported by at least TensorFlow and PyTorch, plus many other less-known libraries like CuPy. You are correct that support suffers in the long run, though: I have a GPU that is only still usable thanks to continued community support, because AMD dropped it with ROCm 4.0. Thanks to xuhuisheng for the patch that makes the RX 580 work with current ROCm despite AMD's lack of support. That is what open source can accomplish: https://github.com/xuhuisheng/rocm-gfx803
- Automatic1111 - Torch is not able to use GPU. Help!
You'll also need to compile pytorch and torchvision for gfx803, although I recommend installing the whl files from here inside your venv, because compiling them on non-Ubuntu distros is a massive pain (I tried). A quick sanity check for the resulting install is sketched below.
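As a way to confirm the patched wheels actually see the card, here is a minimal sketch. It assumes a ROCm build of PyTorch installed in the active venv; the ROC_ENABLE_PRE_VEGA variable is an assumption that some ROCm builds need in order to expose pre-Vega (Polaris/gfx803) GPUs.

```python
# Minimal sanity check for a gfx803-patched ROCm PyTorch install.
import os

# Assumption: some ROCm builds need this to expose pre-Vega (gfx803) GPUs.
os.environ.setdefault("ROC_ENABLE_PRE_VEGA", "1")  # set before first GPU use

import torch

# ROCm builds of PyTorch expose HIP devices through the torch.cuda API.
print("HIP runtime:", torch.version.hip)        # None on a CUDA-only build
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))  # e.g. Radeon RX 580
    x = torch.randn(512, 512, device="cuda")
    print("Matmul OK:", (x @ x).shape)  # exercises a real HIP kernel
```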
- Image Creation Time for each GPU.
I followed the guide from here: https://github.com/xuhuisheng/rocm-gfx803
- I *think* it's impossible to run SD on an RX 570 (and probably below?)
There is an unofficial build of ROCm 5.2.0 + pytorch + torchvision with GFX8 support added back in. I have no idea if it works. Perhaps someone who knows Docker/Conda could get SD working with those files.
- Run Stable Diffusion on Intel CPUs
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
nebuly - The user analytics platform for LLMs
stable-diffusion-cpu
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
voltaML - ⚡VoltaML is a lightweight library to convert and run your ML/DL deep learning models in high performance inference runtimes like TensorRT, TorchScript, ONNX and TVM.
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
stable-diffusion-tensorflow - Stable Diffusion in TensorFlow / Keras
DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
stable_diffusion.openvino