rocm-gfx803 vs stable-diffusion-cpu

| | rocm-gfx803 | stable-diffusion-cpu |
|---|---|---|
| Mentions | 7 | 4 |
| Stars | 167 | 68 |
| Growth | - | - |
| Activity | 1.1 | 0.0 |
| Last commit | about 1 year ago | about 1 year ago |
| Language | Jupyter Notebook | - |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub.
Growth - month over month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
rocm-gfx803
- ROCm gfx803 archlinux
-
My brother is giving away a PC he built with 8 AMD Radeon RX Vega 64 GPUs (8 GB VRAM each). I've only ever done ML on Nvidia cards. Is there anything I can do with these?
That specific card currently has ROCm support, and ROCm is in turn supported by at least TensorFlow and PyTorch, plus many other less-known libraries like CuPy. You are right that support sucks in the long run, though: I have a GPU that is only still useful because of continued COMMUNITY support after AMD dropped it in ROCm 4.0. Thanks to xuhuisheng for the patches that keep the RX 580 working with current ROCm despite AMD's lack of support. It shows what open source can accomplish: https://github.com/xuhuisheng/rocm-gfx803
-
AUTOMATIC1111 - Torch is not able to use GPU. Help!
You'll also need to compile PyTorch and torchvision for gfx803, although I recommend installing the .whl files from here inside your venv, because compiling them on non-Ubuntu distros is a massive pain (I tried).
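For anyone following that route, a minimal sketch of how to confirm the installed wheels actually see the card (the venv and wheel source are taken from the quote above; everything else is a plain PyTorch check):

```python
# A minimal sketch: after installing the gfx803 .whl files into the venv,
# check that PyTorch actually sees the card. ROCm builds of PyTorch expose
# the GPU through the torch.cuda API.
import torch

print(torch.cuda.is_available())          # True if the ROCm runtime found the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should name the gfx803 card, e.g. an RX 580
```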
-
Image Creation Time for each GPU.
I followed the guide from here: https://github.com/xuhuisheng/rocm-gfx803
-
I *think* it's impossible to run SD on an RX 570 (and probably below?)
There is an unofficial build of ROCm 5.2.0 + PyTorch + torchvision with GFX8 support added back in. I have no idea if it works. Perhaps someone who knows Docker/Conda could get SD working with those files.
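One quick way to tell which build ended up in your environment, as a hedged sketch (the exact version string depends on what the wheel was built against):

```python
# ROCm builds of PyTorch report a HIP version string; CPU and CUDA builds
# report None here. Useful for confirming that an unofficial ROCm 5.2.0
# wheel is the one actually in use.
import torch

print(torch.version.hip)  # e.g. a "5.2..." string on a matching ROCm build, None otherwise
```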
stable-diffusion-cpu
- Run Stable Diffusion on Intel CPUs
-
Pocket Dragons - Coffee with your pet Dragon
Created with stable-diffusion-cpu on an HP ProLiant server: CPU-only, dual Xeon, 16c/32t.
-
Run Stable Diffusion on Intel CPUs
I found this repo early on and have been using it to run inference on my M1 Pro MBP. https://github.com/ModeratePrawn/stable-diffusion-cpu
For me it runs at about 3.5 seconds per iteration per picture at 512x512.
There is also a fork here that uses Metal and is much faster: https://github.com/magnusviri/stable-diffusion/tree/apple-si...
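For comparison, a rough sketch of timing CPU inference in the same seconds-per-iteration terms, using the Hugging Face diffusers API rather than this repo's own CompVis-style scripts (the model id and step count here are assumptions):

```python
# A rough sketch of timing CPU-only Stable Diffusion inference; uses the
# Hugging Face diffusers API, not stable-diffusion-cpu's own scripts.
import time
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cpu")  # force CPU inference

steps = 50
start = time.time()
image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=steps, height=512, width=512).images[0]
print(f"{(time.time() - start) / steps:.2f} s per iteration at 512x512")
image.save("out.png")
```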
-
StableDiffusion RUNS on M1 chips.
Download the code from the GitHub repo https://github.com/ModeratePrawn/stable-diffusion-cpu and unzip it. Open it in an editor (e.g. VS Code).
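The usual edit once the code is open is device selection; a hedged sketch of the common idiom (MPS support needs PyTorch 1.12 or newer, and this is not this repo's exact code):

```python
# A sketch of the device-selection idiom for Apple Silicon: prefer the Metal
# (MPS) backend when PyTorch supports it, otherwise fall back to plain CPU.
import torch

def pick_device() -> torch.device:
    if torch.backends.mps.is_available():
        return torch.device("mps")  # Apple Silicon GPU via Metal
    return torch.device("cpu")      # stable-diffusion-cpu's default path

device = pick_device()
# then replace model.cuda() calls with model.to(device)
```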
-
The Stable Diffusion wiki guide does not work on a Mac; however, the guide does not state this.
For CPU, try: https://github.com/ModeratePrawn/stable-diffusion-cpu
What are some alternatives?
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
stable-diffusion-amd
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
stable_diffusion.openvino
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
Real-ESRGAN - Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
txt2imghd - A port of GOBIG for Stable Diffusion
stable-diffusion-webui - Stable Diffusion web UI