rocm-gfx803
| | rocm-gfx803 | stable_diffusion.openvino |
|---|---|---|
| Mentions | 7 | 47 |
| Stars | 167 | 1,528 |
| Growth | - | - |
| Activity | 1.1 | 0.8 |
| Latest Commit | about 1 year ago | 8 months ago |
| Language | Python | |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rocm-gfx803
- ROCm gfx803 archlinux
-
My brother is giving away a PC he built with 8 AMD Radeon RX Vega 64 GPUs (8 GB RAM each). I've only ever done ML on Nvidia cards. Is there anything I can do with these?
That specific card currently has ROCm support, and ROCm is supported by at least TensorFlow and PyTorch, plus many other less widely used libraries like CuPy. You're right that long-term support is poor, though. I have a GPU that is only still usable thanks to continued COMMUNITY support: AMD dropped it in ROCm 4.0, but thanks to Xuhuisheng's patches the RX 580 works with current ROCm despite AMD's lack of support. That's what open source can accomplish: https://github.com/xuhuisheng/rocm-gfx803
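Once a community-patched ROCm stack like the one above is installed, it helps to confirm that ROCm actually enumerates the card under the expected gfx target. A minimal sketch, assuming you scan `rocminfo` output for gfx target strings (the sample text below is illustrative, not real tool output):

```python
import re

def gfx_targets(rocminfo_output: str) -> list[str]:
    # Collect every gfx target string that ROCm reports for the system.
    return re.findall(r"gfx[0-9a-f]+", rocminfo_output)

# Illustrative excerpt of what rocminfo prints for an RX 580.
sample = """
  Name:                    gfx803
  Marketing Name:          Radeon RX 580 Series
"""
print(gfx_targets(sample))  # ['gfx803']
```

If `gfx803` does not appear in the real output, the patched runtime is not seeing the card and no amount of PyTorch configuration will help.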
-
Automatic1111 - Torch is not able to use GPU. Help!
You'll also need PyTorch and torchvision built for gfx803. I recommend installing the .whl files from here inside your venv, because compiling them yourself on non-Ubuntu distros is a massive pain (I tried).
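When installing prebuilt wheels into a venv, the wheel's CPython tag has to match the interpreter, or pip will refuse it. A hedged sketch that checks a wheel filename against a given Python version (the filename is an illustrative example, not a real gfx803 release):

```python
import sys

def wheel_matches(wheel_name: str, version_info=sys.version_info) -> bool:
    # A CPython wheel tag looks like cp39, cp310, etc.; it must appear
    # in the wheel filename for pip to consider it installable.
    py_tag = f"cp{version_info[0]}{version_info[1]}"
    return py_tag in wheel_name

# Hypothetical wheel name, checked against Python 3.10 and 3.9:
wheel_matches("torch-1.13.1-cp310-cp310-linux_x86_64.whl", (3, 10))  # True
wheel_matches("torch-1.13.1-cp310-cp310-linux_x86_64.whl", (3, 9))   # False
```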
-
Image Creation Time for each GPU.
I followed the guide from here: https://github.com/xuhuisheng/rocm-gfx803
-
I *think* it's impossible to run SD on an RX 570 (and probably below?)
There is an unofficial build of ROCm 5.2.0 + pytorch + torchvision with GFX8 support added back in. I have no idea if it works. Perhaps someone who knows Docker/Conda could get SD working with those files.
- Run Stable Diffusion on Intel CPUs
stable_diffusion.openvino
- FLaNK Stack 05 Feb 2024
-
Installing A1111 Stable Diffusion Error
It might be the --xformers flag; try removing it, since you're not using CUDA and can't run xformers anyway. You could also try --use-cpu all. Also check out https://github.com/bes-dev/stable_diffusion.openvino, which is probably your best option if you're on CPU. If your PC's graphics are Intel UHD 620, you don't have a discrete GPU, so optimized CPU inference is the best way to run it.
- 4 Reasons to Switch to Intel Arc GPUs
-
why is SD not actually using the GPU?
SD can be run on a CPU without a GPU. I know for certain it can be done with OpenVINO; in fact, on some i7s it runs at around 3 seconds per iteration. There was a Reddit SD thread a while back saying it can be done with Automatic1111 too. Also, some recent threads about problems with AMD GPUs suggest Automatic1111 is using the CPU rather than the intended GPU. (Fortunately, I have a GPU, so I don't have to deal with it myself!)
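A per-iteration speed like the one quoted is easy to turn into wall-clock time per image. A small sketch, assuming a typical 20-step sampler (the step count is an assumption, not from the thread):

```python
def total_seconds(sec_per_it: float, steps: int) -> float:
    # Wall-clock time for one image at a fixed sampler step count.
    return sec_per_it * steps

# At the quoted ~3 s/it, a 20-step image takes about a minute:
print(total_seconds(3.0, 20))  # 60.0
```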
-
Slow Performance on RX 6800 XT; Am I Doing Something Wrong or is ROCm Just this Slow?
I'm not actually entirely convinced that it's even using the GPU. Radeontop shows 0% utilization while the images are generating. Additionally, the listed iteration speed is implausibly slow for any GPU: it says 26.58 s/it, which is slower than just running on a CPU.
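The reasoning in that reply can be made mechanical: if a supposed GPU run is no faster than a known CPU baseline, CPU fallback is the likely explanation. A sketch with illustrative baseline numbers (the 6 s/it CPU figure is an assumption, not a measurement from the thread):

```python
def likely_cpu_fallback(measured_s_per_it: float, cpu_s_per_it: float) -> bool:
    # If the "GPU" run is no faster than a CPU baseline,
    # suspect the framework silently fell back to the CPU.
    return measured_s_per_it >= cpu_s_per_it

# 26.58 s/it reported vs. an assumed ~6 s/it CPU baseline:
print(likely_cpu_fallback(26.58, 6.0))  # True
```

Combined with the 0% reading from radeontop, that is strong evidence the RX 6800 XT was never used at all.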
-
How can i fix it?
iGPUs are, in short, not supported. There's this repo that may or may not help you, but even if it did, I wouldn't expect much.
-
Stable Diffusion Web UI for Intel Arc
You can also run it natively on Windows with OpenVINO; there's a barebones web UI for it in one of the forks. It requires setting CPU to GPU in one of the files. https://github.com/bes-dev/stable_diffusion.openvino
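The "setting CPU to GPU in one of the files" step amounts to changing a single device string. A hedged sketch that rewrites such a string in a line of source text (the `StableDiffusionEngine` call and `device` argument name are assumptions about the fork's code, shown for illustration only):

```python
def switch_device(source: str, new_device: str = "GPU") -> str:
    # Replace the hard-coded CPU device string with the requested
    # OpenVINO device name (e.g. "GPU" for an Intel GPU plugin).
    return source.replace('device="CPU"', f'device="{new_device}"')

# Hypothetical line from the fork's engine file:
line = 'engine = StableDiffusionEngine(model=args.model, device="CPU")'
print(switch_device(line))
```

In practice you would make the same one-string edit by hand in the fork's engine file rather than running a script.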
-
Intel Arc A770 is underperforming in Tom's Hardware Review
In https://github.com/bes-dev/stable_diffusion.openvino/blob/master/stable_diffusion_engine.py
-
So a new benchmark was done for Stable Diffusion on GPU's
"We ended up using three different Stable Diffusion projects for our testing, mostly because no single package worked on every GPU. For Nvidia, we opted for Automatic 1111's webui version. AMD GPUs were tested using Nod.ai's Shark version, while for Intel's Arc GPUs we used Stable Diffusion OpenVINO."
- Anyone here using Mac?
What are some alternatives?
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
stable-diffusion
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
stable-diffusion-cpu
stable-diffusion
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
stable-diffusion-rocm
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
stable-diffusion - A latent text-to-image diffusion model