| | stable-diffusion | stable-diffusion-rocm |
|---|---|---|
| Mentions | 5 | 5 |
| Stars | 94 | 57 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | about 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Dockerfile |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
-
Fixing excessive contrast/saturation resulting from high CFG scales
I'm using a modified noise schedule (Karras et al., arXiv:2206.00364) taken from a LAION Discord user's fork. With that schedule, in their testing and my own, k_heun seems to perform about 3x better than the other samplers at an equivalent step count (each step takes about 2x longer, but it's still a net win). It also performs well with as few as 7 steps. I'd be surprised if euler were far superior, since from my understanding heun is basically an improved version of it.
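The Karras et al. schedule referenced above has a simple closed form: noise levels are interpolated linearly in sigma^(1/rho) space and mapped back. A minimal sketch, assuming illustrative defaults (rho = 7 and the sigma range here are not the fork's actual settings):

```python
# Sketch of the sigma schedule from Karras et al., arXiv:2206.00364.
# rho, sigma_min, and sigma_max are illustrative defaults, not the
# values used by the fork mentioned above.
def karras_sigmas(n, sigma_min=0.1, sigma_max=10.0, rho=7.0):
    ramp = [i / (n - 1) for i in range(n)]
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    # Interpolating in sigma^(1/rho) space concentrates steps near
    # sigma_min, where fine detail is resolved.
    return [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]

# 7-step schedule: starts at sigma_max, ends at sigma_min,
# strictly decreasing in between.
sigmas = karras_sigmas(7)
```

With rho = 7 most of the 7 steps land at low noise levels, which is consistent with the observation that this schedule holds up even at very low step counts.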
- Run Stable Diffusion on Your M1 Mac’s GPU
- The 'dummies' are craving an even 'dummier' tutorial (please)
stable-diffusion-rocm
-
[D] About the current state of ROCm
Re: stable diffusion https://github.com/AshleyYakeley/stable-diffusion-rocm
-
It's time to upscale FSR 2 even further: Meet FSR 2.1
Very easy, actually. This isn't officially documented, but with a recent enough kernel you don't have to install anything: you can grab the official ROCm container and it'll just work. For example, for Stable Diffusion see https://github.com/AshleyYakeley/stable-diffusion-rocm/blob/...
-
Running Stable Diffusion on Your GPU with Less Than 10Gb of VRAM
I had good luck with these directions, which let you run inside a docker container:
https://github.com/AshleyYakeley/stable-diffusion-rocm
I had to make the one-line change suggested in issue #3 to get it to run under 8GB.
radeontop suggests 4GB might work.
I also had to add this environment variable to make it work on my unsupported radeon 6600xt:
HSA_OVERRIDE_GFX_VERSION=10.3.0
It takes under two minutes per batch of 5 images with the --turbo option.
(Base OS is Manjaro; using the distro's version of Docker, not the Flatpak docker package.)
If you don't have a GPU, paperspace will rent you an appropriate VM.
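Putting the pieces from the comment above together, a run might look like the following. This is a hedged sketch: the image tag and entrypoint script are illustrative assumptions, not the repo's documented invocation.

```shell
# Hypothetical invocation: the image tag ("stable-diffusion-rocm") and the
# entrypoint script are illustrative, not taken from the repo's docs.
# The --device flags expose the ROCm/AMDGPU device nodes to the container;
# HSA_OVERRIDE_GFX_VERSION=10.3.0 is the workaround quoted above for
# officially unsupported cards such as the RX 6600 XT.
docker run -it --rm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  stable-diffusion-rocm \
  python scripts/txt2img.py --turbo --prompt "an astronaut riding a horse"
```

The `--device` passthrough is what lets a plain distro Docker (no extra runtime) see the GPU, which matches the "didn't have to install anything" experience described in these comments.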
-
Run Stable Diffusion on Your M1 Mac’s GPU
I have it working on an RX 6800, used the scripts from this repo[0] to build a docker image that has ROCm drivers and PyTorch installed.
I'm running Ubuntu 22.04 LTS as the host OS and didn't have to touch anything beyond the basic Docker install. The next step is to build a new Dockerfile that adds in the Stable Diffusion WebUI.[1]
[0] https://github.com/AshleyYakeley/stable-diffusion-rocm
- Dockerfile for easy use on an AMD GPU
What are some alternatives?
invisible-watermark - python library for invisible image watermark (blind image watermark)
stable-diffusion
stable_diffusion.openvino
stable-diffusion-intel-mac
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
stable-diffusion
3d-ken-burns - an implementation of 3D Ken Burns Effect from a Single Image using PyTorch
gradi
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
onnx - Open standard for machine learning interoperability