I didn't realize 512x512 on 4GB VRAM (Win10 over RDP) was anything unusual; I just followed https://github.com/awesome-stable-diffusion/awesome-stable-d... to https://github.com/basujindal/stable-diffusion and went through the instructions.
Use half-precision floats and/or the optimized forks:
https://github.com/basujindal/stable-diffusion
https://github.com/neonsecret/stable-diffusion
Works on 8GB with tensor cores in ~15 seconds at default settings.
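(For context, the same half-precision trick is easy to try with the Hugging Face diffusers library instead of the forks above — a minimal sketch, assuming diffusers is installed and a CUDA GPU is available:)

    import torch
    from diffusers import StableDiffusionPipeline

    # fp16 weights roughly halve VRAM versus fp32
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")
    pipe.enable_attention_slicing()  # trades some speed for lower peak VRAM

    image = pipe("a photo of an astronaut riding a horse").images[0]
    image.save("out.png")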
If you're okay waiting a while longer and have plenty of RAM, https://github.com/bes-dev/stable_diffusion.openvino has a somewhat CPU-optimized version as well that relies on system memory rather than VRAM.
My laptop takes about 6 seconds per iteration, so it's significantly slower, but if you're willing to wait I bet you'll have a much easier time plugging more RAM into your system than adding VRAM.
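(If it helps, that repo drives generation from a demo script; a hedged sketch of the invocation — the exact flags are an assumption from its README, so check demo.py --help:)

    pip install -r requirements.txt
    python demo.py --prompt "a castle on a hill, matte painting"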
For those without a GPU, or without a powerful enough one: you can start the hlky stable-diffusion webui (yes, a web UI) in Google Colab with this notebook[0].
It's simple and it works, using Colab for the processing while giving you a URL (ngrok-style) to open the pretty web UI in your browser.
I've been using that on the go when away from my PC and it's been working very well for me (after trying numerous other Colab-specific repos, trying to fix them, and failing).
[0]: https://github.com/altryne/sd-webui-colab
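(The public-URL trick these notebooks use is typically Gradio's built-in share tunnel — the hlky webui is Gradio-based. A minimal sketch of the mechanism; the function here is a stand-in, not the real txt2img call:)

    import gradio as gr

    def fake_txt2img(prompt: str) -> str:
        # stand-in for the real generation call
        return f"would generate: {prompt}"

    demo = gr.Interface(fn=fake_txt2img, inputs="text", outputs="text")
    demo.launch(share=True)  # prints a public URL that tunnels to the Colab VM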
The Linux/not-Windows instructions on https://github.com/hlky/stable-diffusion/wiki/Docker-Guide worked well for me using WSL2 with nvidia-docker
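(Before following that guide, a quick smoke test that the GPU is visible inside containers under WSL2 — the image tag is just one known-good example:)

    # should print your GPU table if WSL2 + nvidia-docker are wired up
    docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi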
Use the original SD repo, but modify txt2img.py per:
https://github.com/CompVis/stable-diffusion/issues/86#issuec...
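(The linked comment is truncated, so this is an assumption: the change most often cited in that issue is casting the loaded model to half precision. A toy sketch of the idea — the nn.Sequential stands in for the real LatentDiffusion model that txt2img.py builds via load_model_from_config(config, ckpt):)

    import torch
    from torch import nn

    # toy stand-in for the loaded diffusion model
    model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))

    model = model.half().cuda()  # fp16 weights: roughly halves VRAM
    x = torch.randn(1, 64, dtype=torch.float16, device="cuda")
    with torch.no_grad():
        print(model(x).dtype)    # torch.float16 -- inputs must be fp16 too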
I had good luck with these directions, which let you run inside a docker container:
https://github.com/AshleyYakeley/stable-diffusion-rocm
I had to make the one-line change suggested in issue #3 to get it to run under 8GB.
radeontop suggests 4GB might work.
I also had to add this environment variable to make it work on my unsupported Radeon 6600 XT:
HSA_OVERRIDE_GFX_VERSION=10.3.0
It takes under two minutes per batch of 5 images with the --turbo option.
(Base OS is Manjaro; using the distro's version of Docker, not the Flatpak package.)
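(For anyone wiring that override in, a hedged sketch of passing it into a ROCm container — the device flags are the standard ROCm-in-Docker passthrough, and the image name is a placeholder:)

    docker run -it \
      --device=/dev/kfd --device=/dev/dri \
      -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
      your-rocm-sd-image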
If you don't have a GPU, Paperspace will rent you an appropriate VM.