stable-diffusion-webui
Discontinued Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui] (by sd-webui)
-
stable-diffusion
Discontinued This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI] (by lstein)
-
diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
-
Linux-StableDiffusion-Script
Discontinued A simple script to automate the installation and running of the hlky Stable Diffusion fork for Linux users. Please see my guide for running this on Linux: https://rentry.org/linux-sd
Nice-to-know tricks for sd-webui:
- Activate the advanced "create prompt matrix" option and use a prompt like:
@a painting of a (forest|desert|swamp|island|plains) painted by (claude monet|greg rutkowski|thomas kinkade)
- Add different relative weights to words in a prompt:
watercolor :0.5 painting :0.2 by picasso :0.3
- Generate much larger images with limited VRAM by using optimized versions of attention.py and model.py:
https://github.com/sd-webui/stable-diffusion-webui/discussio...
- Generate "Loab, the AI haunting woman" if you can (try using textual inversion with negatively weighted prompts):
https://www.cnet.com/science/what-is-loab-the-haunting-ai-ar...
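Both syntax tricks above are easy to illustrate. Here is a minimal plain-Python sketch of how a prompt matrix expands into every combination and how per-phrase weights might be parsed and normalized; this is assumed behavior for illustration, not the webui's actual parser:

```python
import itertools
import re

def expand_prompt_matrix(prompt: str) -> list[str]:
    """Expand (a|b|c) groups into every combination of alternatives."""
    groups = re.findall(r"\(([^)]*\|[^)]*)\)", prompt)
    options = [g.split("|") for g in groups]
    results = []
    for combo in itertools.product(*options):
        p = prompt
        for group, choice in zip(groups, combo):
            p = p.replace(f"({group})", choice, 1)
        results.append(p)
    return results

def parse_weighted_prompt(prompt: str) -> list[tuple[str, float]]:
    """Split 'phrase :w phrase :w ...' into (phrase, weight) pairs,
    normalizing the weights so they sum to 1."""
    pairs = re.findall(r"(.+?)\s*:([\d.]+)\s*", prompt)
    total = sum(float(w) for _, w in pairs)
    return [(text.strip(), float(w) / total) for text, w in pairs]

prompts = expand_prompt_matrix(
    "a painting of a (forest|desert) painted by (claude monet|greg rutkowski)"
)
# 2 x 2 alternatives -> 4 prompts
pairs = parse_weighted_prompt("watercolor :0.5 painting :0.2 by picasso :0.3")
```

Two groups of two alternatives yield four prompts, which is why a matrix prompt multiplies your generation count quickly.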
I recently switched from a CPU-only version to release 1.13 of this repo: https://github.com/lstein/stable-diffusion
The original txt2img and img2img scripts are a bit wonky and not all of the samplers work, but as long as you stick to dream.py and use a working sampler (I've had good luck with k_lms), it works great and runs way faster than the CPU version.
Works great on 32 GB of RAM, but I'm honestly tempted to sell this one and get a 64 GB model once the M2 Pros come around. It's capable of eating up all the RAM you can throw at it to generate multiple pictures simultaneously.
The diffusers repo shows how to use the fp16 variant:
https://github.com/huggingface/diffusers
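For reference, loading the pipeline in half precision looks roughly like this. This is a sketch based on the diffusers documentation of the time; the model ID is an example and the exact arguments (e.g. `revision="fp16"`) may differ between diffusers versions, so check the current docs:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fp16 weights to roughly halve VRAM use (requires a CUDA GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example model id
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting by picasso").images[0]
image.save("out.png")
```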
There's some trickery and some details, but yes: https://github.com/NVIDIA/nvidia-docker
Some people have been able to run it on recent AMD cards with ROCm: https://github.com/CompVis/stable-diffusion/issues/48