stable-diffusion-webui-two-shot vs sd-webui-cutoff

| | stable-diffusion-webui-two-shot | sd-webui-cutoff |
|---|---|---|
| Mentions | 29 | 45 |
| Stars | 412 | 1,163 |
| Growth | - | - |
| Activity | 3.7 | 5.9 |
| Last commit | 6 months ago | 6 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-two-shot
-
How do I create widescreen images (21:9) and tell SD to paint the person in the middle?
I tried the Regional Prompter and Latent Couple (https://github.com/ashen-sensored/stable-diffusion-webui-two-shot) extensions, but neither seems to work properly (the latter has poor documentation and examples).
-
Consistent environment setup for multiple scenes
Option 5 is to buy a smallish GPU farm and simply rely on good specific and regional prompting pushed through brute-forced generations to extract similar-looking places out of the thousands of hallucinations. Some LoRAs, checkpoints, regional prompting with the Latent Couple extension in A1111, and an abundant abuse of ControlNet could also help.
- “Elon Musk and Mark Zuckerberg in a cage fight.” (SDXL 0.9)
-
Multi-diffusion with LORAs?
Use Latent Couple with Composable LoRA instead
-
Why can't it generate people separately? It always seems to combine them. How do I fix this? In this case it is Dwayne Johnson and Kevin Hart.
The solution to this is to use the 'Latent Couple' plugin.
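As a sketch of how such a two-person split might be set up in Latent Couple (the values and sub-prompts below are illustrative, not from the original post): the canvas is divided into regions via the extension's Divisions/Positions/Weights fields, and the prompt uses `AND` to separate a sub-prompt per region.

```text
# Latent Couple settings (illustrative values)
Divisions: 1:1,1:2,1:2     # whole canvas, left half, right half
Positions: 0:0,0:0,0:1     # background, left region, right region
Weights:   0.2,0.8,0.8

# Prompt, with AND separating one sub-prompt per region
two men standing side by side, photo
AND Dwayne Johnson, bald, smiling
AND Kevin Hart, short hair, laughing
```

The first region covers the whole image and carries the shared scene description at a low weight; each half then gets its own person, which is what keeps the two subjects from blending.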
-
[Frostveil Series] Up the mountain trail...
All three used the same prompt, which requires the Latent Couple extension.
-
MultiDiffusion Region Control plugin for A1111 not installing
git clone -b feature/mask_selection https://github.com/ashen-sensored/stable-diffusion-webui-two-shot.git
-
A Simple Comparison of 4 of the Latest Image Upscaling Strategies in Stable Diffusion WebUI
There are some extensions that break things even when they're disabled. If you're using Latent Couple (two shot), uninstall / delete the folder and use this fork: https://github.com/ashen-sensored/stable-diffusion-webui-two-shot
-
Frostveil, the Nordic realm
I used the Latent Couple extension with the following mask:
-
How do I describe an object without those properties being applied to a different part of my image?
I was thinking about this extension.
sd-webui-cutoff
-
Strategies for avoiding keyword leakage
I've used the Cutoff extension before to help limit prompt bleeding. It won't work 100% of the time, but in my experience it helps produce cleaner results more often. Personally, I don't use it much in my workflow, but it's nice to have for the times when I need it.
- Colors on the wrong stuff
-
Overwhelmed by new extension list in Web UI after updating to latest A1111
Cutoff - Prevents concepts like color from bleeding into other parts of prompt. For ex, you know how using "blue eyes" may also make clothing or hair blue? This mitigates that.
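As an illustrative sketch of how Cutoff is typically used against that kind of color bleed (field names follow the extension's UI; the prompt and values here are hypothetical):

```text
Prompt: a girl with blue eyes, red dress, blonde hair

Cutoff settings
  Enabled:        yes
  Target tokens:  blue, red, blonde   # the colors to isolate from each other
  Weight:         0.5                 # strength of the isolation
```

Listing each color in Target tokens tells the extension to limit that token's influence to its own phrase, so "blue" stays on the eyes rather than spreading to the dress or hair.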
-
Multiple people question
There's an extension called Cutoff that's intended to help localize the influence of prompts. I've had rather equivocal success with it, so I can't at this time wholeheartedly endorse it, but it's certainly worth trying.
-
Y'all are gonna get sick of me lol. I am trying to use Target Tokens....
https://github.com/hnmr293/sd-webui-cutoff (prevents color-contamination eg. eye-color leaking into hair color)
-
Controlnet reference+lineart model works so great!
Then get the Cutoff extension and enable it: https://github.com/hnmr293/sd-webui-cutoff
-
How do I describe an object without those properties being applied to a different part of my image?
This extension might be what you're looking for.
- [Stable Diffusion] Always the same clothing color on the character. It's now possible.
-
Is there a sd client or way to have the cpu take over when the gpu is out of memory?
The ones I use most often, in conjunction with ControlNet, are Regional Prompter, and Cutoff.
-
I keep getting Caucasian skin even though am I specifying obsidian skin or black... yellow eyes instead of lavender...
Then use Cutoff to reduce color contamination when you reference multiple colors (I recommend weight 2 with "Cutoff strongly" disabled). https://github.com/hnmr293/sd-webui-cutoff
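The settings recommended in that comment might look like this in the extension's panel (the prompt and token list below are illustrative):

```text
Prompt: a woman with obsidian skin, lavender eyes, yellow dress

Cutoff settings (per the comment above)
  Enabled:          yes
  Target tokens:    obsidian, lavender, yellow
  Weight:           2.0   # higher than the default for stronger isolation
  Cutoff strongly:  off
```

Raising the weight above its default makes each listed color stick more firmly to its own phrase, at the cost of occasionally flattening the image if pushed too far.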
What are some alternatives?
sd-webui-latent-couple - Latent Couple extension (two shot diffusion port)
a1111-sd-webui-tome - ToMe extension for Stable Diffusion A1111 WebUI
MultiDiffusion - Official Pytorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" (ICML 2023)
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
sd-webui-regional-prompter - set prompt to divided region
kohya_ss
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
a1111-sd-webui-tagcomplete - Booru style tag autocompletion for AUTOMATIC1111's Stable Diffusion web UI
sd-webui-stablesr - StableSR for Stable Diffusion WebUI - Ultra High-quality Image Upscaler
adetailer - Auto detecting, masking and inpainting with detection model.