stable-diffusion-webui-two-shot vs sd-webui-regional-prompter

| | stable-diffusion-webui-two-shot | sd-webui-regional-prompter |
|---|---|---|
| Mentions | 29 | 60 |
| Stars | 412 | 1,382 |
| Growth | - | - |
| Activity | 3.7 | 8.5 |
| Latest commit | 6 months ago | 29 days ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-two-shot
-
How do I create widescreen images (21:9) and tell SD to paint the person in the middle?
I tried the Regional Prompter and Latent Couple (https://github.com/ashen-sensored/stable-diffusion-webui-two-shot) extensions, but they don't seem to work properly (the latter has awful documentation/examples).
-
Consistent environment setup for multiple scenes
Option 5 is to buy a smallish GPU farm and simply rely on good specific and regional prompting pushed through brute-forced generations to extract similar-looking places out of thousands of hallucinations. Some LoRAs, checkpoints, regional prompting with the Latent Couple extension in A1111, and liberal use of ControlNet could also help.
- “Elon Musk and Mark Zuckerberg in a cage fight.” (SDXL 0.9)
-
Multi-diffusion with LORAs?
Use Latent Couple with Composable LoRA instead
-
Why can't it generate people separately? It always seems to combine them. How do I fix this? In this case it is Dwayne Johnson and Kevin Hart.
The solution to this is to use the 'Latent Couple' plugin.
-
[Frostveil Series] Up the mountain trail...
All three used the same prompt, which requires the Latent Couple extension.
-
MultiDiffusion Region Control plugin for A1111 not installing
git clone -b feature/mask_selection https://github.com/ashen-sensored/stable-diffusion-webui-two-shot.git
-
A Simple Comparison of the 4 Latest Image Upscaling Strategies in Stable Diffusion WebUI
There are some extensions that break things even when they're disabled. If you're using Latent Couple (two shot), uninstall / delete the folder and use this fork: https://github.com/ashen-sensored/stable-diffusion-webui-two-shot
-
Frostveil, the Nordic realm
I used the Latent Couple extension with the following mask:
-
How do I describe an object without those properties being applied to a different part of my image?
I was thinking about this extension.
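Several of the excerpts above lean on Latent Couple's region masks. As a rough illustration of the idea behind this kind of region control — blending per-prompt denoiser outputs by a spatial mask — here is a minimal NumPy sketch. This is not the extension's actual code; `blend_latents` and `fg_weight` are hypothetical names invented for the example.

```python
import numpy as np

def blend_latents(eps_bg, eps_fg, mask, fg_weight=0.8):
    """Combine two per-region noise predictions into one latent update.

    eps_bg, eps_fg : arrays of shape (C, H, W) -- denoiser outputs for the
                     background prompt and the foreground ("couple") prompt.
    mask           : array of shape (H, W) with values in [0, 1] marking
                     where the foreground prompt should apply.
    fg_weight      : how strongly the foreground prediction dominates
                     inside its mask (hypothetical parameter).
    """
    m = mask[None, :, :] * fg_weight          # broadcast mask over channels
    return eps_bg * (1.0 - m) + eps_fg * m    # per-pixel weighted blend

# Toy example: a 1-channel 2x4 "latent", with the right half masked
# as the foreground region.
eps_bg = np.zeros((1, 2, 4))
eps_fg = np.ones((1, 2, 4))
mask = np.zeros((2, 4))
mask[:, 2:] = 1.0

out = blend_latents(eps_bg, eps_fg, mask)
print(out[0, 0])  # left half stays 0.0, right half becomes 0.8
```

The real extensions do this inside the sampling loop at every denoising step, which is why a subject can be confined to one region while the rest of the prompt shapes the background.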
sd-webui-regional-prompter
-
Regional Prompting doesn't seem to be working a lot of the time
So I'm using the Regional Prompter extension https://github.com/hako-mikan/sd-webui-regional-prompter
- Dalle-3 Examples
- Stable Diffusion 1.5 Newbie Question about creating an image with 2 characters
-
"In summary, Stable Diffusion doesn’t really care about commas. But you can use them to organize your prompts for your own orderliness." (Link to quote below.) So... Is there a way to make SD care? To make it "understand" which words we put together to create meaning?
But using Automatic1111, this extension can define a region of the image where the prompt should apply: https://github.com/hako-mikan/sd-webui-regional-prompter
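For reference, Regional Prompter splits a single prompt into per-region sub-prompts using the BREAK keyword plus a divide ratio set in the extension's UI. A sketch of what a two-column setup might look like — syntax recalled from the extension's README, so verify the exact field names there:

```
# Divide mode: Columns, Divide Ratio: 1,1
# With "Use base prompt" enabled, the first chunk applies everywhere.
2girls, park background BREAK
red hair, blue dress BREAK
blonde hair, green dress
```

Each BREAK-separated chunk is bound to one region of the canvas, which is what lets the extension keep two characters' attributes from bleeding into each other.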
- Train SD for CAPTION WRITING? I'm tired of uploading hairstyle pics and got "male public hair"
- How to fix an issue with generating two guys when the aspect ratio isn't square?
-
A little bit of party after fighting each other in Smash bros (Text2img, controlnet, regional prompter, adetailer)
Second, install Regional Prompter and ADetailer in the Automatic1111 webui. Next, go to Settings > ADetailer and change "sort bounding boxes" from "none" to "left and right". This means that ADetailer will inpaint our subjects starting from the very left to the right, allowing for greater control over what we want.
- What are some must-have/fun extensions or modules?
-
How to control a scene?
You can use ControlNets to control composition in various ways. You can use extensions like multidiffusion upscaler and regional prompter to control the layout of a scene. You can also inpaint details into a scene with the arrangement you want.
- Is there a way to guarantee one model in the image?
What are some alternatives?
sd-webui-latent-couple - Latent Couple extension (two shot diffusion port)
MultiDiffusion - Official Pytorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" (ICML 2023)
stable-diffusion-webui-composable-lora - This extension replaces the built-in LoRA forward procedure.
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimization, licensed under CC BY-NC-SA 4.0
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
sd-webui-stablesr - StableSR for Stable Diffusion WebUI - Ultra High-quality Image Upscaler
mixture-of-diffusers - Mixture of Diffusers for scene composition and high resolution image generation
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
adetailer - Auto detecting, masking and inpainting with detection model.