stable-diffusion-webui-two-shot vs ultimate-upscale-for-automatic1111
| | stable-diffusion-webui-two-shot | ultimate-upscale-for-automatic1111 |
|---|---|---|
| Mentions | 29 | 52 |
| Stars | 412 | 1,494 |
| Growth | - | - |
| Activity | 3.7 | 4.2 |
| Latest commit | 6 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-two-shot
- How do I create widescreen images (21:9) and tell SD to paint the person in the middle?
  I tried the Regional Prompter and Latent Couple (https://github.com/ashen-sensored/stable-diffusion-webui-two-shot) extensions, but neither seems to work properly (the latter has awful documentation/examples).
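Latent Couple lays out its regions with comma-separated "Divisions" and "Positions" strings (each division entry is `rows:cols`, each position entry is `row:col`, and the first pair is conventionally the whole-canvas background). As a hedged sketch of how those strings map to pixel regions on a 21:9-ish canvas, with the middle column reserved for the person prompt (the exact string syntax is assumed from the extension's README; the resolution is just an example):

```python
# Sketch: turn Latent Couple "Divisions"/"Positions" strings into pixel
# rectangles. Assumed syntax: division entry "rows:cols" splits the canvas
# into a grid, position entry "row:col" picks one cell of that grid.

def region_boxes(divisions: str, positions: str, width: int, height: int):
    """Return one (x0, y0, x1, y1) pixel box per region entry."""
    boxes = []
    for div, pos in zip(divisions.split(","), positions.split(",")):
        rows, cols = (int(v) for v in div.split(":"))
        row, col = (int(v) for v in pos.split(":"))
        cell_w, cell_h = width // cols, height // rows
        boxes.append((col * cell_w, row * cell_h,
                      (col + 1) * cell_w, (row + 1) * cell_h))
    return boxes

# 1344x576 is roughly 21:9. Background covers everything ("1:1" at "0:0");
# three columns follow, and the person prompt would go with the middle
# column, i.e. the region at position 0:1.
boxes = region_boxes("1:1,1:3,1:3,1:3", "0:0,0:0,0:1,0:2", 1344, 576)
```

Pairing this layout with per-region prompts (background scenery for the outer columns, the person for the middle one) is the usual way to keep a subject centered in a widescreen frame.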
- Consistent environment setup for multiple scenes
  Option 5 is to buy a smallish GPU farm and simply rely on good specific and regional prompting, pushed through brute-forced generations, to extract similar-looking places out of the thousands of hallucinations. Some LoRAs, checkpoints, regional prompting with the Latent Couple extension in A1111, and liberal abuse of ControlNet could also help.
- “Elon Musk and Mark Zuckerberg in a cage fight.” (SDXL 0.9)
- Multi-diffusion with LoRAs?
  Use Latent Couple with Composable LoRA instead.
- Why can't it generate people separately? It always seems to combine them. How do I fix this? In this case it is Dwayne Johnson and Kevin Hart.
  The solution to this is to use the "Latent Couple" plugin.
- [Frostveil Series] Up the mountain trail...
  All three used the same prompt, which requires the Latent Couple extension.
- MultiDiffusion Region Control plugin for A1111 not installing
  `git clone -b feature/mask_selection https://github.com/ashen-sensored/stable-diffusion-webui-two-shot.git`
- A Simple Comparison of 4 Latest Image Upscaling Strategy in Stable Diffusion WebUI
  There are some extensions that break things even when they're disabled. If you're using Latent Couple (two-shot), uninstall it (delete the folder) and use this fork: https://github.com/ashen-sensored/stable-diffusion-webui-two-shot
- Frostveil, the Nordic realm
  I used the Latent Couple extension with the following mask:
- How do I describe an object without those properties being applied to a different part of my image?
  I was thinking about this extension.
ultimate-upscale-for-automatic1111
- Ultimate Upscale for A1111 BUG
  I have this problem while trying to use the "Ultimate Upscale for automatic1111" plugin in A1111, but I cannot find any information about it in any GitHub issue, Reddit post, or other help platform.
- Mass generate images?
  If you don't already have it, install the Ultimate SD Upscale script. It's in the Automatic1111 available-extensions list, or you can install it from the URL. It gives you the option to choose from whatever upscalers you have installed.
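For mass generation specifically, A1111 also exposes a REST API (launch the webui with `--api`); `/sdapi/v1/txt2img` is the batch-friendly endpoint. A minimal sketch that builds one payload per seed — the prompt and parameter values are placeholders, not anything prescribed by the script:

```python
# Sketch: mass-generating via A1111's built-in API. The /sdapi/v1/txt2img
# endpoint and these field names are part of the webui API; the concrete
# prompt, sizes, and seed range below are illustrative assumptions.
# import json, urllib.request  # needed only for the actual HTTP calls below

def txt2img_payload(prompt: str, seed: int) -> dict:
    return {
        "prompt": prompt,
        "negative_prompt": "lowres, blurry",
        "steps": 25,
        "width": 512,
        "height": 768,
        "seed": seed,  # fixing distinct seeds makes runs reproducible
    }

# One request per seed; each seed yields a different image for the same prompt.
payloads = [txt2img_payload("portrait of an adventurer", seed)
            for seed in range(100, 110)]

# for p in payloads:
#     req = urllib.request.Request(
#         "http://127.0.0.1:7860/sdapi/v1/txt2img",
#         data=json.dumps(p).encode(),
#         headers={"Content-Type": "application/json"})
#     urllib.request.urlopen(req)  # response JSON carries base64-encoded images
```

Upscaling the keepers afterwards with Ultimate SD Upscale (rather than upscaling everything) keeps a mass run cheap.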
- Adventure Girl
- Can't use the SD Upscale script... errors every time. Suggestions?
- I love the Tile ControlNet, but it's really easy to overdo. Look at this monstrosity of tiny detail I made by accident.
  Basically you'd select the "tile" ControlNet (both the preprocessor and the ControlNet model), and then use either Tiled Diffusion or the Ultimate SD Upscaler to create a tiled upscale.
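That tile + Ultimate SD Upscale combination can also be driven through A1111's `/sdapi/v1/img2img` endpoint. A hedged sketch of the request shape: `script_name` and `alwayson_scripts` are real API fields, but the ControlNet model filename is only an example, and Ultimate SD Upscale takes *positional* `script_args` whose order is version-specific, so it is left empty here rather than guessed:

```python
# Sketch: img2img payload pairing a ControlNet "tile" unit with the
# Ultimate SD Upscale script. Model name is an example; script_args order
# is version-specific -- check the script source for your install.

def tile_upscale_payload(init_image_b64: str, prompt: str) -> dict:
    return {
        "init_images": [init_image_b64],
        "prompt": prompt,
        "denoising_strength": 0.35,  # keep low to avoid the over-detailed look
        "script_name": "Ultimate SD upscale",
        "script_args": [],  # positional; fill in per your script version
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "module": "tile_resample",                       # preprocessor
                    "model": "control_v11f1e_sd15_tile [a371b31b]",  # example name
                    "weight": 0.6,  # lower weight also tames excess detail
                }]
            }
        },
    }

payload = tile_upscale_payload("<base64 png>", "detailed photo")
```

Keeping `denoising_strength` low (roughly 0.2-0.4) and the ControlNet weight moderate is the usual guard against exactly the "monstrosity of tiny detail" failure mode described above.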
- Upscale photos and artwork in A1111
- ControlNet Reference-Only problems
- Animals as Aztec/Mayan warriors
  These were made over multiple iterations, but the workflow basically was: generate a seed image using lyriel, edge of realism, absolute reality, or rpgv4; once I get an actual animal, use it in ControlNet reference_only and regenerate with a photography prompt; then upscale with Ultimate SD Upscale (https://github.com/Coyote-A/ultimate-upscale-for-automatic1111).
- Some 4k wallpaper i made while trying out a few extensions
- Controlnet : Reference only test
What are some alternatives?
sd-webui-latent-couple - Latent Couple extension (two shot diffusion port)
chaiNNer - A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
MultiDiffusion - Official Pytorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" (ICML 2023)
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
sd-webui-regional-prompter - set prompt to divided region
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
adetailer - Auto detecting, masking and inpainting with detection model.
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
sd-webui-stablesr - StableSR for Stable Diffusion WebUI - Ultra High-quality Image Upscaler
stable-diffusion-webui - Stable Diffusion web UI