| | multi-subject-render | stable-diffusion-webui-composable-lora |
|---|---|---|
| Mentions | 18 | 2 |
| Stars | 359 | 153 |
| Growth | - | - |
| Activity | 2.5 | 6.7 |
| Latest Commit | about 1 year ago | 5 months ago |
| Language | Python | Python |
| License | - | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multi-subject-render
- Creating pictures of multiple people with distinct faces: "You can use the multi-subject renderer (https://github.com/Extraltodeus/multi-subject-render.git)."
- Can I use SD to generate group pictures (of, say, me and my cousin, or me and multiple cousins)? "Get this extension, and as always, please read the docs to avoid problems."
- Find it hard to tune my prompt for more than 2 characters: "There's also a script/extension, https://github.com/Extraltodeus/multi-subject-render, but it's fiddly to get working right, and I think the other workflow is faster."
- Textual Inversion: TI TLDR for the Lazy. How to Make Fake People: Simple TI Training Using 6 Images and Very Low Settings. Bonus 1: How to Make Fake People That Look Like Anything You Want. Bonus 2: Why 1980s Nightcrawler Don't Care About Your Prompts. With Unedited Image Samples.
- How to do multiple chars in 1 image: "There are some ideas to create multiple different subjects, such as this extension for AUTOMATIC1111 (https://github.com/Extraltodeus/multi-subject-render), or Area Composition if you are using ComfyUI (https://comfyanonymous.github.io/ComfyUI_examples/area_composition/)."
- How to detail 2 objects, each with its own qualities in prompt?
- Ladies in sexy pajamas
- Uhhhhh
- Tips for creating pictures with multiple characters? "You can do it with https://github.com/Extraltodeus/multi-subject-render, but I don't really know how to use it."
- What are you struggling to do? "There is an extension called multi-subject-render that allows you to provide one prompt for the background and a second prompt for the foreground."
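The extension's split between a background prompt and a foreground prompt comes down to rendering the two separately and then merging the foreground subjects over the background. A minimal sketch of that compositing step, assuming NumPy arrays stand in for generated images and a toy depth map stands in for the output of a real depth estimator such as MiDaS (this is an illustration, not the extension's actual code):

```python
# Hypothetical sketch of depth-based compositing: overlay a separately
# rendered foreground onto a background wherever the foreground's estimated
# depth says "near the camera".
import numpy as np

def composite_by_depth(background, foreground, depth, near_threshold=0.5):
    """Overlay `foreground` onto `background` where `depth` < threshold.

    background, foreground: HxWx3 float arrays in [0, 1]
    depth: HxW float array, smaller values = closer to the camera
    """
    mask = (depth < near_threshold)[..., None]  # HxWx1, broadcast over RGB
    return np.where(mask, foreground, background)

# Toy 4x4 images: background all zeros, foreground all ones.
bg = np.zeros((4, 4, 3))
fg = np.ones((4, 4, 3))
# Left half "near" (depth 0.2), right half "far" (depth 0.9).
depth = np.concatenate([np.full((4, 2), 0.2), np.full((4, 2), 0.9)], axis=1)

out = composite_by_depth(bg, fg, depth)
```

In the toy example the foreground wins on the near (left) half and the background shows through on the far (right) half; in practice the depth map comes from a monocular depth model run on the foreground render.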
stable-diffusion-webui-composable-lora
- Multi-diffusion with LoRAs? "Use Latent Couple with Composable LoRA instead."
- Any hope of Composable LoRA getting fixed? "There's an updated fork here: https://github.com/a2569875/stable-diffusion-webui-composable-lora"
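The Latent Couple pairing mentioned above rests on one core idea: each region of the image gets its own prompt, each prompt produces its own prediction, and the predictions are blended with per-region weight masks. A hedged NumPy illustration of that blending (a sketch of the technique, not the extension's code; small arrays stand in for per-prompt noise predictions):

```python
# Hypothetical illustration of region-weighted blending: each prompt's
# prediction is weighted by its region mask, with masks normalized so the
# weights sum to 1 at every pixel.
import numpy as np

def blend_predictions(preds, masks):
    """Blend per-prompt predictions with normalized region masks.

    preds: list of HxW arrays (stand-ins for per-prompt predictions)
    masks: list of HxW non-negative weight arrays, one per prompt
    """
    total = np.sum(masks, axis=0)  # per-pixel sum of weights
    combined = np.zeros_like(preds[0], dtype=float)
    for pred, mask in zip(preds, masks):
        combined += pred * (mask / total)  # normalize weights per pixel
    return combined

# Two prompts, left/right split on a 2x4 "latent": predictions 1.0 and 3.0.
pred_a, pred_b = np.full((2, 4), 1.0), np.full((2, 4), 3.0)
mask_a = np.concatenate([np.ones((2, 2)), np.zeros((2, 2))], axis=1)
mask_b = 1.0 - mask_a

out = blend_predictions([pred_a, pred_b], [mask_a, mask_b])
```

Composable LoRA then extends the same per-region idea to LoRA weights, so each sub-prompt's LoRA only influences its own region's prediction.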
What are some alternatives?
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
sd-webui-stablesr - StableSR for Stable Diffusion WebUI - Ultra High-quality Image Upscaler
stable-diffusion-webui-distributed - Chains stable-diffusion-webui instances together to facilitate faster image generation.
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
depthmap2mask - Create masks out of depthmaps in img2img
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
sdweb-merge-board - Multi-step automation merge tool. Extension/Script for Stable Diffusion UI by AUTOMATIC1111 https://github.com/AUTOMATIC1111/stable-diffusion-webui
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
sd-webui-reactor-force - Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111, SD.Next, Cagliostro) with NVIDIA GPU Support
sd-webui-segment-everything - Segment Anything for Stable Diffusion Webui [Moved to: https://github.com/continue-revolution/sd-webui-segment-anything]
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
batch-face-swap - Automatically detects faces and replaces them