| | multi-subject-render | Lora-for-Diffusers |
|---|---|---|
| Mentions | 18 | 2 |
| Stars | 359 | 707 |
| Growth | - | - |
| Activity | 2.5 | 1.0 |
| Latest commit | about 1 year ago | about 1 month ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multi-subject-render
-
Creating pictures of multiple people with distinct faces
You can use the multi-subject renderer: https://github.com/Extraltodeus/multi-subject-render.git
-
Can I use SD to generate group pictures (of say, me and my cousin, or me and multiple cousins)?
Get this Extension, and as always, please read the docs to avoid problems.
-
Find it hard to tune my prompt for more than 2 characters
There's also a script/extension, https://github.com/Extraltodeus/multi-subject-render, but it's fiddly to get working right, and I think the other workflow is faster.
- Textual Inversion: TI TLDR for the Lazy. How to Make Fake People: Simple TI Training Using 6 Images and Very Low Settings. Bonus 1: How to Make Fake People that Look Like Anything You Want. Bonus 2: Why 1980s Nightcrawler Doesn't Care About Your Prompts. With Unedited Image Samples.
-
How to do Multiple chars in 1 image
There are some ideas to create multiple different subjects, such as this extension for automatic (https://github.com/Extraltodeus/multi-subject-render), or Area Composition if you are using ComfyUI (https://comfyanonymous.github.io/ComfyUI_examples/area_composition/).
- How to detail 2 objects, each with its own qualities in prompt?
- Ladies in sexy pajamas
- Uhhhhh
-
Tips for creating picture with multiple characters?
You can do it with https://github.com/Extraltodeus/multi-subject-render, but I don't really know how to use it.
-
What are you struggling to do?
There is an extension called multi-subject-render that allows you to provide one prompt for the background and a second prompt for the foreground.
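The core idea behind the extension — render the foreground subjects and the background from separate prompts, then composite the subjects over the background using an estimated depth map — can be sketched roughly as follows. This is an illustrative reconstruction, not the extension's actual code: `composite_by_depth` is a hypothetical helper, and it assumes a depth convention where smaller values mean "nearer" (adjust for your depth model, e.g. MiDaS outputs inverse depth).

```python
import numpy as np

def composite_by_depth(background, foreground, depth, threshold=0.5):
    """Paste foreground pixels over the background wherever the
    foreground's estimated depth marks them as 'near' (below the
    threshold). Images are HxWx3 float arrays in [0, 1]; depth is HxW."""
    mask = depth < threshold        # near pixels belong to the subject
    mask3 = mask[..., None]         # broadcast the mask across RGB channels
    return np.where(mask3, foreground, background)

# Toy 2x2 example: the left column is 'near' (depth 0.2), so the white
# foreground survives there; the right column keeps the black background.
bg = np.zeros((2, 2, 3))
fg = np.ones((2, 2, 3))
depth = np.array([[0.2, 0.9],
                  [0.2, 0.9]])
out = composite_by_depth(bg, fg, depth)
```

In the real extension the two images come from separate Stable Diffusion passes (one prompt per pass) and the mask from a depth-estimation model; the compositing step itself is this kind of per-pixel selection.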
Lora-for-Diffusers
-
Question for LoRA pros
Check this out: https://github.com/haofanwang/Lora-for-Diffusers. I assume you created your .safetensors LoRA model with a GUI. You can't directly load such safetensors with `pipe.unet.load_attn_procs(model_path)` (at least for now). Just check the link.
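The reason those GUI-trained checkpoints don't load directly is that they use A1111/kohya-style key names, while diffusers expects dotted module paths. A minimal sketch of the renaming step is below; `convert_key` and `COMPOUND_NAMES` are hypothetical names, and a real converter (like the linked repo) must also handle text-encoder keys and the alpha scaling factors.

```python
# Module names that legitimately contain underscores, which a naive
# underscore-to-dot substitution would otherwise break apart.
COMPOUND_NAMES = [
    "down_blocks", "up_blocks", "mid_block", "transformer_blocks",
    "to_q", "to_k", "to_v", "to_out", "proj_in", "proj_out",
]

def convert_key(kohya_key: str) -> str:
    """Rename a kohya-style UNet LoRA key, e.g.
    'lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight',
    into the dotted module path diffusers expects."""
    key = kohya_key.removeprefix("lora_unet_")
    # The module path is underscore-joined; dot-join it first ...
    module, _, rest = key.partition(".")
    module = module.replace("_", ".")
    # ... then restore the compound names that should keep their underscores.
    for name in COMPOUND_NAMES:
        module = module.replace(name.replace("_", "."), name)
    return module + "." + rest
```

With the keys renamed (and weights rescaled by alpha/rank), the tensors can be merged into the pipeline's UNet state dict — which is essentially what the conversion scripts in the linked repo automate.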
-
LoRA? ControlNet? T2I-Adapter? All in Diffusers now!
(1) [LoRA for diffusers]: https://github.com/haofanwang/Lora-for-Diffusers
What are some alternatives?
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
batch-face-swap - Automatically detects faces and replaces them
stable-diffusion-webui-distributed - Chains stable-diffusion-webui instances together to facilitate faster image generation.
stable-diffusion-webui-stable-horde - Stable Horde client for AUTOMATIC1111's Stable Diffusion Web UI
depthmap2mask - Create masks out of depthmaps in img2img
zero123plus - Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
sdweb-merge-board - Multi-step automation merge tool. Extension/Script for Stable Diffusion UI by AUTOMATIC1111 https://github.com/AUTOMATIC1111/stable-diffusion-webui
ControlNet-for-Diffusers - Transfer the ControlNet with any basemodel in diffusers🔥
sd-webui-reactor-force - Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111, SD.Next, Cagliostro) with NVIDIA GPU Support
T2I-Adapter-for-Diffusers - Transfer the T2I-Adapter with any basemodel in diffusers🔥
MiDaS - Code for robust monocular depth estimation described in "Ranftl et. al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
stable-diffusion-webui-daam - DAAM for Stable Diffusion Web UI