# stable-diffusion-webui-composable-lora vs paint-with-words-sd

| | stable-diffusion-webui-composable-lora | paint-with-words-sd |
|---|---|---|
| Mentions | 16 | 13 |
| Stars | 461 | 618 |
| Growth | - | - |
| Activity | 4.2 | 5.2 |
| Latest commit | about 1 year ago | about 1 year ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
- **Stars** - the number of stars that a project has on GitHub.
- **Growth** - month-over-month growth in stars.
- **Activity** - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
## stable-diffusion-webui-composable-lora

- **Discussion Thread**
  They do, it just needs more work. Stable Diffusion + Latent Couple + Composable LoRA lets you have an Elon model and a Mark model in different parts of the image, with no Elon/Mark-ness dominating all the subjects.
- **ControlNet Reference-Only problems**
  https://github.com/opparco/stable-diffusion-webui-two-shot https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper https://github.com/opparco/stable-diffusion-webui-composable-lora https://github.com/thomasasfk/sd-webui-aspect-ratio-helper
- **[STABLE DIFFUSION] I need help with separating characters. (LORA)**
  Possibly with something like the Latent Couple extension + Composable LoRA, cf. this video.
- **Make a Lora only apply to specific aspect of my image?**
  I think it works with Regional Prompting in combination with Composable LoRA (both standard A1111 extensions). Never tried it myself, though.
- **Is there some way to make Latent Couple + Composable Lora work on Vlad Automatic webUI?**
  So I noticed that Latent Couple and Composable LoRA don't work on Vlad Automatic (something to do with those extensions being made for an outdated version of the A1111 webUI).
- **In the prompt, how to limit the description to the outfit or an object without affecting the background?**
  Composable LoRA enables using the AND keyword (composable diffusion) to limit LoRAs to subprompts. Useful when paired with the Latent Couple extension.
- **What is the extension that makes the negative prompt not affect a LoRA? Like when I type "man", it affects just the checkpoint, not the LoRA**
  It's https://github.com/opparco/stable-diffusion-webui-composable-lora
- **The Joker 2019 Cosplay Lora tests WIPs**
  I tried using Composable LoRA, but it is hardcoded to look for the "AND", plus it has certain issues that haven't been resolved.
- **Can anyone suggest ways I can use several trained characters in one picture (using LORA)?**
  The only good suggestion here is the Composable LoRA extension. I think you need to use it together with the Latent Couple extension.
- **Using multiple Loras without blending faces**
  Oh, this is super easy: just grab the Latent Couple extension and the Composable LoRA extension. Using these you can define multiple sections for different characters, then have a prompt/LoRA unique to each section/character.
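The Latent Couple + Composable LoRA workflow described in the posts above can be sketched as a prompt. This is a hypothetical AUTOMATIC1111 prompt (the LoRA names `elon_v1` and `mark_v1` are made up for illustration): Latent Couple maps each AND-separated subprompt to its own region of the canvas, and Composable LoRA restricts each `<lora:...>` tag to the subprompt it appears in.

```text
a photo of two men standing side by side, detailed background
AND a man on the left, <lora:elon_v1:0.8>
AND a man on the right, <lora:mark_v1:0.8>
```

Without Composable LoRA, both LoRAs would apply to the whole image and the faces would blend; with it, each LoRA only influences its own AND clause and therefore its own region.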
## paint-with-words-sd

- **paint with words with loras and multicontrolnet (will pay if needed)**
  I am referring to this, by the way: https://github.com/cloneofsimo/paint-with-words-sd
- **More control than ControlNet - code is out for MultiDiffusion Region Control, a prompt on each mask**
  This essentially supercharges the NVIDIA eDiffi / SD paint-with-words attempts done previously for the same thing.
  - "Segmentation" ControlNet preprocessor options
- **I figured out a way to apply different prompts to different sections of the image with regular Stable Diffusion models and it works pretty well.**
  There is a Stable Diffusion paint-with-words repo on GitHub which probably does exactly this, but no UI was ever built: https://github.com/cloneofsimo/paint-with-words-sd
- **What do you think will be added/created next?**
  Personally, I want to see the eDiffi paint-with-words Stable Diffusion extension https://github.com/cloneofsimo/paint-with-words-sd/commit/789419e3a34f43a1454df5a940020cfa531fbc63 that cloneofsimo was working on before he stopped.
  - Will models have to be retrained for when this feature is eventually added into SD?
- **Paint with words (aka NVIDIA eDiff-I)**
  Just found there is a repo for an NVIDIA eDiff-I-style img2img workflow for Stable Diffusion. For those unfamiliar, this lets you specify where parts of your text prompt should be placed in the image, giving you much greater control over the composition.
- **Different Models = Different prompts?**
  Paint-with-Words might eventually allow something along those lines, but it's a bit awkward to use now, and AFAIK you still get bleed-through between multiple human subjects.
- **eDiff-I: A new Text-to-Image Diffusion Model with Ensemble of Expert Denoisers**
  Someone attempted something like paint with words, but I think NVIDIA's version is better looking.
  - Paint with words? What's next? Hope this becomes a module in AUTOMATIC1111 soon.
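The core idea behind paint-with-words (as described in the eDiff-I discussion above) is to bias cross-attention so that tokens the user "painted" onto a region get extra attention weight at those pixels. Below is a minimal NumPy sketch of that mechanism; the function name, shapes, and the `weight * scores.std()` bias scale are illustrative assumptions, not the actual API of `cloneofsimo/paint-with-words-sd`.

```python
import numpy as np

def paint_with_words_attention(scores, region_masks, weight=1.0):
    """Boost cross-attention scores for tokens tied to painted regions.

    scores       : (n_pixels, n_tokens) raw attention logits
    region_masks : dict mapping token index -> (n_pixels,) boolean mask
                   of the pixels the user painted for that token
    weight       : how strongly to bias attention toward the mask
    """
    biased = scores.copy()
    for tok, mask in region_masks.items():
        # Add a positive bias where this token's region was painted,
        # so the softmax favors that token at those pixels.
        biased[mask, tok] += weight * scores.std()
    # Softmax over tokens, independently per pixel.
    e = np.exp(biased - biased.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy example: 4 pixels, 3 prompt tokens; token 2 is painted on pixels 0-1.
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 3))
masks = {2: np.array([True, True, False, False])}
attn = paint_with_words_attention(scores, masks, weight=3.0)
```

In a real diffusion pipeline this reweighting would be applied inside every cross-attention layer of the UNet, with the masks downsampled to each layer's spatial resolution.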
## What are some alternatives?

- **adetailer** - Auto detecting, masking and inpainting with detection model.
- **ComfyUI** - The most powerful and modular stable diffusion GUI, API and backend with a graph/nodes interface.
- **sd-webui-regional-prompter** - Set prompt to divided region.
- **openOutpaint** - Local offline JavaScript and HTML canvas outpainting gizmo for the Stable Diffusion webUI API 🐠
- **ComfyUI_Cutoff** - Cutoff implementation for ComfyUI.
- **LECO** - Low-rank adaptation for Erasing COncepts from diffusion models.
- **stable-diffusion-webui-two-shot** - Latent Couple extension (two shot diffusion port).
- **Rerender_A_Video** - [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation.
- **Stable-Diffusion-Webui-Civitai-Helper** - Stable Diffusion webUI extension for Civitai, to manage your models much more easily.
- **openOutpaint-webUI-extension** - Direct A1111 webUI extension for openOutpaint.
- **sd-webui-latent-couple** - Latent Couple extension (two shot diffusion port).
- **daam** - Diffusion attentive attribution maps for interpreting Stable Diffusion.