paint-with-words-sd vs openOutpaint-webUI-extension

| | paint-with-words-sd | openOutpaint-webUI-extension |
|---|---|---|
| Mentions | 13 | 12 |
| Stars | 618 | 387 |
| Growth | - | - |
| Activity | 5.2 | 5.8 |
| Latest commit | about 1 year ago | 16 days ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
paint-with-words-sd
- paint with words with loras and multicontrolnet (will pay if needed)
  I am referring to this, btw: https://github.com/cloneofsimo/paint-with-words-sd
- More control than ControlNet - code is out for MultiDiffusion Region Control, a prompt on each mask
  This essentially supercharges the earlier NVIDIA eDiffi / SD paint-with-words attempts at the same thing.
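The region-control idea described above can be sketched in a few lines, assuming a toy stand-in for the diffusion model: each painted mask carries its own prompt, the denoiser runs once per prompt, and the per-region predictions are blended by the masks. `denoise`, the scalar "embeddings", and all shapes here are illustrative assumptions, not the actual MultiDiffusion API.

```python
import numpy as np

def denoise(latent, prompt_embedding):
    # Placeholder: a real implementation would call the diffusion UNet here.
    return latent * 0.9 + prompt_embedding * 0.1

def region_controlled_step(latent, prompts, masks):
    """Blend one denoising step across painted regions.

    latent  : (H, W) array (single-channel for simplicity)
    prompts : list of toy scalar "embeddings", one per region
    masks   : list of (H, W) binary arrays, one per prompt
    """
    total = np.zeros_like(latent)
    weight = np.zeros_like(latent)
    for prompt, mask in zip(prompts, masks):
        total += mask * denoise(latent, prompt)
        weight += mask
    # Where masks overlap, average the predictions; where nothing was
    # painted, fall back to the unmodified latent.
    uncovered = weight == 0
    total = np.where(uncovered, latent, total)
    weight = np.where(uncovered, 1.0, weight)
    return total / weight

latent = np.ones((4, 4))
left = np.zeros((4, 4)); left[:, :2] = 1    # e.g. a "sky" region
right = np.zeros((4, 4)); right[:, 2:] = 1  # e.g. a "forest" region
out = region_controlled_step(latent, [0.0, 1.0], [left, right])
```

The key design point is that each prompt only ever influences the pixels its mask covers, which is why the approach avoids one prompt bleeding into another region.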
- "Segmentation" ControlNet preprocessor options
- I figured out a way to apply different prompts to different sections of the image with regular Stable Diffusion models, and it works pretty well.
  There is a Stable Diffusion paint-with-words repo on GitHub that probably does exactly this, but it never got a UI: https://github.com/cloneofsimo/paint-with-words-sd
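A minimal sketch of the paint-with-words mechanism that repo builds on: before the softmax in cross-attention, boost the score of each text token for the pixels inside the region the user painted for it. The function names, the weight scale `w`, and the tiny shapes are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def paint_with_words_attention(scores, token_masks, w=1.0):
    """scores      : (num_pixels, num_tokens) raw cross-attention logits
       token_masks : (num_pixels, num_tokens) 1 where a pixel lies in the
                     region painted for that token, else 0
    """
    # Adding w inside the painted region pulls attention toward the
    # token's assigned area after the softmax.
    return softmax(scores + w * token_masks, axis=-1)

scores = np.zeros((4, 2))  # 4 pixels, 2 tokens, uniform logits
masks = np.zeros((4, 2))
masks[:2, 0] = 1           # token 0 painted on the first two pixels
masks[2:, 1] = 1           # token 1 painted on the rest
attn = paint_with_words_attention(scores, masks, w=2.0)
```

Because only the attention logits are shifted, this works with unmodified Stable Diffusion checkpoints, which is why no retraining is needed for this kind of control.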
- What do you think will be added/created next?
  Personally, I want to see the eDiffi paint-with-words Stable Diffusion extension that cloneofsimo was working on before he stopped: https://github.com/cloneofsimo/paint-with-words-sd/commit/789419e3a34f43a1454df5a940020cfa531fbc63
- Will models have to be retrained when this feature is eventually added to SD?
- Paint with words (aka NVIDIA eDiff-I)
  Just found a repo for an NVIDIA eDiff-I-style img2img workflow for Stable Diffusion. For those unfamiliar, this lets you specify where parts of your text prompt should be placed in the image, giving you much greater control over the composition.
- Different Models = Different prompts?
  Paint-with-Words might eventually allow something along those lines, but it's a bit awkward to use now, and AFAIK you still get bleed-through between multiple human subjects.
- eDiff-I: A new Text-to-Image Diffusion Model with Ensemble of Expert Denoisers
  Someone attempted something like paint-with-words, but I think NVIDIA's version looks better.
- Paint with words? What is next? Hope this becomes a module in AUTOMATIC1111 soon.
openOutpaint-webUI-extension
- We say this too often, but PS+SD is a game changer.
  We do: https://github.com/zero01101/openOutpaint-webUI-extension. But I think Photoshop just incorporated it, and people who are used to Photoshop are excited about it...
- Installing A1111 extension (openOutpaint) on an M2 MacBook
  `git clone https://github.com/zero01101/openOutpaint-webUI-extension extensions/openOutpaint-webUI-extension`
- Outpainting - What is the best model?
- Week 2: Outpainting using Inpainting
- How to "zoom out" or add what was outside the frame (i.e., add in more around what was in the original image)?
  Try this extension: https://github.com/zero01101/openOutpaint-webUI-extension
- I figured out a way to apply different prompts to different sections of the image with regular Stable Diffusion models, and it works pretty well.
  There is an extension version that integrates it with the rest of the webUI, allowing you to send things back and forth. It still has its quirks, but it works pretty well: https://github.com/zero01101/openOutpaint-webUI-extension
- I can't get openOutpaint to work correctly. It will generate images for anything within the "dream" box but refuses to outpaint. Any thoughts on what's going on?
  Hi, can you disable all extensions and try again? We have investigated this issue here: https://github.com/zero01101/openOutpaint-webUI-extension/issues/3. It usually seems to be caused by browser extensions such as Dark Reader and DuckDuckGo privacy tools.
- Discussion: what do you use SD for?
  To make proper stories, you also need to learn to use InvokeAI or https://github.com/zero01101/openOutpaint-webUI-extension
- OpenOutpaint - a better way to do inpainting & outpainting in Automatic1111!
  Still trying to get to the bottom of this one; if you'd be so kind as to check this thread, please chime in with:
- Great news: Automatic1111 Photoshop Stable Diffusion Plugin, free and open source (check the comment)
What are some alternatives?
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
openOutpaint - local offline javascript and html canvas outpainting gizmo for stable diffusion webUI API 🐠
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
LECO - Low-rank adaptation for Erasing COncepts from diffusion models.
Rerender_A_Video - [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation
gimp-stable-boy - GIMP plugin for AUTOMATIC1111's Stable Diffusion WebUI
daam - Diffusion attentive attribution maps for interpreting Stable Diffusion.
Auto-Photoshop-StableDiffusion-Plugin - A user-friendly plug-in that makes it easy to generate stable diffusion images inside Photoshop using either Automatic or ComfyUI as a backend.
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.