| | paint-with-words-sd | openOutpaint |
|---|---|---|
| Mentions | 13 | 26 |
| Stars | 618 | 486 |
| Growth | - | - |
| Activity | 5.2 | 8.0 |
| Latest commit | about 1 year ago | 16 days ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
paint-with-words-sd
-
paint with words with loras and multicontrolnet (will pay if needed)
I am referring to this btw: https://github.com/cloneofsimo/paint-with-words-sd
-
More control than ControlNet - code is out for MultiDiffusion Region Control, a prompt on each mask
This essentially supercharges the Nvidia eDiffi / SD paint-with-words attempts done for the same thing previously.
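The "prompt on each mask" idea can be illustrated with a minimal sketch: run one denoising prediction per region prompt, then mask-weight and average the results, MultiDiffusion-style. This is an illustrative toy (the `denoise_fns` standing in for per-prompt U-Net predictions are hypothetical), not the repo's actual implementation.

```python
import numpy as np

def region_blend_step(latent, masks, denoise_fns):
    """One MultiDiffusion-style region step (illustrative sketch):
    each region gets its own denoiser (standing in for a per-prompt
    model prediction), and the outputs are mask-weighted and averaged."""
    num = np.zeros_like(latent)
    den = np.zeros_like(latent)
    for mask, fn in zip(masks, denoise_fns):
        num += mask * fn(latent)   # prediction conditioned on this region's prompt
        den += mask                # accumulate mask coverage per pixel
    # Where no mask covers a pixel, fall back to the unchanged latent.
    return np.where(den > 0, num / np.maximum(den, 1e-8), latent)
```

Pixels covered by several masks get a smooth average of the per-prompt predictions, which is what keeps region boundaries from showing hard seams.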
- "Segmentation" ControlNet preprocessor options
-
I figured out a way to apply different prompts to different sections of the image with regular Stable Diffusion models and it works pretty well.
There is stable diffusion paint with words GitHub which probably does exactly this, but no UI ever: https://github.com/cloneofsimo/paint-with-words-sd
-
What do you think will be added/created next?
personally I want to see the eDiffi paint-with-words Stable Diffusion extension https://github.com/cloneofsimo/paint-with-words-sd/commit/789419e3a34f43a1454df5a940020cfa531fbc63 that cloneofsimo was working on before he stopped
- Will models have to be retrained for when this feature is eventually added into SD?
-
Paint with words (aka NVIDIA eDiff-I)
Just found there is a repo for an NVIDIA eDiff-I style img2img workflow for Stable Diffusion. For those unfamiliar, this lets you specify where parts of your text prompt should be placed in the image giving you much greater control on the composition e.g.
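The eDiff-I-style trick behind this is to add a user-drawn spatial bias to the cross-attention scores, so the tokens you "paint" attend more strongly to the pixels you painted them on. A hedged numpy sketch of that idea (shapes and the `weight` parameter are illustrative assumptions, not the repo's API):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def paint_with_words_attention(Q, K, token_masks, weight=1.0):
    """Cross-attention with an additive spatial bias, in the spirit of
    eDiff-I's paint-with-words.
    Q: (pixels, d) image queries; K: (tokens, d) text keys;
    token_masks: (pixels, tokens), 1 where a token's user mask covers a pixel."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)           # standard scaled dot-product scores
    scores = scores + weight * token_masks  # boost painted token/pixel pairs
    return softmax(scores, axis=-1)         # rows still sum to 1
```

Because the bias is added before the softmax, unpainted regions behave exactly like vanilla cross-attention, which is why no retraining is needed.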
-
Different Models = Different prompts?
Paint-with-Words might eventually allow something along those lines, but it's a bit awkward to use now, and AFAIK you still get bleedthrough between multiple human subjects.
-
eDiff-I: A new Text-to-Image Diffusion Model with Ensemble of Expert Denoisers
someone attempted something like paint with words but I think Nvidia's version is better looking.
- Paint with words? What is next? Hope this gets to be a module in automatic 1111 soon.
openOutpaint
-
Question: Any tips for generating really wide wallpapers for multi-monitor setup?
I've used OpenOutpaint to build up my base image. It lets you generate an image of any size tile by tile, allowing you to change the prompt as needed or regenerate a tile. It also lets you regenerate a region via img2img to ensure coherency.
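The tile-by-tile approach described above boils down to covering a wide canvas with overlapping generation windows, so each new tile shares context with its neighbours. A hypothetical sketch of that tiling math (the function and its defaults are assumptions for illustration, not openOutpaint's actual code):

```python
def tile_coords(width, height, tile=512, overlap=64):
    """Top-left coordinates of overlapping tiles covering a canvas,
    stepping by (tile - overlap) so neighbouring generations overlap
    and stay coherent. Assumes the canvas is at least one tile wide/tall."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Snap a final tile to the far edge so the whole canvas is covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

For a 3440x1440 ultrawide this yields a single row of overlapping 512px tiles per band, each of which can be regenerated independently with its own prompt.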
- I Used Stable Diffusion and Dreambooth to Create an Art Portrait of My Dog
- Any extensions/add ons to zoom in on inpaint?
-
Has anyone found a way to make OpenOutpaint less laggy?
Write a bug report: https://github.com/zero01101/openOutpaint/issues
-
Inpainting workflow for high rez
Maybe I am wrong here, but since there is already an extension - https://github.com/zero01101/openOutpaint - why do some people still use basic img2img? It can draw amazing masks, zoom in and out, erase the mask, etc. Just try it out.
-
InvokeAI 2.3.0 update. Safetensor and Diffusers Support
It was a unique, great tool when it first came out. But now I use the openOutpaint extension for A1111 - https://github.com/zero01101/openOutpaint/wiki/Manual . It's a bit less comfortable to use, but I can use it inside A1111, and that's a plus. I also tried the Stable.Art extension for Photoshop - https://github.com/isekaidev/stable.art . I like that workflow because it's in Photoshop and I'm used to its interface and tools; it even forced me to upgrade my PS to the latest version. But for some reason it generates worse-quality images than A1111 and openOutpaint, and it only has basic A1111 functions. I've only used it for a couple of days, though, so I need to compare more. :-)
- Someone asked me for a detailed inpainting guide; hope this helps you get some ideas.
- I figured out a way to apply different prompts to different sections of the image with regular Stable Diffusion models and it works pretty well.
-
Looking for that Infinite painting with AI program
OpenOutpaint comes as a standalone as well as an extension for AUTOMATIC1111.
-
OpenOutpaint - a better way to do inpainting & outpainting in Automatic1111!
there is one, but having my sister-in-law "beta test" it as someone who'd never heard of stable diffusion before, it really needs work ;)
What are some alternatives?
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
LECO - Low-rank adaptation for Erasing COncepts from diffusion models.
openOutpaint-webUI-extension - direct A1111 webUI extension for openOutpaint
Rerender_A_Video - [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
stablediffusion-infinity - Outpainting with Stable Diffusion on an infinite canvas
daam - Diffusion attentive attribution maps for interpreting Stable Diffusion.
Hua - Hua is an AI image editor with Stable Diffusion (and more).
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
a1111-sd-webui-haku-img - An Image utils extension for A1111's sd-webui