multidiffusion-upscaler-for-automatic1111 vs sd-webui-regional-prompter

| | multidiffusion-upscaler-for-automatic1111 | sd-webui-regional-prompter |
|---|---|---|
| Mentions | 83 | 60 |
| Stars | 4,459 | 1,382 |
| Growth | - | - |
| Activity | 7.8 | 8.5 |
| Latest commit | about 1 month ago | about 1 month ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multidiffusion-upscaler-for-automatic1111
- Stable Diffusion can't stop generating extra torsos, even with negative prompt. Any suggestions?
- Reduce Or Remove The Use Of RAM In Image Generation
Use Tiled VAE; it will save VRAM: pkuliyi2015/multidiffusion-upscaler-for-automatic1111 (Tiled Diffusion and VAE optimizations, github.com)
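The memory saving comes from never decoding the whole latent at once: the canvas is split into overlapping tiles and each tile is decoded on its own. A minimal sketch of the tiling step, assuming hypothetical helper names; the extension's real implementation additionally blends overlapping tile borders:

```python
def tile_spans(length, tile, overlap):
    """Start/stop pairs covering `length` with `tile`-sized windows
    that overlap by `overlap`; the last window is shifted to fit."""
    if length <= tile:
        return [(0, length)]
    step = tile - overlap
    starts = list(range(0, length - tile, step)) + [length - tile]
    return [(s, s + tile) for s in starts]

def tile_grid(height, width, tile=96, overlap=32):
    """All (y0, y1, x0, x1) tiles for a height x width latent canvas.
    Decoding one tile at a time keeps peak VRAM proportional to the
    tile size rather than to the full image size."""
    return [(y0, y1, x0, x1)
            for (y0, y1) in tile_spans(height, tile, overlap)
            for (x0, x1) in tile_spans(width, tile, overlap)]
```

For a 2048x2048 image (a 256x256 latent at SD's 8x VAE factor), `tile_grid(256, 256)` yields 16 tiles, each small enough to decode on a modest GPU.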
- How do I fix these boxes/lines appearing while using Ultimate SD upscale + CN tiles? All the details are in my comment below. Please help. Many thanks!
My favorite solution is to not use Ultimate SD upscale and instead use multidiffusion-upscaler.
- Is there any way to purge the VRAM of your card after getting OOM'd other than restarting the Web UI?
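Short of restarting, the usual answer for a PyTorch-based UI is to drop Python references, run the garbage collector, and ask CUDA's caching allocator to release unused blocks. A best-effort sketch (`free_vram` is a hypothetical helper, not a built-in webui command):

```python
import gc

def free_vram():
    """Best-effort VRAM purge for a running PyTorch process.
    Collects unreachable Python objects first (so tensor references
    actually die), then returns cached CUDA blocks to the driver."""
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # release unused cached blocks
            torch.cuda.ipc_collect()   # clean up inter-process handles
    except ImportError:
        pass  # torch not installed; nothing GPU-side to free
```

Note this only frees memory that is no longer referenced; a model still loaded on the GPU keeps its allocation until it is moved to CPU or deleted.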
- Not able to generate more than 400×400 images
Sure, I personally use Tiled Diffusion (https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111); it works like a charm. Also use ADetailer for faces if needed.
- GTX 1070 slow render speeds
What worked for my 1080 was using TiledVAE and turning down the quality of my previews - I don't pay much attention to it/s but it's definitely faster than using --medvram, and now I can handle batches and large resolutions without things exploding on me.
- Initial release of A8R8 (Alternate Reality), an opinionated interface for Stable Diffusion image generation; works with A1111. Docker installation included. Open source and runs locally!
I would highly recommend adding https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 to your A1111 installation, TiledVAE is enabled automatically under the hood in A8R8; this will allow you to get even larger generations before getting an out of memory error. You'll get a Tiled Diffusion checkbox with some reasonable hardcoded defaults as well.
- I love the Tile ControlNet, but it's really easy to overdo. Look at this monstrosity of tiny detail I made by accident.
- Can you generate 2048x2048 images with an 8GB GPU?
Use Tiled VAE https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
- SDXL 0.9 vs SD 2.1 vs SD 1.5 (all base models) - Batman taking a selfie in a jungle, 4k
That's weird. 10GB should allow you to hires to 2048x2048 at least. Use Tiled VAE extension https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 that will allow you to go even beyond that.
sd-webui-regional-prompter
- Regional Prompting doesn't seem to be working a lot of the time
So I'm using the Regional Prompter extension https://github.com/hako-mikan/sd-webui-regional-prompter
- Dalle-3 Examples
- Stable Diffusion 1.5 newbie question about creating an image with 2 characters
- "In summary, Stable Diffusion doesn't really care about commas. But you can use them to organize your prompts for your own orderliness." (Link to quote below.) So... is there a way to make SD care? To make it "understand" which words we put together to create meaning?
In Automatic1111, this extension can define a region of the image where a prompt should apply: https://github.com/hako-mikan/sd-webui-regional-prompter
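Under the hood, regional prompting roughly amounts to running the denoiser once per regional prompt and merging the predictions with region masks. A toy sketch of that merge step, assuming masks that sum to 1 at every pixel (`regional_blend` is illustrative, not the extension's actual code):

```python
def regional_blend(preds, masks):
    """Combine per-region noise predictions into one, latent-couple
    style: each region's prediction contributes only where its mask
    is set. `preds` and `masks` are equal-length lists of same-shaped
    2-D float grids; masks should sum to 1.0 at every position."""
    h, w = len(preds[0]), len(preds[0][0])
    out = [[0.0] * w for _ in range(h)]
    for pred, mask in zip(preds, masks):
        for y in range(h):
            for x in range(w):
                out[y][x] += mask[y][x] * pred[y][x]
    return out
```

With a left-half mask for prompt A and a right-half mask for prompt B, each prompt's words can only influence its own side of the image, which is exactly the "make SD understand which words go together" effect being asked for.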
- Train SD for CAPTION WRITING? I'm tired of uploading hairstyle pics and got "male public hair"
- How to fix the issue of generating two guys when the aspect ratio isn't square?
- A little bit of party after fighting each other in Smash Bros (txt2img, ControlNet, Regional Prompter, ADetailer)
Second, install Regional Prompter and ADetailer in the Automatic1111 web UI. Next, go to Settings > ADetailer and change "sort bounding boxes" from "none" to "left and right". This means ADetailer will inpaint our subjects starting from the very left and moving right, allowing greater control over what we want.
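The "left and right" setting just changes the order in which detected boxes are visited, so the nth separator-delimited ADetailer prompt lands on the nth subject from the left. A sketch of that ordering (both helpers are hypothetical, not ADetailer's actual code):

```python
def sort_boxes_left_to_right(boxes):
    """Order detected boxes (x0, y0, x1, y1) by their left edge so
    inpainting visits subjects in on-screen left-to-right order."""
    return sorted(boxes, key=lambda b: b[0])

def assign_prompts(boxes, prompts):
    """Pair each left-to-right box with the matching per-subject
    prompt; extra boxes fall back to the last prompt."""
    ordered = sort_boxes_left_to_right(boxes)
    return [(box, prompts[min(i, len(prompts) - 1)])
            for i, box in enumerate(ordered)]
```

Without the sort, detection order depends on the model and can assign the wrong face prompt to the wrong character.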
- What are some must-have/fun extensions or modules?
- How to control a scene?
You can use ControlNets to control composition in various ways. You can use extensions like multidiffusion upscaler and regional prompter to control the layout of a scene. You can also inpaint details into a scene with the arrangement you want.
- Is there a way to guarantee one model in the image?
What are some alternatives?
ultimate-upscale-for-automatic1111
sd-webui-latent-couple - Latent Couple extension (two shot diffusion port)
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
stable-diffusion-webui-composable-lora - This extension replaces the built-in LoRA forward procedure.
ComfyUI_TiledKSampler - Tiled samplers for ComfyUI
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
mixture-of-diffusers - Mixture of Diffusers for scene composition and high resolution image generation