multidiffusion-upscaler-for-automatic1111
sd-webui-cutoff
| | multidiffusion-upscaler-for-automatic1111 | sd-webui-cutoff |
|---|---|---|
| Mentions | 83 | 45 |
| Stars | 4,459 | 1,167 |
| Growth | - | - |
| Activity | 7.8 | 5.9 |
| Last commit | about 1 month ago | 6 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multidiffusion-upscaler-for-automatic1111
- Stable Diffusion can't stop generating extra torsos, even with negative prompt. Any suggestions?
- Reduce Or Remove The Use Of RAM In Image Generation
Use Tiled VAE, it will save VRAM: pkuliyi2015/multidiffusion-upscaler-for-automatic1111: Tiled Diffusion and VAE optimization, licensed under CC BY-NC-SA 4.0 (github.com)
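The core idea behind Tiled VAE is simple: decode the latent one tile at a time, so only a single tile's activations live in memory at once. Below is a minimal sketch of that pattern — an illustrative stand-in, not the extension's actual code (which also handles tile borders and normalization across tiles):

```python
import numpy as np

def decode_tiled(latent, decode_fn, tile=64):
    """Process a large 2-D latent in tiles so that only one tile's
    intermediate activations exist at a time (sketch only)."""
    h, w = latent.shape
    out = np.empty_like(latent)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = latent[y:y+tile, x:x+tile]
            out[y:y+tile, x:x+tile] = decode_fn(patch)
    return out

# Stand-in "decoder": just scales values; a real VAE maps latents to pixels.
big = np.ones((256, 256))
result = decode_tiled(big, lambda t: t * 2.0, tile=64)
print(result.shape)  # (256, 256)
```

Peak memory now depends on the tile size, not the full image size, which is why an 8 GB card can decode resolutions it could not handle in one pass.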
- How do I fix these boxes/lines appearing while using Ultimate SD Upscale + CN tiles? All the details are in my comment below. Please help, many thanks!
My favorite solution is to not use Ultimate SD Upscale and instead use multidiffusion-upscaler.
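Those boxes/lines are tile seams. MultiDiffusion-style upscalers mitigate them by overlapping tiles and averaging the overlapping outputs, so no hard boundary survives into the final image. A toy sketch of that blending step (a hypothetical helper, not the extension's code):

```python
import numpy as np

def blend_tiles(shape, tiles, positions):
    """Average overlapping tile outputs: pixels covered by several
    tiles get the mean of all covering tiles, hiding seams (sketch)."""
    acc = np.zeros(shape)
    weight = np.zeros(shape)
    for t, (y, x) in zip(tiles, positions):
        h, w = t.shape
        acc[y:y+h, x:x+w] += t
        weight[y:y+h, x:x+w] += 1.0
    return acc / np.maximum(weight, 1.0)

# Two 4x4 tiles overlapping by 2 columns on a 4x6 canvas.
a = np.full((4, 4), 1.0)
b = np.full((4, 4), 3.0)
out = blend_tiles((4, 6), [a, b], [(0, 0), (0, 2)])
print(out[0])  # [1. 1. 2. 2. 3. 3.]
```

The overlap columns average to 2.0 — a smooth transition instead of an abrupt jump from 1.0 to 3.0, which is exactly the seam artifact being avoided.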
- Is there any way to purge the VRAM of your card after getting OOM'd, other than restarting the Web UI?
- Not able to generate more than 400×400 image
Sure, I personally use 'Tiled Diffusion' (https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111); works like a charm. Also use ADetailer for faces if it's needed.
- GTX 1070 slow render speeds
What worked for my 1080 was using TiledVAE and turning down the quality of my previews - I don't pay much attention to it/s but it's definitely faster than using --medvram, and now I can handle batches and large resolutions without things exploding on me.
- Initial release of A8R8 (Alternate Reality), an opinionated interface for Stable Diffusion image generation, works with A1111. Docker installation included. Open source and runs locally!
I would highly recommend adding https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 to your A1111 installation, TiledVAE is enabled automatically under the hood in A8R8; this will allow you to get even larger generations before getting an out of memory error. You'll get a Tiled Diffusion checkbox with some reasonable hardcoded defaults as well.
- I love the Tile ControlNet, but it's really easy to overdo. Look at this monstrosity of tiny detail I made by accident.
- Can you generate 2048x2048 images with an 8GB GPU?
Use Tiled VAE https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
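Back-of-envelope arithmetic shows why tiling helps here: VAE-decode activation memory scales with pixel count, so decoding one 512×512 tile at a time caps the peak far below a full 2048×2048 decode. The channel, layer, and precision numbers below are illustrative assumptions for the sketch, not the real VAE's values:

```python
def vae_decode_gib(height, width, channels=128, layers=10, bytes_per=2):
    """Very rough activation-memory estimate for a VAE decode at a
    given resolution (assumed channel/layer counts, fp16).
    Real usage varies; this only shows the scaling behaviour."""
    return height * width * channels * layers * bytes_per / 2**30

full = vae_decode_gib(2048, 2048)   # decode the whole image at once
tiled = vae_decode_gib(512, 512)    # decode one tile at a time
print(f"{full:.1f} GiB vs {tiled:.3f} GiB")  # 10.0 GiB vs 0.625 GiB
```

Whatever the true constants are, the ratio is fixed by the pixel counts: a 512×512 tile needs 1/16 the activation memory of a 2048×2048 pass, which is the difference between fitting in 8 GB and not.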
- SDXL 0.9 vs SD 2.1 vs SD 1.5 (All base models) - Batman taking a selfie in a jungle, 4k
That's weird. 10GB should allow you to hires to 2048x2048 at least. Use Tiled VAE extension https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 that will allow you to go even beyond that.
sd-webui-cutoff
- Strategies for avoiding keyword leakage
I've used the Cutoff extension before to help limit prompt bleeding. It won't work 100% of the time, but in my experience it does help produce cleaner results more frequently. Personally, I don't use it much in my workflow, but it's nice to have for the times when I need it.
- Colors on the wrong stuff
- Overwhelmed by new extension list in Web UI after updating to latest A1111
Cutoff - Prevents concepts like color from bleeding into other parts of the prompt. For example, you know how using "blue eyes" may also make clothing or hair blue? This mitigates that.
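The rough idea is to isolate each attribute's context: for each target token, build a prompt variant in which the *other* targets are padded out before encoding, so "blue" and "red" never share context. Here is a toy sketch of that masking step — illustrative only; the real extension operates on CLIP token embeddings, not plain strings:

```python
def cutoff_variants(tokens, targets, pad="_"):
    """For each target token, build a prompt copy where the other
    targets are replaced with a padding token, so attributes don't
    contaminate each other (sketch of the idea, not the extension)."""
    variants = []
    for keep in targets:
        variants.append([
            pad if (t in targets and t != keep) else t
            for t in tokens
        ])
    return variants

prompt = ["blue", "eyes", "red", "dress"]
for v in cutoff_variants(prompt, ["blue", "red"]):
    print(" ".join(v))
# blue eyes _ dress
# _ eyes red dress
```

Each variant is then encoded separately and the embeddings are recombined, so the encoder never sees "blue" and "red" competing for the same tokens.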
- Multiple people question
There's an extension called Cutoff that's intended to help localize the influence of prompts. I've had rather equivocal success with it, so I can't at this time wholeheartedly endorse it, but it's certainly worth trying.
- Y'all are gonna get sick of me lol. I am trying to use Target Tokens....
https://github.com/hnmr293/sd-webui-cutoff (prevents color contamination, e.g. eye color leaking into hair color)
- Controlnet reference+lineart model works so great!
Then get the Cutoff extension and enable it: https://github.com/hnmr293/sd-webui-cutoff
- How do I describe an object without those properties being applied to a different part of my image?
This extension might be what you're looking for.
- [Stable Diffusion] Always the same clothing color on the character. It's now possible.
- Is there an SD client or a way to have the CPU take over when the GPU is out of memory?
The ones I use most often, in conjunction with ControlNet, are Regional Prompter, and Cutoff.
- I keep getting Caucasian skin even though I am specifying obsidian skin or black... yellow eyes instead of lavender...
Then use Cutoff to reduce color contamination when you reference multiple colors (I recommend weight 2 with Cutoff Strongly disabled). https://github.com/hnmr293/sd-webui-cutoff
What are some alternatives?
ultimate-upscale-for-automatic1111
a1111-sd-webui-tome - ToMe extension for Stable Diffusion A1111 WebUI
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
ComfyUI_TiledKSampler - Tiled samplers for ComfyUI
kohya_ss
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
a1111-sd-webui-tagcomplete - Booru style tag autocompletion for AUTOMATIC1111's Stable Diffusion web UI
mixture-of-diffusers - Mixture of Diffusers for scene composition and high resolution image generation
adetailer - Auto detecting, masking and inpainting with detection model.
stable-diffusion-webui-state - Stable Diffusion extension that preserves ui state