multidiffusion-upscaler-for-automatic1111 vs stable-diffusion-webui-directml

| | multidiffusion-upscaler-for-automatic1111 | stable-diffusion-webui-directml |
|---|---|---|
| Mentions | 83 | 74 |
| Stars | 4,459 | 1,564 |
| Growth | - | - |
| Activity | 7.8 | 9.9 |
| Latest Commit | about 1 month ago | 7 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multidiffusion-upscaler-for-automatic1111
- Stable Diffusion can't stop generating extra torsos, even with negative prompt. Any suggestions?
-
Reduce Or Remove The Use Of RAM In Image Generation
Use Tiled VAE; it will save VRAM: pkuliyi2015/multidiffusion-upscaler-for-automatic1111: Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0 (github.com)
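For context, the trick behind Tiled VAE can be sketched in a few lines: instead of decoding the whole latent in one pass, the extension decodes fixed-size tiles and stitches the results, so peak memory depends on the tile size rather than the image size. The `decode` function below is a hypothetical stand-in for the real VAE decoder (which is a neural network, and which the extension also runs on overlapping tiles to avoid seams); this is a sketch of the tiling idea, not the extension's implementation.

```python
import numpy as np

def decode(latent_tile):
    # Hypothetical stand-in for the VAE decoder: upscales 8x per side,
    # like SD's latent-to-pixel ratio. The real decoder is a neural net.
    return np.repeat(np.repeat(latent_tile, 8, axis=0), 8, axis=1)

def tiled_decode(latent, tile=64):
    """Decode a large latent tile by tile so peak memory stays bounded
    by the tile size instead of the full image size."""
    h, w = latent.shape[:2]
    out = np.zeros((h * 8, w * 8), dtype=latent.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            t = latent[y:y + tile, x:x + tile]
            out[y * 8:(y + t.shape[0]) * 8,
                x * 8:(x + t.shape[1]) * 8] = decode(t)
    return out
```

With a purely local `decode` like this one, the tiled result matches the one-shot result exactly; the real extension needs tile overlap and blending because convolutions see across tile borders.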
-
How do I fix these boxes/lines appearing while using Ultimate SD Upscale + ControlNet tiles? All the details are in my comment below. Please help, many thanks!
My favorite solution is to not use Ultimate SD Upscale and instead use multidiffusion-upscaler.
- Is there any way to purge your card's VRAM after hitting an out-of-memory error, other than restarting the web UI?
-
Not able to generate images larger than 400×400
Sure, I personally use Tiled Diffusion (https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111); it works like a charm. Also use ADetailer for faces if needed.
-
GTX 1070 slow render speeds
What worked for my 1080 was using TiledVAE and turning down the quality of my previews - I don't pay much attention to it/s but it's definitely faster than using --medvram, and now I can handle batches and large resolutions without things exploding on me.
-
Initial release of A8R8 (Alternate Reality), an opinionated interface for Stable Diffusion image generation, works with A1111. Docker installation included. Open source and runs locally!
I would highly recommend adding https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 to your A1111 installation, TiledVAE is enabled automatically under the hood in A8R8; this will allow you to get even larger generations before getting an out of memory error. You'll get a Tiled Diffusion checkbox with some reasonable hardcoded defaults as well.
- I love the Tile ControlNet, but it's really easy to overdo. Look at this monstrosity of tiny detail I made by accident.
-
Can you generate 2048x2048 images with an 8GB GPU?
Use Tiled VAE https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
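A back-of-envelope calculation shows why the one-shot decode fails at this size while Tiled VAE succeeds: the latent is tiny, but the decoder materialises wide feature maps at output resolution on the way up. The channel count below is an assumption for illustration (the real SD VAE decoder's widths vary by stage); the numbers are arithmetic, not measurements.

```python
# Why decoding a 2048x2048 image in one shot can exhaust an 8 GB card,
# while the latent itself is tiny. Channel counts are illustrative.
H = W = 2048
latent_bytes = (H // 8) * (W // 8) * 4 * 4   # 256x256x4 fp32 latent
image_bytes = H * W * 3 * 4                  # decoded RGB image, fp32
# A single 128-channel fp32 feature map at output resolution:
feature_map_bytes = H * W * 128 * 4

print(latent_bytes // 2**20, "MiB latent")               # 1 MiB
print(image_bytes // 2**20, "MiB image")                 # 48 MiB
print(feature_map_bytes // 2**30, "GiB per feature map")  # 2 GiB
```

Several such maps (plus attention buffers) are alive at once during the decode, which is how 8 GB disappears; decoding in tiles caps the spatial size of every intermediate tensor.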
-
SDXL 0.9 vs SD 2.1 vs SD 1.5 (All base models) - Batman taking a selfie in a jungle, 4k
That's weird. 10GB should allow you to hires to 2048x2048 at least. Use Tiled VAE extension https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 that will allow you to go even beyond that.
stable-diffusion-webui-directml
- stable diffusion compliant with amd gpu or not?
-
RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!
I'm getting this error using this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues), freshly installed. I'm running it on an AMD RX 6700 XT with 12 GB of VRAM, generating a single image at default settings (512x512, 20 steps, etc.). I can do simple prompts (i.e. "kitty cat"), but as soon as I add a couple more tags, I get the aforementioned error message, usually 20-30% into generating an image. I went through this thread (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/38) and tried every solution I saw, most of them being variations of adding --medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check to the command-line arguments. What else might I be able to try? Thanks.
-
Best AMD SD Guide for 2023?
I use Automatic1111; you can find the installation on GitHub, in this branch: https://github.com/lshqqytiger/stable-diffusion-webui-directml. It works fine, although the speed is what it is; I also have an old GPU.
-
Just how much VRAM do I need? It keeps saying I don't have enough with a 7900xt.
I'm using this one: https://github.com/lshqqytiger/stable-diffusion-webui-directml
-
I am confused regarding same seed = same picture. Any explanations or insights? The journey for this in comments.
- https://github.com/lshqqytiger/stable-diffusion-webui-directm - starting webui-user with COMMANDLINE_ARGS=--opt-sub-quad-attention --disable-nan-check - AMD 8GB Radeon Pro WX7100
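On the "same seed = same picture" question: with identical settings (model, sampler, steps, resolution, prompt), the seed fully determines the initial latent noise, so generation is reproducible; differences creep in when anything else changes, including attention optimizations like the flags above, or GPU-level nondeterminism. The deterministic-seeding idea can be shown with the standard library; `initial_noise` is a hypothetical stand-in, not webui's actual noise code.

```python
import random

def initial_noise(seed, n=8):
    # Hypothetical stand-in for latent-noise initialization: the seed
    # fully determines the starting noise, so identical settings
    # reproduce the same starting point.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = initial_noise(1234)
b = initial_noise(1234)
c = initial_noise(1235)
print(a == b)  # True: same seed, same starting noise
print(a == c)  # False: a different seed diverges immediately
```

The same principle holds in the webui: the seed pins down the starting noise, but every downstream setting still has to match for the picture to match.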
- Who went to the march against AI at the Obelisco? Tell us how it went.
-
StableDiffusion will only use my CPU?
I'm running this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml) on a pc with a Ryzen 5700x and a Radeon RX 6700 XT 12 GB Video Card.
-
(AMD) Random Running Out of Memory Error After Generation
I am using direct-ml fork
-
Stable Diffusion on AMD 6900XT is Super Slow
I'm running Stable Diffusion on my 6900XT, and I feel like it's way slower than normal. I'm using the updated WebUI: https://github.com/lshqqytiger/stable-diffusion-webui-directml.
-
Stable Diffusion DirectML on AMD APU only (no external GPU) - Ram Usage?
This refers to the use of iGPUs (example: Ryzen 5 5600G): no graphics card, only an APU. The DirectML fork of Stable Diffusion (SD from now on) works pretty well with AMD, and not only with dedicated GPUs but also with APU-only systems.
What are some alternatives?
ultimate-upscale-for-automatic1111
SHARK - SHARK - High Performance Machine Learning Distribution
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
StableDiffusionUI - Stable Diffusion UI: Diffusers (CUDA/ONNX)
ComfyUI_TiledKSampler - Tiled samplers for ComfyUI
sd-webui-controlnet - WebUI extension for ControlNet
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
mixture-of-diffusers - Mixture of Diffusers for scene composition and high resolution image generation
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
OnnxDiffusersUI - UI for ONNX based diffusers