stable-diffusion-webui-directml vs stable-diffusion-webui-ux
| | stable-diffusion-webui-directml | stable-diffusion-webui-ux |
|---|---|---|
| Mentions | 74 | 30 |
| Stars | 1,577 | 944 |
| Growth | - | - |
| Activity | 9.9 | 9.9 |
| Latest commit | 7 days ago | 30 days ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-directml
- Is Stable Diffusion compatible with AMD GPUs or not?
- RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!
  I'm getting this error on a fresh install of this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues). I'm running it on an AMD RX 6700 XT with 12 GB of VRAM, generating a single image at default settings (512x512, 20 steps, etc.). I can do simple prompts (e.g. "kitty cat"), but as soon as I add a couple more tags I get the aforementioned error message, usually 20-30% into generating an image. I went through this thread (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/38) and tried every solution I saw, most of them being variations of adding `--medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check` to the command-line arguments. What else might I be able to try? Thanks.
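Flags like the ones above go into the `COMMANDLINE_ARGS` variable of the launcher script. A minimal sketch of a `webui-user.bat` for a low-VRAM AMD card follows; it assumes the standard AUTOMATIC1111/DirectML-fork launcher layout, and the specific flag combination shown is just an illustration - add flags one at a time and re-test, rather than stacking them all at once:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem Reduce VRAM pressure on AMD/DirectML; adjust incrementally per run.
set COMMANDLINE_ARGS=--medvram --opt-sub-quad-attention --disable-nan-check

call webui.bat
```

On Linux the equivalent is an `export COMMANDLINE_ARGS="..."` line in `webui-user.sh`.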
- Best AMD SD Guide for 2023?
  I use AUTOMATIC1111; you can find the installation instructions on GitHub, in this branch: https://github.com/lshqqytiger/stable-diffusion-webui-directml. It works fine, although the speed is what it is; I also have an old GPU.
- Just how much VRAM do I need? It keeps saying I don't have enough with a 7900 XT.
  I'm using this one: https://github.com/lshqqytiger/stable-diffusion-webui-directml
- I am confused regarding same seed = same picture. Any explanations or insights? The journey for this is in the comments.
  https://github.com/lshqqytiger/stable-diffusion-webui-directml - starting webui-user with `COMMANDLINE_ARGS=--opt-sub-quad-attention --disable-nan-check` - AMD 8 GB Radeon Pro WX 7100
- Who went to the march against AI at the Obelisco? Tell us how it went.
- StableDiffusion will only use my CPU?
  I'm running this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml) on a PC with a Ryzen 5700X and a Radeon RX 6700 XT 12 GB video card.
- (AMD) Random Out-of-Memory Error After Generation
  I am using the DirectML fork.
- Stable Diffusion on AMD 6900 XT is Super Slow
  I'm running Stable Diffusion on my 6900 XT, and I feel like it's way slower than normal. I'm using the updated web UI: https://github.com/lshqqytiger/stable-diffusion-webui-directml.
- Stable Diffusion DirectML on AMD APU only (no external GPU) - RAM Usage?
  This refers to the use of iGPUs (for example, the Ryzen 5 5600G): no graphics card, only an APU. The DirectML fork of Stable Diffusion (SD for short from now on) works pretty well with AMD hardware - not only with dedicated GPUs, but also with APUs alone, without a discrete GPU.
stable-diffusion-webui-ux
- Wildly unpredictable face quality
- Alternative to Canvas Zoom Extension?
  If you're open to trying a fork of Auto1111, I'd recommend the UI-UX fork by anapnoe. It's a big UI overhaul, one item of which is improved inpainting: it has move, zoom, and full-screen modes. Take a look at the screenshots and see if that fits your needs.
- Better mobile inpainting from locally hosted Stable Diffusion?
  I'd highly recommend the UI-UX fork of Auto1111 by anapnoe. It is designed with mobile in mind and has a custom interface for img2img and inpainting.
- SDFX: New UI for Stable Diffusion
- Is there any interest in a mobile companion app to go with automatic1111?
- AUTOMATIC1111 updated to version 1.3.0
  If you want more than that, the only thing I know of is a fork that needs a fresh install: https://github.com/anapnoe/stable-diffusion-webui-ux
- What is the text-to-image AI tool?
  https://github.com/vladmandic/automatic or https://github.com/anapnoe/stable-diffusion-webui-ux
- What web UI should I go with for SD generation? I currently use Easy Diffusion 2.5, but I've been meaning to switch since I just got a GeForce RTX 3070. Any good recommendations?
- webui.bat closes instantly?
  Yes, that's the fork I'm using; it's got a great UX, and I much preferred it when it was working!
- What's your opinion on InvokeAI compared to Automatic1111?
  Seriously, if anyone wants to use Auto1111 (which supports all the important SD "extras") but is turned off by the clunky interface, try this fork: https://github.com/anapnoe/stable-diffusion-webui-ux. It has a much better UI, runs nearly every Auto1111 extension without issues, AND it only merges stable commits. If you were already considering InvokeAI (which is in itself a great downscaled SD environment), then try anapnoe's web UI. It's closer to the bleeding edge, but without the actual bleeding.
What are some alternatives?
SHARK - High Performance Machine Learning Distribution
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
StableDiffusionUI - Stable Diffusion UI: Diffusers (CUDA/ONNX)
a1111-nevysha-comfy-ui - A collection of tweak to improve Auto1111 UI//UX [Moved to: https://github.com/Nevysha/Cozy-Nest]
sd-webui-controlnet - WebUI extension for ControlNet
sd-web-ui-kitchen-theme - 🧿 Kitchen theme for stable diffusion webui [Moved to: https://github.com/canisminor1990/sd-webui-kitchen-theme]
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stable-diffusion-webui - Stable Diffusion web UI
Visual Studio Code - Visual Studio Code
OnnxDiffusersUI - UI for ONNX based diffusers
StableStudio - Community interface for generative AI