stable-diffusion-webui-directml
stablediffusion-directml
| | stable-diffusion-webui-directml | stablediffusion-directml |
|---|---|---|
| Mentions | 74 | 6 |
| Stars | 1,577 | 41 |
| Growth | - | - |
| Activity | 9.9 | 3.6 |
| Latest commit | 6 days ago | about 1 year ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-directml
- Is Stable Diffusion compatible with AMD GPUs or not?
-
RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!
I'm getting this error on a fresh install of this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues). I'm running it on an AMD RX 6700 XT with 12 GB of VRAM, generating a single image at default settings (512x512, 20 steps, etc.). I can do simple prompts (i.e. "kitty cat"), but as soon as I add a couple more tags, I get the aforementioned error message, usually 20-30% into generating an image. I went through this thread (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/38) and tried every solution I saw, most of them being variations of adding `--medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check` to the command-line arguments. What else might I be able to try? Thanks.
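For context, the flags discussed in that thread go into `COMMANDLINE_ARGS` in the launcher script. A minimal sketch of a `webui-user.bat` for a low-VRAM DirectML setup follows; the specific flag combination is taken from the post above, not a recommendation, so adjust it for your card:

```shell
@echo off
rem webui-user.bat (sketch) -- low-VRAM launch settings for the DirectML fork.
rem The exact flag set below is one of the combinations from the linked thread.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --opt-sub-quad-attention --disable-nan-check

call webui.bat
```

Flags are additive, so it is usually worth trying `--medvram` alone first and only adding the attention/precision flags if out-of-memory errors persist.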
-
Best AMD SD Guide for 2023?
I use Automatic1111; you can find the installation instructions on GitHub for this branch: https://github.com/lshqqytiger/stable-diffusion-webui-directml. It works fine, although the speed is what it is. I also have an old GPU.
-
Just how much VRAM do I need? It keeps saying I don't have enough with a 7900xt.
I'm using this one: https://github.com/lshqqytiger/stable-diffusion-webui-directml
-
I am confused regarding same seed = same picture. Any explanations or insights? The journey for this in comments.
- https://github.com/lshqqytiger/stable-diffusion-webui-directml - starting the web UI with COMMANDLINE_ARGS=--opt-sub-quad-attention --disable-nan-check - AMD 8GB Radeon Pro WX7100
- Who went to the march against AI at the Obelisco? Tell us how it went
-
StableDiffusion will only use my CPU?
I'm running this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml) on a PC with a Ryzen 5700X and a Radeon RX 6700 XT 12 GB video card.
-
(AMD) Random Running Out of Memory Error After Generation
I am using the DirectML fork.
-
Stable Diffusion on AMD 6900XT is Super Slow
I'm running Stable Diffusion on my 6900 XT, and I feel like it's way slower than normal, using the updated web UI: https://github.com/lshqqytiger/stable-diffusion-webui-directml.
-
Stable Diffusion DirectML on AMD APU only (no external GPU) - Ram Usage?
This refers to the use of iGPUs (example: Ryzen 5 5600G). No graphics card, only an APU. The DirectML fork of Stable Diffusion (SD for short from now on) works pretty well with AMD, and not only with discrete GPUs but also with APU-only systems that have no dedicated graphics card.
stablediffusion-directml
-
Automatic1111 for Intel Arc (A380 Tested)
stable-diffusion-stability-ai (DirectML version)
-
Stable Diffusion on AMD APUs
- https://github.com/lshqqytiger/k-diffusion-directml/tree/master -> this will need to be renamed k-diffusion
- https://github.com/lshqqytiger/stablediffusion-directml/tree/main -> this will need to be renamed stable-diffusion-stability-ai
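The clone-and-rename step described above can be sketched as follows; the `repositories` path assumes the web UI is installed in `stable-diffusion-webui-directml`, so adjust it to your own install location:

```shell
# Sketch of the rename step above (install path is an assumption).
cd stable-diffusion-webui-directml/repositories

# Cloning straight into the directory names the web UI expects
# avoids a separate rename step after downloading.
git clone https://github.com/lshqqytiger/k-diffusion-directml k-diffusion
git clone https://github.com/lshqqytiger/stablediffusion-directml stable-diffusion-stability-ai
```

If you downloaded the repositories as ZIP files instead, extracting them and renaming the extracted folders to `k-diffusion` and `stable-diffusion-stability-ai` achieves the same result.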
-
What am I doing wrong? Inpainting reverts to original...
I'm using the DirectML fork of the Automatic1111 web GUI on my AMD RX 6800 XT, and it seems to work fine with txt2img, but my attempts at inpainting to fix up the faces are getting me nowhere. This isn't the browser issue: the faces are shown as being edited in the preview, but they successively converge on the original bad image that's masked... I've attached a YouTube clip of how it goes for me. Help!
-
Man I wish I could do all this cool shit too
Download the k-diffusion and stablediffusion folders (click the green "Code" button and download as ZIP). Go to the folder you installed in step 1, browse to repositories, and extract these two folders there. Rename them to k-diffusion and stable-diffusion-stability-ai. If you already have these folders, delete them first.
-
SD made me regret buying an AMD card.
There are a couple of options. An easy one for Windows is this fork: https://github.com/lshqqytiger/stablediffusion-directml. You don't need to convert models to ONNX; it's simply A1111 using DirectML, so you can use all features like ControlNet, but it will probably be slower than Shark or Linux A1111 with ROCm (why the hell is there no ROCm for Windows :/). To be honest, if I were you, I'd probably try to sell the card and buy, for example, a 3060 12GB.
-
Intel Arc Stable Diffusion?
3) install k-diffusion-directml and stablediffusion-directml under ..\stable-diffusion-webui-arc-directml-master\repositories (tutorial)
What are some alternatives?
SHARK - SHARK - High Performance Machine Learning Distribution
StableDiffusionUI - Stable Diffusion UI: Diffusers (CUDA/ONNX)
CodeFormer - [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
sd-webui-controlnet - WebUI extension for ControlNet
stable-diffusion-webui-colab - stable diffusion webui colab
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
civitai - A repository of models, textual inversions, and more
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui-arc-directml - A proven usable Stable diffusion webui project on Intel Arc GPU with DirectML
OnnxDiffusersUI - UI for ONNX based diffusers
k-diffusion-directml - Karras et al. (2022) diffusion models for PyTorch