sd-extension-system-info vs stable-diffusion-webui-directml

| | sd-extension-system-info | stable-diffusion-webui-directml |
|---|---|---|
| Mentions | 51 | 74 |
| Stars | 258 | 1,551 |
| Growth | - | - |
| Activity | 9.3 | 9.9 |
| Last commit | 3 months ago | 9 days ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
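The exact weighting isn't published here, but a rough, purely illustrative sketch of a recency-weighted activity score with a percentile mapping (the half-life and the 0-10 scaling are assumptions, not the site's real formula) could look like this:

```python
import math
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Recency-weighted commit count: recent commits contribute more
    than older ones (illustrative only, not the site's actual formula)."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += math.exp(-math.log(2) * age_days / half_life_days)
    return score

def to_activity_scale(score, all_scores):
    """Map a raw score to a 0-10 scale by percentile rank, so that e.g.
    9.0 means the project is in the top 10% of tracked projects."""
    rank = sum(s <= score for s in all_scores) / len(all_scores)
    return round(10 * rank, 1)
```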
sd-extension-system-info
- RTX 4070 vs RX 7800 XT
-
AMD for AI
I've been using both SD and various LLMs on Linux without any issue and have done so for months. Windows support is also starting to roll out slowly, with koboldcpp-rocm recently giving me 20-25+ t/s for a 13B even on Windows. You can see what SD performance is like on sites like these; those numbers roughly match what I get on my RX 6800 as well (8 t/s).
-
Stable Diffusion in pure C/C++
That seems a lot worse than a 2060 SUPER with PyTorch in A1111.
https://vladmandic.github.io/sd-extension-system-info/pages/... (search for 2060 SUPER)
-
Iterations per second benchmarking question
But usually A1111 users benchmark with this extension: https://github.com/vladmandic/sd-extension-system-info
-
Best AMD SD Guide for 2023?
AMD SD = Setup Disaster? It was quite troublesome googling the few Linux/amdgpu/ROCm/SD version/config/parameter posts online. Also, the whole PC may hang during generation, which is bad for the hard disk. Your card is way more powerful, so it may not hang like mine. People are getting 8 it/s: https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
-
Which one is better? Nvidia Tesla M40 vs Nvidia Tesla P4?
According to the system info benchmark, the M40 is like 1-2 it/s and the P4 is barely better than that.
- Video card price/performance ratio
-
--medvram. Should I remove this flag? Running 3090
Anyway, to properly benchmark the impact of different switches on your image generation speed, it is better to use the benchmarking utility from the extension https://github.com/vladmandic/sd-extension-system-info (it also creates a very handy table of results from other users at https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html for you to compare with).
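For reference, the it/s figure these benchmarks report is simply denoising steps completed per second of wall-clock time. A minimal sketch of that kind of measurement, with a placeholder `run_step` callable standing in for one sampler step (not the extension's actual API):

```python
import time

def iterations_per_second(run_step, steps=20, warmup=3):
    """Time a fixed number of steps and report it/s.
    `run_step` is a stand-in for one sampler step; warmup iterations
    are excluded so one-time setup cost doesn't skew the result."""
    for _ in range(warmup):
        run_step()
    start = time.perf_counter()
    for _ in range(steps):
        run_step()
    elapsed = time.perf_counter() - start
    return steps / elapsed

if __name__ == "__main__":
    # Dummy workload standing in for a real sampler step.
    dummy = lambda: sum(i * i for i in range(100_000))
    print(f"{iterations_per_second(dummy):.2f} it/s")
```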
-
Searching for install guide for top performance setup on WSL2 (Automatic1111)
I can see that the top performance benchmark results on SD WebUI Benchmark Data (using an RTX 4090) are obtained through WSL2, running Automatic1111 on a Linux distro with Python 3.10.11 and PyTorch 2.1.0.dev+cu121 (like benchmark id: 4126).
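To compare your own WSL2 setup against an entry like that, a quick sketch (assuming PyTorch is installed) is to print the versions the benchmark table lists:

```python
import platform
import torch

# Compare these against the Python / PyTorch / CUDA versions listed
# in the benchmark entry you are trying to replicate.
print("Python :", platform.python_version())
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU    :", torch.cuda.get_device_name(0))
```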
-
Advice for Optimization on an RTX 8000
You should be able to compare based on the published benchmarks, just replicate the settings based on what's reported https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
stable-diffusion-webui-directml
- Stable Diffusion compliant with AMD GPU or not?
-
RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!
I'm getting this error using (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues), freshly installed. I'm running it on an AMD RX 6700 XT with 12 GB of VRAM. Generating a single image at default settings (512x512, 20 steps, etc.), I can do simple prompts (e.g. "kitty cat"), but as soon as I add a couple more tags, I get the aforementioned error message, usually 20-30% into generating an image. I went through this thread (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/38) and tried every solution I saw, most of them being variations of adding --medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check to the command-line arguments. What else might I be able to try? Thanks.
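One reason several of those suggestions trade speed for memory: --no-half / --precision full keep tensors in float32, which takes twice the VRAM of float16. A small, purely illustrative sketch (assuming PyTorch is installed; the activation shape below is a made-up example, not one taken from the webui):

```python
import torch

def tensor_megabytes(shape, dtype):
    """Memory footprint of a tensor with the given shape and dtype, in MB."""
    t = torch.empty(shape, dtype=dtype)
    return t.element_size() * t.nelement() / (1024 ** 2)

# Hypothetical activation shape, for illustration only: batch x channels x H x W.
shape = (1, 320, 64, 64)
print("float32:", tensor_megabytes(shape, torch.float32), "MB")  # full precision
print("float16:", tensor_megabytes(shape, torch.float16), "MB")  # half precision
```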
-
Best AMD SD Guide for 2023?
I use Automatic1111; you can find the installation instructions on GitHub, in this branch: https://github.com/lshqqytiger/stable-diffusion-webui-directml. It works fine, although the speed is what it is; I also have an old GPU.
-
Just how much VRAM do I need? It keeps saying I don't have enough with a 7900xt.
I'm using this one: https://github.com/lshqqytiger/stable-diffusion-webui-directml
-
I am confused regarding same seed = same picture. Any explanations or insights? The journey for this in comments.
- https://github.com/lshqqytiger/stable-diffusion-webui-directml - starting webui-user with COMMANDLINE_ARGS=--opt-sub-quad-attention --disable-nan-check - AMD 8GB Radeon Pro WX7100
- Who went to the march against AI at the Obelisco? Tell us how it was
-
StableDiffusion will only use my CPU?
I'm running this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml) on a PC with a Ryzen 5700X and a Radeon RX 6700 XT 12 GB video card.
-
(AMD) Random Running Out of Memory Error After Generation
I am using the DirectML fork.
-
Stable Diffusion on AMD 6900XT is Super Slow
I'm running Stable Diffusion on my 6900 XT, and I feel like it's way slower than normal. I'm using the updated WebUI https://github.com/lshqqytiger/stable-diffusion-webui-directml.
-
Stable Diffusion DirectML on AMD APU only (no external GPU) - Ram Usage?
This refers to the use of iGPUs (for example, the Ryzen 5 5600G): no graphics card, only an APU. The DirectML fork of Stable Diffusion (SD for short from now on) works pretty well with AMD, and not only with GPUs but also with APU-only systems that have no dedicated GPU.
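To confirm that DirectML actually sees the APU's integrated graphics, a small smoke test with the torch-directml package can help (a sketch assuming torch-directml is installed; device_count/device_name follow the package's samples):

```python
import torch
import torch_directml  # pip install torch-directml

# Enumerate DirectML devices; an APU's integrated GPU should show up here.
for i in range(torch_directml.device_count()):
    print(i, torch_directml.device_name(i))

# Allocate a small tensor on the default DirectML device as a smoke test.
dml = torch_directml.device()
x = torch.ones(4, 4, device=dml)
print(x.device, x.sum().item())
```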
What are some alternatives?
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
SHARK - SHARK - High Performance Machine Learning Distribution
tomesd - Speed up Stable Diffusion with this one simple trick!
StableDiffusionUI - Stable Diffusion UI: Diffusers (CUDA/ONNX)
voltaML-fast-stable-diffusion - Beautiful and Easy to use Stable Diffusion WebUI
sd-webui-controlnet - WebUI extension for ControlNet
scribble-diffusion - Turn your rough sketch into a refined image using AI
HIP - HIP: C++ Heterogeneous-Compute Interface for Portability
stable-diffusion-webui - Stable Diffusion web UI
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️
OnnxDiffusersUI - UI for ONNX based diffusers