tomesd vs stable-diffusion-webui-directml

| | tomesd | stable-diffusion-webui-directml |
|---|---|---|
| Mentions | 18 | 3 |
| Stars | 1,207 | 4 |
| Growth | - | - |
| Activity | 5.4 | 9.6 |
| Latest commit | 5 months ago | 6 months ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tomesd
-
List of all the ways to improve performance for stable diffusion.
It shows up to a 5.4x speedup; you can see the results in the image on the GitHub repo here: https://github.com/dbolya/tomesd
-
Question about automatic1111 set up after changing gpu
Another optimization extension you can use is token merging, which has been reported to give around 5.4x faster image generation.
- +39%~51% faster at the cost of some details? ToMe officially arrives in Auto1111's webui v1.3.0
-
AUTOMATIC1111 updated to 1.3.0 version
It merges redundant tokens: https://github.com/dbolya/tomesd So it can make the generation slightly faster.
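The idea can be illustrated with a toy sketch: repeatedly average the two most cosine-similar token embeddings, shrinking the sequence the model has to process. (This is only an illustration of the merging concept; the actual ToMe algorithm in tomesd uses a much faster bipartite soft-matching scheme, and the function below is hypothetical.)

```python
import numpy as np

def merge_most_similar_tokens(tokens: np.ndarray, merge_count: int) -> np.ndarray:
    """Toy token merging: average the two most cosine-similar token
    embeddings, repeated merge_count times. Each merge removes one token."""
    tokens = tokens.astype(float)
    for _ in range(merge_count):
        # Row-normalize, then pairwise cosine similarity.
        norms = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
        sim = norms @ norms.T
        np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        merged = (tokens[i] + tokens[j]) / 2  # average the closest pair
        keep = [k for k in range(len(tokens)) if k not in (i, j)]
        tokens = np.vstack([tokens[keep], merged])
    return tokens

rng = np.random.default_rng(0)
seq = rng.normal(size=(8, 4))  # 8 tokens, 4-dim embeddings
reduced = merge_most_similar_tokens(seq, merge_count=3)
print(reduced.shape)  # (5, 4): three merges remove three tokens
```

Fewer tokens means less work in the attention layers, which is where the speedup comes from; the quality trade-off depends on how aggressively tokens are merged.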
-
I made some changes in AUTOMATIC1111 SD webui, faster but lower VRAM usage
Mods patched:
- tomesd
- Pillow-SIMD
- OpenCV-CUDA (WIP)
- Removed some unused imports and startup checks
- Improved performance with reduced VRAM usage (tested on txt2img only)
- Added a new option to use an external RealESRGAN with --external-realesrgan
-
Honest question, how are people getting ~35-40 it/s on a 4090? Mine spits out 20 at most
Were the 40 it/s perhaps achieved with ToMe?
-
Vlad diffusion keeps growing. Big thanx to all supporters :)
-
Token Merging actually works and reduces generation time as well as RAM
This feature comes from this project: https://github.com/dbolya/tomesd
-
How can I squeeze every ounce of performance from web UI?
GitHub - dbolya/tomesd: Speed up Stable Diffusion with this one simple trick!
- Token Merging for Fast Stable Diffusion
stable-diffusion-webui-directml
-
AUTOMATIC1111 updated to 1.3.0 version
You can use the DirectML version right here
-
Very new to this could someone help me with installing Stable Diffusion?
If you have a Radeon card and are using Windows, you're supposed to use the DirectML version.
-
I don't know how it works, but some Koreans found a way to run AUTOMATIC1111's webui on Windows with an RX 5700 XT
References: https://arca.live/b/aiart/67595800, https://github.com/hgrsikghrd/stable-diffusion-webui-directml
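For reference, getting the DirectML fork running follows the usual AUTOMATIC1111 launch flow. This is a minimal setup sketch assuming the standard webui launcher script names; check the fork's README for the exact steps on your system.

```shell
# Clone the DirectML fork (Windows + AMD GPUs); URL from the post above
git clone https://github.com/hgrsikghrd/stable-diffusion-webui-directml.git
cd stable-diffusion-webui-directml
# First launch creates the Python venv and installs dependencies
# (webui-user.bat is the standard A1111 launcher name; verify in the repo)
.\webui-user.bat
```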
What are some alternatives?
stable-diffusion-webui-ux - Stable Diffusion web UI UX
stable-diffusion-webui - Stable Diffusion web UI
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stable-diffusion-webui-tensorrt
gradio_fakeimage - Hacking a non component image into gradio
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
sd-extension-system-info - System and platform info and standardized benchmarking extension for SD.Next and WebUI
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation