| | tomesd | diffusers |
|---|---|---|
| Mentions | 18 | 266 |
| Stars | 1,207 | 22,543 |
| Growth | - | 2.3% |
| Activity | 5.4 | 9.9 |
| Latest commit | 5 months ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tomesd
- List of all the ways to improve performance for stable diffusion.
They report up to a 5.4x speedup; you can see the results in the image on the GitHub repo here: https://github.com/dbolya/tomesd
- Question about automatic1111 set up after changing gpu
Another optimization extension you can use is token merging, which reportedly gives around 5.4x faster image generation.
- +39%~51% faster at the cost of some details? ToMe officially arrives in Auto1111's webui v1.3.0
- AUTOMATIC1111 updated to 1.3.0 version
It merges redundant tokens: https://github.com/dbolya/tomesd, so it can make generation slightly faster.
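For reference, a minimal sketch of patching token merging onto a diffusers pipeline with tomesd's `apply_patch`; the model id, prompt, and `ratio` value below are only illustrative placeholders.

```python
import torch
import tomesd
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion pipeline (the model id is a placeholder example).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Patch the pipeline with token merging; a higher ratio merges more tokens,
# trading some detail for speed.
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```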
- I made some changes in AUTOMATIC1111 SD webui, faster but lower VRAM usage
Mods patched: Tomesd, Pillow-SIMD, OpenCV-CUDA (WIP). Removed some unused imports and startup checks. Improved performance with reduced VRAM usage (tested on txt2img only). Added a new option to use an external RealESRGAN with --external-realesrgan.
- Honest question, how are people getting ~35-40 it/sec on 4090? Mine spits out 20 at most
Were the 40 it/s perhaps achieved with ToMe?
- Vlad diffusion keeps growing. Big thanx to all supporters :)
Done! Proposal
- Token Merging actually works and reduces generation time as well as RAM
This feature comes from this project: https://github.com/dbolya/tomesd
- How can I squeeze every ounce of performance from web UI?
GitHub - dbolya/tomesd: Speed up Stable Diffusion with this one simple trick!
- Token Merging for Fast Stable Diffusion
diffusers
- StableDiffusionSafetyChecker
- 🧨 diffusers 0.24.0 is out with Kandinsky 3.0, IP Adapters, and others
- What am I missing here? Where's the RND coming from?
I'm missing something about the random factor in the sample code from https://github.com/huggingface/diffusers/blob/main/README.md
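The randomness in that sample comes from the randomly drawn initial latent noise. As a minimal sketch, it can be pinned by passing a seeded `torch.Generator` to the pipeline call; the model id and prompt below are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The initial latent noise is sampled randomly on every call; a seeded
# generator makes the output reproducible.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("a photo of an astronaut riding a horse on mars", generator=generator).images[0]
```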
- T2IAdapter+ControlNet at the same time
Hey people, I noticed that combining these two methods in a single forward pass increases the controllability of the generation quite a bit. I was kind of puzzled that sometimes ControlNet yielded better results than T2IAdapter and sometimes it was the other way around, so I decided to test both at the same time, and the results were quite nice. Some visuals and more motivation here: https://github.com/huggingface/diffusers/issues/5847 And it was already merged here: https://github.com/huggingface/diffusers/pull/5869
- Won't you benchmark me?
Open Parti Prompts: The better way to evaluate diffusion models (repo)
- kohya_ss error. How do I solve this?
You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
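That warning is emitted by diffusers itself when a pipeline is loaded with the safety checker turned off, roughly as in the sketch below (the model id is a placeholder).

```python
from diffusers import StableDiffusionPipeline

# Passing safety_checker=None disables the NSFW filter and triggers the
# warning quoted above when the pipeline is loaded.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
)
```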
- Making a ControlNet inpaint for sdxl
- Stable Diffusion Gets a Major Boost with RTX Acceleration
For developers, TensorRT support also exists for the diffusers library via community pipelines. [1] It's limited, but if you're only supporting a subset of features, it can help.
In general, these insane speed boosts come at the cost of bleeding-edge features.
[1] https://github.com/huggingface/diffusers/blob/28e8d1f6ec82a6...
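Community pipelines are loaded by name through `DiffusionPipeline.from_pretrained(..., custom_pipeline=...)`. The sketch below assumes the TensorRT txt2img community pipeline and a placeholder model id; it additionally requires the TensorRT/ONNX dependencies, and each community pipeline documents its own extra setup steps.

```python
import torch
from diffusers import DiffusionPipeline

# Sketch: community pipelines are pulled in by name via custom_pipeline.
# The TensorRT pipeline also needs tensorrt/onnx installed; the model id
# here is only a placeholder, and the pipeline's docstring describes its
# exact setup steps.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    custom_pipeline="stable_diffusion_tensorrt_txt2img",
    torch_dtype=torch.float16,
)
```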
- Mysterious weights when training UNET
I was training the SDXL UNet base model with the diffusers library, which was going great until around step 210k, when the weights suddenly turned back to their original values and stayed that way. I also tried the EMA version, which didn't change at all. I also looked at the tensor weight values directly, which confirmed my suspicions.
- I Made Stable Diffusion XL Smarter by Finetuning It on Bad AI-Generated Images
Merging LoRAs is essentially taking a weighted average of the LoRA adapter weights. It's more common in other UIs.
diffusers is working on a PR for it: https://github.com/huggingface/diffusers/pull/4473
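As a rough illustration of that "weighted average" idea (not the API from the linked PR), two LoRA state dicts with matching keys can be blended as sketched below; `merge_lora_state_dicts` is a hypothetical helper name.

```python
import torch

def merge_lora_state_dicts(lora_a: dict, lora_b: dict,
                           weight_a: float = 0.5, weight_b: float = 0.5) -> dict:
    """Hypothetical helper: blend two LoRA state dicts by weighted average.

    Assumes both dicts share the same keys and tensor shapes.
    """
    merged = {}
    for key, tensor_a in lora_a.items():
        merged[key] = weight_a * tensor_a + weight_b * lora_b[key]
    return merged
```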
What are some alternatives?
stable-diffusion-webui-ux - Stable Diffusion web UI UX
stable-diffusion-webui - Stable Diffusion web UI
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stable-diffusion - A latent text-to-image diffusion model
stable-diffusion-webui-tensorrt
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
invisible-watermark - python library for invisible image watermark (blind image watermark)
sd-extension-system-info - System and platform info and standardized benchmarking extension for SD.Next and WebUI
stable-diffusion-webui-directml - Stable Diffusion web UI
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.