| | Stable-Diffusion-ONNX-FP16 | diffusers |
|---|---|---|
| Mentions | 10 | 266 |
| Stars | 267 | 22,881 |
| Growth | - | 3.8% |
| Activity | 6.8 | 9.9 |
| Latest commit | 7 months ago | 4 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Stable-Diffusion-ONNX-FP16
- Blender 3.6 (huge AMD gains with HIP RT) reaches Beta Phase 3
There are a few versions of Stable Diffusion that someone ported and you can use. I think I used https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16 with my AMD card; then there's also something like SHARK Stable Diffusion.
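For context, the converted FP16 ONNX models from that repo are typically run on AMD cards under Windows through ONNX Runtime's DirectML provider. A minimal sketch using diffusers' OnnxStableDiffusionPipeline, where the local model folder is just a placeholder for whatever the conversion scripts produced:

```python
# Hedged sketch: running an ONNX FP16 Stable Diffusion model on an AMD GPU
# under Windows via onnxruntime-directml. "./model_onnx_fp16" is a placeholder
# for a folder produced by the repo's conversion scripts.
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./model_onnx_fp16",
    provider="DmlExecutionProvider",  # DirectML backend for AMD GPUs on Windows
)
image = pipe("a watercolor painting of a lighthouse", num_inference_steps=25).images[0]
image.save("out.png")
```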
- another UI for Stable Diffusion for Windows and AMD, now with LoRA and Textual Inversions
Yes, the FP16 stuff in this latest release is a big part of that, and should tentatively support 8GB cards. I haven't pushed things quite as far as https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16 yet, but support for 4GB cards is possible and we've been discussing how it works: https://github.com/ssube/onnx-web/issues/241
- Seeking Advice on Optimizing Stable Diffusion with AMD Graphics Card
- AMD
- StableDiffusion on AMD
- What is the cheapest Nvidia GPU that can run StableDiffusion well?
Second option is: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16
- Stable Diffusion using CPU only
- What's the best way to use Stable Diffusion on Windows with an AMD 6700 XT?
- Can’t convert ckpt to ONNX folder using NMKD SD GUI
- Webui for AMD GPU / windows
diffusers
- StableDiffusionSafetyChecker
- 🧨 diffusers 0.24.0 is out with Kandinsky 3.0, IP Adapters, and others
- What am I missing here? Where's the RND coming from?
I'm missing something about the random factor in the sample code from https://github.com/huggingface/diffusers/blob/main/README.md
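In the README-style samples, the randomness comes from the initial latent noise; if no generator is passed, a fresh random seed is drawn on every call. A minimal sketch of pinning it down with a seeded torch.Generator (the model ID and prompt are just placeholders):

```python
# Minimal sketch: the "random factor" is the initial latent noise.
# Passing a seeded torch.Generator makes the output reproducible.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("an astronaut riding a horse", generator=generator).images[0]
image.save("astronaut.png")
```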
- T2IAdapter+ControlNet at the same time
Hey people, I noticed that combining these two methods in a single forward pass increases the controllability of the generation quite a bit. I was kind of puzzled that ControlNet sometimes yielded better results than T2IAdapter and sometimes it was the other way around, so I decided to test both at the same time, and the results were quite nice. Some visuals and more motivation here: https://github.com/huggingface/diffusers/issues/5847 It was already merged here: https://github.com/huggingface/diffusers/pull/5869
- Won't you benchmark me?
Open Parti Prompts: The better way to evaluate diffusion models (repo)
- kohya_ss error. How do I solve this?
You have disabled the safety checker by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254.
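That text is the warning diffusers emits when the safety checker is explicitly disabled. A minimal sketch of the call that triggers it (the model ID is a placeholder); passing requires_safety_checker=False as well suppresses the warning in recent diffusers versions:

```python
# Minimal sketch: constructing a pipeline with the NSFW safety checker disabled,
# which is what produces the warning quoted above.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,            # disables the filter and triggers the warning
    requires_safety_checker=False,  # suppresses the warning in recent versions
)
```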
- Making a ControlNet inpaint for sdxl
- Stable Diffusion Gets a Major Boost with RTX Acceleration
For developers, TensorRT support also exists for the diffusers library via community pipelines. [1] It's limited, but if you're only supporting a subset of features, it can help.
In general, these insane speed boosts come at the cost of bleeding-edge features.
[1] https://github.com/huggingface/diffusers/blob/28e8d1f6ec82a6...
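As a rough illustration of how community pipelines are loaded, the sketch below uses the custom_pipeline argument; the pipeline name "stable_diffusion_tensorrt_txt2img", the model ID, and the scheduler choice are taken from the diffusers community examples and may differ across versions (TensorRT itself must be installed, and the first run spends time building engines):

```python
# Hedged sketch: loading a TensorRT community pipeline via `custom_pipeline`.
# Check the diffusers examples/community folder for the exact pipeline name
# and its extra requirements in your version.
import torch
from diffusers import DDIMScheduler, DiffusionPipeline

scheduler = DDIMScheduler.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="scheduler"
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    custom_pipeline="stable_diffusion_tensorrt_txt2img",
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to("cuda")  # the first call builds TensorRT engines, which takes a while

image = pipe("a photograph of Mt. Fuji during cherry blossom").images[0]
```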
- Mysterious weights when training UNET
I was training the SDXL UNet base model with the diffusers library, which was going great until around step 210k, when the weights suddenly reverted to their original values and stayed that way. I also tried the EMA version, which didn't change at all. I also looked at the tensors' weight values directly, which confirmed my suspicions.
- I Made Stable Diffusion XL Smarter by Finetuning It on Bad AI-Generated Images
Merging LoRAs is essentially taking a weighted average of the LoRA adapter weights. It's more common in other UIs. diffusers is working on a PR for it: https://github.com/huggingface/diffusers/pull/4473
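As a rough sketch of the weighted-average idea outside any particular UI (the file names and the 0.5/0.5 weights are made up, and it assumes both adapters share the same keys and tensor shapes):

```python
# Hedged sketch: merge two LoRA adapters by taking a weighted average of
# their tensors. Assumes both safetensors files use identical keys and shapes.
from safetensors.torch import load_file, save_file

lora_a = load_file("style_a.safetensors")
lora_b = load_file("style_b.safetensors")
w_a, w_b = 0.5, 0.5  # merge weights, chosen arbitrarily here

merged = {key: w_a * lora_a[key] + w_b * lora_b[key] for key in lora_a}
save_file(merged, "merged_lora.safetensors")
```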
What are some alternatives?
SHARK - High Performance Machine Learning Distribution
stable-diffusion-webui - Stable Diffusion web UI
onnx-web - web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
stable-diffusion - A latent text-to-image diffusion model
Orochi
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
stylegan2-projecting-images - Projecting images to latent space with StyleGAN2.
invisible-watermark - python library for invisible image watermark (blind image watermark)
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
sd-webui-additional-networks