diffusion-ui vs diffusers
|  | diffusion-ui | diffusers |
|---|---|---|
| Mentions | 11 | 266 |
| Stars | 139 | 22,646 |
| Growth | - | 2.3% |
| Activity | 4.6 | 9.9 |
| Latest commit | 25 days ago | about 22 hours ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
diffusion-ui
- Who needs Photoshop generative AI when we have AUTO1111?
You can use diffusion-ui on top of automatic1111 for easy outpainting. Run automatic1111 with `--cors-allow-origins=http://127.0.0.1:5173,https://diffusionui.com`. Then select the automatic1111 backend, upload an image, and use the mouse scroll wheel to zoom out.
- Show HN: InvokeAI, an open source Stable Diffusion toolkit and WebUI
I see you've got my GUI, https://github.com/leszekhanusz/diffusion-ui, but it seems to be linked to a completely unrelated face-swapping interface?
- Anyone using the stable-diffusion-webui repo by automatic1111 as an API?
Yes, check out diffusion-ui.
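For context on the API route mentioned above, here is a minimal sketch of calling the AUTOMATIC1111 webui's txt2img endpoint directly from Python (assumptions: the webui was launched with `--api` and is listening on the default port 7860; the prompt is a placeholder):

```python
# Minimal sketch: call the AUTOMATIC1111 webui API directly.
# Assumes the webui was started with --api and listens on 127.0.0.1:7860.
import base64
import requests

payload = {
    "prompt": "a watercolor fox in a forest",  # placeholder prompt
    "steps": 20,
    "width": 512,
    "height": 512,
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()

# The API returns generated images as base64-encoded PNGs.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```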
- DiffusionUI responsive frontend working with the automatic1111 fork
I modified my GUI Stable Diffusion frontend to be able to use the automatic1111 fork as a backend.
- Can we please make a general update on all the "most important" news/repos available?
It's not really well known at the moment, but you also have my DiffusionUI interface, which is based on diffusers, with a backend where you can disable the NSFW filter.
- Show HN: Guided inpainting with Stable Diffusion using DiffusionUI
- Presenting DiffusionUI, a web GUI for Stable Diffusion backends [P]
GitHub: https://github.com/leszekhanusz/diffusion-ui
- SD in-painting hair test
One way to do it is to use diffusionui, a web interface I made in Vue. Here is a crappy video demo showing the process.
- What's the best install of Stable Diffusion right now?
If you want to do inpainting, you can try my diffusionui interface. I'll add a feature soon to save the generated images in local storage in the browser. Here is a small demo.
- Having fun with my proof-of-concept of a web interface to do directed inpainting with stable-diffusion
It's still a work in progress, but you can already play with it. Check it out on GitHub.
diffusers
- StableDiffusionSafetyChecker
- 🧨 diffusers 0.24.0 is out with Kandinsky 3.0, IP Adapters, and others
- What am I missing here? Where's the RND coming from?
I'm missing something about the random factor in the sample code from https://github.com/huggingface/diffusers/blob/main/README.md
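The randomness in that sample comes from the initial latent noise. A minimal sketch of pinning it down with a seeded generator (the model ID and prompt here are placeholders, not the exact README snippet):

```python
# Sketch: fix the random factor by seeding the generator that draws the initial latents.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Same seed -> same initial noise -> same image for a given prompt and settings.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("a photo of an astronaut riding a horse on mars", generator=generator).images[0]
image.save("astronaut.png")
```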
- T2IAdapter+ControlNet at the same time
Hey people, I noticed that combining these two methods in a single forward pass increases the controllability of the generation quite a bit. I was kind of puzzled that ControlNet yielded better results than T2IAdapter in some cases and the other way around in others, so I decided to test both at the same time, and the results were quite nice. Some visuals and more motivation here: https://github.com/huggingface/diffusers/issues/5847 And it was already merged here: https://github.com/huggingface/diffusers/pull/5869
- Won't you benchmark me?
Open Parti Prompts: The better way to evaluate diffusion models (repo)
- kohya_ss error. How do I solve this?
You have disabled the safety checker by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
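That warning is emitted when the pipeline is loaded with the checker explicitly disabled. A minimal sketch of the call that triggers it (the model ID is a placeholder, and the `requires_safety_checker` line is an assumption about suppressing the warning in recent versions):

```python
# Sketch: loading a pipeline with the safety checker disabled, which logs the warning above.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,            # disables the NSFW filter; diffusers logs the warning
    requires_safety_checker=False,  # assumption: also suppresses the warning in recent versions
)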
- Making a ControlNet inpaint for sdxl
- Stable Diffusion Gets a Major Boost with RTX Acceleration
For developers, TensorRT support also exists for the diffusers library via community pipelines. [1] It's limited, but if you're only supporting a subset of features, it can help.
In general, these insane speed boosts come at the cost of bleeding-edge features.
[1] https://github.com/huggingface/diffusers/blob/28e8d1f6ec82a6...
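As a rough illustration of the community-pipeline route, diffusers can load such pipelines by name via `custom_pipeline`. A hedged sketch of the TensorRT txt2img one (the exact pipeline name, model ID, and required extras like `tensorrt`/`onnx` are assumptions and may differ from the pinned commit in [1]):

```python
# Hedged sketch: load the TensorRT community pipeline via custom_pipeline.
# Assumes an NVIDIA GPU plus the tensorrt/onnx extras; details may differ per diffusers version.
import torch
from diffusers import DDIMScheduler, DiffusionPipeline

model_id = "stabilityai/stable-diffusion-2-1"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline="stable_diffusion_tensorrt_txt2img",  # community pipeline name (assumption)
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photograph of Mt. Fuji during cherry blossom").images[0]
image.save("fuji.png")
```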
- Mysterious weights when training UNET
I was training the SDXL UNet base model with the diffusers library, which was going great until around step 210k, when the weights suddenly turned back to their original values and stayed that way. I also tried with the EMA version, which didn't change at all. I also looked at the tensors' weight values directly, which confirmed my suspicions.
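A hypothetical way to confirm that suspicion: load the base UNet and the checkpoint's UNet and compare their parameters directly (the checkpoint path below is a placeholder):

```python
# Hypothetical check: did the checkpointed UNet weights actually revert to the base values?
import torch
from diffusers import UNet2DConditionModel

base = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
ckpt = UNet2DConditionModel.from_pretrained(
    "path/to/checkpoint-210000", subfolder="unet"  # placeholder path
)

with torch.no_grad():
    max_diff = max(
        (p_base - p_ckpt).abs().max().item()
        for p_base, p_ckpt in zip(base.parameters(), ckpt.parameters())
    )
print(f"max abs weight difference vs base: {max_diff:.6f}")  # ~0 means the weights reverted
```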
- I Made Stable Diffusion XL Smarter by Finetuning It on Bad AI-Generated Images
Merging LoRAs is essentially taking a weighted average of the LoRA adapter weights. It's more common in other UIs.
diffusers is working on a PR for it: https://github.com/huggingface/diffusers/pull/4473
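To make the "weighted average" point concrete, here is a small illustrative sketch (not the API from that PR) of merging two LoRA state dicts key by key:

```python
# Illustrative sketch only: a LoRA "merge" as a weighted average of matching adapter tensors.
import torch

def merge_lora_state_dicts(sd_a: dict, sd_b: dict, weight_a: float = 0.5) -> dict:
    """Average two LoRA state dicts; assumes both contain the same keys and shapes."""
    weight_b = 1.0 - weight_a
    return {key: weight_a * sd_a[key] + weight_b * sd_b[key] for key in sd_a}
```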
What are some alternatives?
glid-3-xl-stable - stable diffusion training
stable-diffusion-webui - Stable Diffusion web UI
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
stable-diffusion - A latent text-to-image diffusion model
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
diffusion-ui-backend - Backend for the diffusion-ui frontend
invisible-watermark - python library for invisible image watermark (blind image watermark)
diffusers-interpret - Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.