| | stable-diffusion-webui-sonar | adetailer |
|---|---|---|
| Mentions | 3 | 43 |
| Stars | 113 | 3,756 |
| Growth | - | - |
| Activity | 5.1 | 9.5 |
| Last commit | 7 months ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
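The site does not publish its exact activity formula, but the description above (recent commits weighted higher than older ones) can be sketched as an exponentially decayed sum over commit ages. The `half_life_days` parameter and the decay form are assumptions for illustration, not the site's actual method:

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=30):
    """Hypothetical activity metric: each commit contributes a weight
    that halves every `half_life_days` of age, so recent commits count
    for more than older ones."""
    score = 0.0
    for d in commit_dates:
        age_days = (today - d).days
        score += 0.5 ** (age_days / half_life_days)
    return score

today = date(2024, 1, 1)
recent = [today - timedelta(days=n) for n in (1, 2, 3)]
old = [today - timedelta(days=n) for n in (300, 310, 320)]

# Three recent commits outweigh three commits from ~10 months ago.
print(activity_score(recent, today) > activity_score(old, today))
```

Under this kind of weighting, a project with a burst of recent commits (like adetailer, last commit 7 days ago) scores higher than one with the same total commits spread further in the past (like sonar, last commit 7 months ago).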
stable-diffusion-webui-sonar
-
What are your favorite small tweaks to make? I'll go first
Haven’t tried it but it’s here.
-
Where can I create variations of a previously generated image?
If you want to explore variants of an image, your best bet would be Sonar -> https://github.com/Kahsolt/stable-diffusion-webui-sonar/
-
Are you people able to follow all this stuff up?
This is sonar https://github.com/Kahsolt/stable-diffusion-webui-sonar
adetailer
- WHY IS THIS HAPPENING MAKE IT STOP I HATE IT
-
Launch HN: Rubbrband (YC W23) – Deformity detection for AI-generated images
https://github.com/Bing-su/adetailer works very well on faces and hands.
This is a good solution for all the use cases I’ve dealt with.
Are there widespread use cases where knowing that a deformity exists is required, rather than just fixing it?
- Automatically skip GFPGAN on faces that are "too low quality"
-
How to diffuse better faces?
I've found that using ADetailer (https://github.com/Bing-su/adetailer, with their recommended advanced settings and face_yolov8n.pt) and Dynamic Thresholding (CFG set to 12 and Mimic to 7) has vastly improved my face renders. (https://github.com/mcmonkeyprojects/sd-dynamic-thresholding) GL!
- Is there anything which automatically recognizes when something was ill-generated (such as a face, or fingers etc.) and automatically applies the inpainting mask to that area?
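This question describes exactly what ADetailer does: detect problem regions (faces, hands) and apply an inpainting mask to them automatically. The core detect-then-mask step can be sketched as rasterizing detection bounding boxes into a binary mask. This is a minimal illustration only; ADetailer's real pipeline runs YOLO detection models such as face_yolov8n.pt, and the box coordinates here are made up:

```python
def build_inpaint_mask(width, height, boxes):
    """Rasterize detection boxes (x0, y0, x1, y1) into a binary mask:
    1 inside any box (the region to inpaint), 0 elsewhere.
    Boxes are clamped to the image bounds."""
    mask = [[0] * width for _ in range(height)]
    for x0, y0, x1, y1 in boxes:
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                mask[y][x] = 1
    return mask

# Hypothetical detector output: one 3x3 face box in an 8x8 image.
mask = build_inpaint_mask(8, 8, [(2, 2, 5, 5)])
```

The resulting mask is what gets handed to the inpainting pass, so only the detected region is regenerated while the rest of the image is left untouched.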
-
In the Automatic1111 Web UI, is it possible to get ADetailer working inside Deforum?
ADetailer: https://github.com/Bing-su/adetailer
- Mind-blowing results (LORA/Checkpoint mix)
-
Faces look strange with multiple people
After Detailer will give each of their faces a facelift
-
A little bit of party after fighting each other in Smash bros (Text2img, controlnet, regional prompter, adetailer)
This is an amazing and easy workflow for production, but the LoRA part of adetailer never works for me; I don't know why.
- Creating a randomized crowd with various expression through txt2img with adetailer + dynamic prompt extension
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
ComfyUI_Cutoff - cutoff implementation for ComfyUI
sd-webui-prompt-all-in-one - This is an extension based on sd-webui, aimed at improving the user experience of the prompt/negative-prompt input box. It has a more intuitive and powerful input interface and provides automatic translation, history, and bookmarking functions.
ddetailer
sd-webui-neutral-prompt - Collision-free AND keywords for a1111 webui!
stable-diffusion-webui-composable-lora - This extension replaces the built-in LoRA forward procedure.
stable-diffusion-webui-daam - DAAM for Stable Diffusion Web UI
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
sd-dynamic-thresholding - Dynamic Thresholding (CFG Scale Fix) for Stable Diffusion (StableSwarmUI, ComfyUI, and Auto WebUI)
ultimate-upscale-for-automatic1111
sd-webui-prompt-format - An Extension for Automatic1111 Webui that helps cleaning up prompts
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation