stable-diffusion-webui-rembg vs sd-webui-controlnet

| | stable-diffusion-webui-rembg | sd-webui-controlnet |
|---|---|---|
| Mentions | 16 | 247 |
| Stars | 1,075 | 15,979 |
| Growth | - | - |
| Activity | 3.7 | 9.6 |
| Last commit | 27 days ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-rembg
- Here's a resource I found very useful after generating characters and objects that I wanted to isolate as transparent images.
- My First Share - Turning my Students into Pixar Versions of Themselves :)
- Comparing one-click solutions for removing backgrounds
  Extension REMBG for Automatic1111
- Best way to mask images automatically?
  This tool will mask the output image of your generations: https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg
- HELP! What options/extensions are available for auto masking in A1111?
- Create stickers of your dreams with this LoRA
  Remove the background with your favourite image editing software (I used Photoshop) or with rembg, for example.
- Darkest Dungeon v2 (Lora)
  With the following prompt, we get a good image of Pikachu in the DD style, which can then have the background made transparent with https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg
- Inpaint from a Sample Image to Fill the Mask
- Full body LORA?
  Also, this auto removes backgrounds
- Tutorial: Creating a Consistent Character as a Textual Inversion Embedding
  So, the only difference from the listed method is to add an extra step to preprocessing. Before captioning images, I used this extension to batch remove backgrounds. Then, I took those PNGs and used Photoshop to batch save them as JPEGs, resulting in cutout images of the subject with white backgrounds. I then proceeded as listed in that comment.
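The background-removal workflow in these mentions can be reproduced with the standalone rembg library that this extension wraps. Below is a minimal sketch, assuming the rembg and Pillow packages are installed; the folder names are hypothetical, and the flatten-to-white step stands in for the Photoshop batch step described in the tutorial above.

```python
from pathlib import Path

from PIL import Image
from rembg import remove

SRC = Path("training_images")   # hypothetical input folder of PNGs
DST = Path("cutouts_white_bg")  # hypothetical output folder
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.png"):
    with Image.open(path) as img:
        # remove() returns an RGBA cutout with the background as transparency.
        cutout = remove(img.convert("RGBA"))
        # Flatten the cutout onto white and save as JPEG, mirroring the
        # "batch save as JPEGs in Photoshop" step from the tutorial.
        white = Image.new("RGBA", cutout.size, (255, 255, 255, 255))
        flat = Image.alpha_composite(white, cutout).convert("RGB")
        flat.save(DST / f"{path.stem}.jpg", quality=95)
```

For the auto-masking questions above, remove() also accepts only_mask=True, which returns the grayscale mask instead of the cutout.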
sd-webui-controlnet
- OpenPose ControlNet: A Beginner's Guide
  Installing the ControlNet extension, whether in Google Colab, on a Windows PC, or on a Mac, is a crucial step for controlling human pose details in Stable Diffusion, and keeping the extension updated is necessary for stable, reliable results with the OpenPose model. To install the v1.1 ControlNet extension, go to the Extensions tab and install it from this URL: https://github.com/Mikubill/sd-webui-controlnet. If you already have v1 ControlNets installed, delete that folder from stable-diffusion-webui/extensions/ first.
- StyleAligned node for ComfyUI
  1.1.420 Image-wise ControlNet and StyleAlign
- PATCHFUSION is really impressive. High-resolution depth maps in 16-bit. I've been waiting for this. https://github.com/zhyever/PatchFusion
  I opened a request thread on the ControlNet GitHub where you can add your support: https://github.com/Mikubill/sd-webui-controlnet/issues/2319
- Going to lose my mind at this point with this problem
- Samples of style-aligned
- Is it possible to outpaint with SD or SDXL as easily as with Photoshop? (no prompts)
  It has been possible for 7 months now.
- Reference Only Broken (Can someone with a working Reference Only CN upload their extension folder?)
- Web app prototype to create controlnet segmentation maps for Stable Diffusion
  I sometimes use a very similar technique in Cinema 4D (here is a link to a c4d file with preset materials referencing the proper colors for Semantic Segmentation, if any other c4d user wants to try it), but yours is a much more accessible solution, as it's free and available online.
- Dalle-3 Examples
  There are models available that give you more control, in some senses at least.
  For example, you can use Stable Diffusion with 'ControlNet' [1], where you can input an 'openpose' skeleton to choose the pose of people in the scene (see the sketch after this list for a minimal example).
  There's also a 'Regional Prompter' [2], which lets you use different prompts for different areas of the image, giving you some control over the composition.
  You can also use 'inpainting' to regenerate select parts of your image if, for example, you don't like the shape of the clouds.
  Of course, this stuff isn't perfect; for example, you'll sometimes get hands with the wrong number of fingers, no matter what you specify :)
  [1] https://github.com/Mikubill/sd-webui-controlnet
- ControlNet SDXL for Automatic1111-WebUI official release: sd-webui-controlnet 1.1.400
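The 'openpose' conditioning mentioned in the Dalle-3 item can be sketched with the Hugging Face diffusers library rather than the WebUI extension itself. This is a minimal illustration, assuming a CUDA GPU, the commonly published model IDs below, and a precomputed pose image; none of these come from this page.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A precomputed openpose skeleton (stick figure on black); hypothetical file.
pose = load_image("pose.png")

# Commonly published model IDs; assumptions, not specified by this page.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The pose image fixes the composition; the prompt fills in everything else.
result = pipe(
    "a chef in a kitchen, photorealistic",
    image=pose,
    num_inference_steps=30,
).images[0]
result.save("chef.png")
```

In the WebUI extension, the equivalent workflow is to enable a ControlNet unit, select the openpose preprocessor and a matching openpose model, and drop the reference image into that unit.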
What are some alternatives?
cloth-segmentation - This repo contains code and a pre-trained model for clothes segmentation.
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
rembg - Rembg is a tool to remove image backgrounds
openpose-editor - Openpose Editor for AUTOMATIC1111's stable-diffusion-webui
stable-diffusion-webui - Stable Diffusion web UI
T2I-Adapter - T2I-Adapter
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
ControlNet - Let us control diffusion models!
canvas-zoom - Zoom and pan functionality for the Stable Diffusion web UI canvas
stable-diffusion-webui-colab - stable diffusion webui colab
U-2-Net - The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."