T2I-Adapter vs Uni-ControlNet

| | T2I-Adapter | Uni-ControlNet |
|---|---|---|
| Mentions | 25 | 5 |
| Stars | 3,158 | 508 |
| Growth | 2.9% | - |
| Activity | 7.9 | 5.3 |
| Latest commit | 6 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
T2I-Adapter
- Help me understand ControlNet vs T2I-adapter vs CoAdapter
  I've found some documentation here: https://github.com/TencentARC/T2I-Adapter/blob/SD/docs/coadapter.md
- Color-Diffusion: using diffusion models to colorize black and white images
  Yeah, if you have a high-res image, you can get color info at super low res and then regenerate the colors at high res with another model (though this isn't an efficient approach at all).
  https://github.com/TencentARC/T2I-Adapter
  I've also seen a ControlNet do this.
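The low-res color-transfer idea from the comment above can be sketched in plain Python. Note this is purely illustrative: no diffusion model is involved, the "colorized" low-res image is simply given as input, and all function names are made up for this sketch. The trick is to upsample only the chroma channels of the low-res colorization and recombine them with the full-resolution luminance.

```python
# Sketch of the low-res colorization idea: colorize a tiny downscaled
# copy (the colorizer itself is out of scope here), then upsample only
# the chroma channels and recombine with the full-res luminance.
# Images are nested lists of pixel values; no external libraries.

def rgb_to_ycbcr(r, g, b):
    # standard JPEG/JFIF luma-chroma conversion
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

def recolor_highres(gray, lowres_color):
    """gray: HxW luminance values; lowres_color: hxw RGB tuples."""
    H, W = len(gray), len(gray[0])
    h, w = len(lowres_color), len(lowres_color[0])
    out = []
    for i in range(H):
        row = []
        for j in range(W):
            # nearest-neighbour upsample of the low-res chroma
            r, g, b = lowres_color[i * h // H][j * w // W]
            _, cb, cr = rgb_to_ycbcr(r, g, b)
            # keep the original high-res luminance, borrow the chroma
            row.append(ycbcr_to_rgb(gray[i][j], cb, cr))
        out.append(row)
    return out

# toy example: 4x4 grayscale image, 2x2 "colorized" low-res version
# (left half red, right half blue)
gray = [[50, 60, 70, 80] for _ in range(4)]
low = [[(200, 30, 30), (30, 30, 200)],
       [(200, 30, 30), (30, 30, 200)]]
result = recolor_highres(gray, low)
```

A real pipeline would replace the nearest-neighbour upsample with bilinear or guided upsampling, and the fine detail comes entirely from the original luminance, which is why the colorizer can run at very low resolution.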
- Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
- Reflected Diffusion Models
  https://github.com/TencentARC/T2I-Adapter
  It works with the Mikubill ControlNet plugin for A1111.
- Is it possible to replace objects with an already segmented image by ControlNet?
- ControlNet v1.1 has been released
  These are from Tencent: https://github.com/TencentARC/T2I-Adapter
- Can someone explain some of these newer ControlNet models and preprocessors? Clipvision? Color? Pidinet? Binary?
  I think they're for T2I-Adapter models, which can be downloaded here.
- T2I-Adapter: Text-to-Image Models with Unprecedented Control
- How do I combine two images using AUTOMATIC1111?
  Apart from ControlNet, T2I-Adapter works quite well for this. https://github.com/TencentARC/T2I-Adapter
- T2I-Adapter creates CoAdapter (inspired by Composer)
Uni-ControlNet
- Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
  Code: https://github.com/ShihaoZhaoZSH/Uni-ControlNet
  Uni-ControlNet is a novel controllable diffusion model that allows different local and global controls to be used simultaneously, in a flexible and composable manner, within a single model.
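To make the "composable local and global controls" claim concrete, here is a toy plain-Python sketch of the idea, not Uni-ControlNet's actual code: all names, shapes, and values are illustrative. The key point is that local condition maps (edges, depth, pose, ...) are stacked into one multi-channel input for a single shared encoder, and one global conditioning vector is injected at every spatial position, so adding a new control means adding a channel, not training a new model.

```python
# Toy sketch of Uni-ControlNet's composition idea (illustrative only):
# many local condition maps go through ONE shared path, and a single
# global vector conditions every spatial position.

def compose_local(controls):
    """Stack per-pixel local control maps (each H x W) channel-wise."""
    H, W = len(controls[0]), len(controls[0][0])
    return [[[c[i][j] for c in controls] for j in range(W)]
            for i in range(H)]

def inject_global(features, global_vec):
    """Add a global conditioning vector at every spatial position."""
    return [[[f + g for f, g in zip(px, global_vec)] for px in row]
            for row in features]

# two hypothetical local controls on a 2x2 image: an edge map and a
# depth map, composed with a 2-dim "global" vector
edges = [[1, 0], [0, 1]]
depth = [[0.5, 0.5], [0.2, 0.2]]
fused = inject_global(compose_local([edges, depth]), [0.1, 0.1])
```

In the real model the stacked maps would feed a learned feature-injection network and the global control comes from an image embedding, but the composition itself is this simple: one input, arbitrarily many controls.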
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
ComfyUI - The most powerful and modular Stable Diffusion GUI, API and backend with a graph/nodes interface.
ControlNet - Let us control diffusion models!
style2paints - sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)
Color-diffusion - A diffusion model to colorize black and white images
Latent-Paint-Mesh - NVDiffrast-based implementation of Latent-Paint
ControlNet-v1-1-nightly - Nightly release of ControlNet 1.1
SPADE - Semantic Image Synthesis with SPADE
stable-dreamfusion - Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.