T2I-Adapter vs Latent-Paint-Mesh

| | T2I-Adapter | Latent-Paint-Mesh |
|---|---|---|
| Mentions | 25 | 2 |
| Stars | 3,158 | 39 |
| Stars growth | 2.9% | - |
| Activity | 7.9 | 2.5 |
| Last commit | 6 months ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates a project is among the top 10% of the most actively developed projects we are tracking.
T2I-Adapter mentions

- Help me understand ControlNet vs T2I-Adapter vs CoAdapter
  "I've found some documentation here: https://github.com/TencentARC/T2I-Adapter/blob/SD/docs/coadapter.md"
- Color-Diffusion: using diffusion models to colorize black and white images
  "Yeah, if you have a high-res image, you can get color info at super low res and then regenerate the colors at high res with another model (though this isn't an efficient approach at all). https://github.com/TencentARC/T2I-Adapter. I've also seen a ControlNet do this."
- Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
- Reflected Diffusion Models
  "https://github.com/TencentARC/T2I-Adapter. It works with the Mikubill ControlNet plugin for A1111."
- Is it possible to replace objects with an already segmented image by ControlNet?
- ControlNet v1.1 has been released
  "These are from Tencent: https://github.com/TencentARC/T2I-Adapter"
- Can someone explain some of these newer ControlNet models and preprocessors? CLIP Vision? Color? PiDiNet? Binary?
  "I think they're for T2I-Adapter models, which can be downloaded here."
- T2I-Adapter: Text-to-Image Models with Unprecedented Control
- How do I combine two images using AUTOMATIC1111?
  "Apart from ControlNet, the T2I-Adapter works quite well for this. https://github.com/TencentARC/T2I-Adapter"
- T2I-Adapter creates CoAdapter (inspired by Composer)
Latent-Paint-Mesh mentions
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
PaddleNLP - Easy-to-use and powerful NLP and LLM library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including text classification, neural search, question answering, information extraction, document intelligence, sentiment analysis, etc.
ControlNet - Let us control diffusion models!
style2paints - sketch + style = paints (TOG 2018 / SIGGRAPH Asia 2018)
Color-diffusion - A diffusion model to colorize black and white images
ControlNet-v1-1-nightly - Nightly release of ControlNet 1.1
SPADE - Semantic Image Synthesis with SPADE
Uni-ControlNet - [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
stable-dreamfusion - Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.