T2I-Adapter vs style2paints

| | T2I-Adapter | style2paints |
|---|---|---|
| Mentions | 25 | 24 |
| Stars | 3,158 | 17,759 |
| Growth | 2.9% | - |
| Activity | 7.9 | 0.0 |
| Latest commit | 6 months ago | 9 months ago |
| Language | Python | JavaScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
T2I-Adapter
- Help me understand ControlNet vs T2I-adapter vs CoAdapter
I've found some documentation here https://github.com/TencentARC/T2I-Adapter/blob/SD/docs/coadapter.md
- Color-Diffusion: using diffusion models to colorize black and white images
Yeah, if you have a high-res image, you can extract the color info at a very low resolution and then regenerate the colors at high res with another model (though this isn't an efficient approach at all).
https://github.com/TencentARC/T2I-Adapter
I've also seen a ControlNet do this.
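The comment above amounts to a chroma-transfer trick: keep the sharp luminance from the original high-resolution black-and-white image, and take only the color channels from a low-resolution colorized copy. A minimal sketch with Pillow, assuming you already obtained the low-res colorized image from some model (that step is not shown):

```python
from PIL import Image

def recolor_highres(gray_highres: Image.Image, color_lowres: Image.Image) -> Image.Image:
    """Merge high-res luminance with chroma from a low-res colorized copy.

    gray_highres: the original high-resolution black-and-white image (RGB or L).
    color_lowres: the same image colorized at low resolution, e.g. by a
    diffusion model -- how it was produced is outside this sketch.
    """
    # Upscale the colorized image to the target resolution.
    color_up = color_lowres.resize(gray_highres.size, Image.LANCZOS)
    # Keep the sharp Y (luminance) channel from the original...
    y, _, _ = gray_highres.convert("YCbCr").split()
    # ...and take Cb/Cr (the color) from the upscaled colorized version.
    _, cb, cr = color_up.convert("YCbCr").split()
    return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")
```

Because human vision is far less sensitive to chroma resolution than to luminance resolution, the blurriness of the upscaled color channels is mostly hidden by the sharp original luminance.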
- Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
- Reflected Diffusion Models
https://github.com/TencentARC/T2I-Adapter
It works with the Mikubill ControlNet plugin for A1111.
- Is it possible to replace objects with an already segmented image by ControlNet?
- ControlNet v1.1 has been released
These are from Tencent: https://github.com/TencentARC/T2I-Adapter
- Can someone explain some of these newer controlnet models and preprocessors? Clipvision? Color? Pidinet? Binary?
I think they're for T2I-adapter models, which can be downloaded here.
- T2I-Adapter: Text-to-Image Models with Unprecedented Control
- How do I combine two images using AUTOMATIC1111?
Apart from Controlnet, T2I Adapter works quite well for this. https://github.com/TencentARC/T2I-Adapter
- T2IAdapter creates Coadapter(inspired by Composer)
style2paints
- ControlNet v1.1 has been released
This is lineart, the sketch model is still not here: https://github.com/lllyasviel/style2paints/tree/master/V5_preview
- Help me gather use cases for creative and interactive uses of AI art
- ControlNet allows me to color my drawings using a model trained on my own color drawings… Neat!
Older versions of style2paints have been available for years: https://github.com/lllyasviel/style2paints
- Is there any way to colorize black and white images in stable diffusion?
Link, for anyone interested.
- Using SD in concept art workflow
This could help with efficiency: https://github.com/lllyasviel/style2paints
- "Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization", Cho et al 2022 {KAIST} (anime colorizer that requests color annotations)
Basically this
- I decided to use an AI to color one of my favorite pages from chapter 168! Here are a few of the results:
For anyone curious, the AI I used to create the images above is "style2paints V4.5" and can be found in this repo. All credit goes to lllyasviel ;)
- I ran a Few Illustrations through a Deep Learning AI and Photoshop and It's Quite Impressive.
Step 1: Use Style2Paints to generate a bunch of shades of Illustration you want to paint.
- I made AI to color manga panels
u/PrizeAcanthisitta228 and others who want to give this a shot, consider also checking out Style2Paints https://github.com/lllyasviel/style2paints which combines AI with user input: it adds colours based on where you place your colour hints!
- Style2paints, an AI-driven lineart colorization tool
Yes, after Alice from Alice in Wonderland, there are a few manga examples.
Later, an old man in a non-anime style is shown: https://github.com/lllyasviel/style2paints/raw/master/temps/...
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
ControlNet - Let us control diffusion models!
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stable-dreamfusion - Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.
Color-diffusion - A diffusion model to colorize black and white images
ControlNet-v1-1-nightly - Nightly release of ControlNet 1.1
Latent-Paint-Mesh - NVDiffrast based implementation of Latent-Paint
SPADE - Semantic Image Synthesis with SPADE
Uni-ControlNet - [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models