| | ControlNet-v1-1-nightly | style2paints |
|---|---|---|
| Mentions | 31 | 24 |
| Stars | 4,349 | 17,786 |
| Growth | - | - |
| Activity | 8.4 | 0.0 |
| Latest commit | 6 months ago | 10 months ago |
| Language | Python | JavaScript |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ControlNet-v1-1-nightly
-
Making a ControlNet inpaint for sdxl
1- https://github.com/lllyasviel/ControlNet-v1-1-nightly/issues/89
-
AI Yearbook Photos Workflow with Stable Diffusion 1.5 Automatic1111
Install ControlNet and download the models you want to use (canny/depth/openpose should be enough for this): https://github.com/lllyasviel/ControlNet-v1-1-nightly
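The canny model mentioned above conditions generation on an edge map extracted from a reference image. As a rough illustration of that preprocessing step, here is a simplified gradient-magnitude (Sobel) edge detector in pure NumPy; it is a stand-in for the real Canny detector, which the extension (or OpenCV's `cv2.Canny`) computes for you in practice:

```python
import numpy as np

def edge_map(gray, thresh=0.25):
    """Simplified stand-in for a Canny edge detector:
    Sobel gradient magnitude, thresholded to a binary edge image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-8  # normalize to [0, 1]
    return (mag > thresh).astype(np.uint8) * 255

# Toy input: a dark square on a light background.
img = np.full((32, 32), 0.9)
img[8:24, 8:24] = 0.1
edges = edge_map(img)  # white pixels along the square's border
```

The resulting black-and-white edge image is what gets fed to the canny ControlNet model as the control input.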
-
can you downgrade Controlnet?
You can find the previous version on their GitHub. If you want a version older than v1.1, you will probably have to search for the right branch in the new repository and download that.
- Could you help me with this problem?
- Controlnet v1.1 Lineart
- Request for current ControlNet information
-
AI conceptual massing iterations within a context image with input control sketch
Stable Diffusion: https://huggingface.co/runwayml/stable-diffusion-v1-5 with ControlNet extension: https://github.com/lllyasviel/ControlNet-v1-1-nightly running on Automatic1111 web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Inpaint Anything (uses "Segment Anything") - Cool A1111 Extension not (yet) on the in App list
-
Architectural design using Stable Diffusion and ControlNet
Sure thing. After testing Midjourney a bit, I found that the quality of the images it produces is the best, but you have zero control over what is produced. The big breakthrough here is ControlNet, a Stable Diffusion extension that lets you control the initial noise based on image inputs (or at least that is my understanding). More on it here: https://github.com/lllyasviel/ControlNet-v1-1-nightly
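The commenter's intuition is close but not exact: ControlNet does not directly modify the initial noise. It runs the control image through a trainable copy of the model's encoder and adds the result into the frozen model's feature maps through zero-initialized ("zero convolution") layers, so before training the control branch has no effect at all. A toy NumPy sketch of that zero-convolution idea (shapes and names are illustrative, not the real architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w, b):
    # 1x1 convolution over a (H, W, C_in) feature map -> (H, W, C_out)
    return x @ w + b

# Frozen base-model features and a control-branch feature map (toy shapes).
base_features = rng.standard_normal((8, 8, 4))
control_features = rng.standard_normal((8, 8, 4))

# "Zero convolution": weights and bias start at exactly zero
# and are learned during fine-tuning.
w_zero = np.zeros((4, 4))
b_zero = np.zeros(4)

# The control signal is injected additively through the zero conv.
out = base_features + conv1x1(control_features, w_zero, b_zero)

# Before any training, the zero conv contributes nothing, so the
# combined output equals the frozen model's output exactly.
assert np.allclose(out, base_features)
```

That zero initialization is what lets ControlNet start from an unmodified Stable Diffusion and gradually learn to steer it with the control image.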
-
Setting Removed from ControlNET - "Skip img2img processing when using img2img initial image" - why?
https://github.com/lllyasviel/ControlNet-v1-1-nightly/issues/61 It seems it was removed as a duplicate.
style2paints
-
ControlNet v1.1 has been released
This is lineart, the sketch model is still not here: https://github.com/lllyasviel/style2paints/tree/master/V5_preview
- Help me gather use cases for creative and interactive uses of AI art
-
Controlnet allows me to color my drawings using a model trained on my own color drawings… Neat!
Older versions of style2paints have been available for years: https://github.com/lllyasviel/style2paints .
-
Is there any way to colorize black and white images in stable diffusion?
Link, for anyone interested.
-
Using SD in concept art workflow
this could help with more efficiency : https://github.com/lllyasviel/style2paints
-
"Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization", Cho et al 2022 {Kaist} (anime colorizer that requests color annotations)
Basically this
-
I decided to use an AI to color one of my favorite pages from chapter 168! Here are a few of the results:
For anyone curious, the AI I used to create the images above is "style2paints V4.5" and can be found in this repo. All credit goes to lllyasviel ;)
-
I ran a Few Illustrations through a Deep Learning AI and Photoshop and It's Quite Impressive.
Step 1: Use Style2Paints to generate a bunch of shaded versions of the illustration you want to paint.
-
I made AI to color manga panels
u/PrizeAcanthisitta228 and others who want to give this a shot, consider also checking out Style2Paints https://github.com/lllyasviel/style2paints which uses AI and user input to add colours based on where you choose to put colours!
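At a very high level, hint-based colorization fills each region of the lineart with the color of the nearest user hint. A toy nearest-hint sketch in NumPy (this illustrates the interaction model only; it is not the neural network Style2Paints actually uses, and the function name is hypothetical):

```python
import numpy as np

def colorize_with_hints(height, width, hints):
    """Assign every pixel the RGB color of its nearest hint point.
    `hints` is a list of ((row, col), (r, g, b)) user clicks."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Squared distance from every pixel to every hint: (n_hints, H, W).
    dists = np.stack(
        [(ys - r) ** 2 + (xs - c) ** 2 for (r, c), _ in hints]
    )
    nearest = np.argmin(dists, axis=0)  # index of the closest hint per pixel
    colors = np.array([color for _, color in hints], dtype=np.uint8)
    return colors[nearest]  # (H, W, 3) colorized canvas

# Two hints: red near the top-left, blue near the bottom-right.
hints = [((2, 2), (255, 0, 0)), ((10, 10), (0, 0, 255))]
canvas = colorize_with_hints(12, 12, hints)
```

The real tool propagates hints along learned region boundaries rather than raw pixel distance, which is why a single well-placed hint can fill an entire enclosed area.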
-
Style2paints, an AI driven lineart colorization tool
Yes, after Alice from Alice in Wonderland, there are a few manga guys.
Later, an old man in a non-anime style is presented: https://github.com/lllyasviel/style2paints/raw/master/temps/...
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
ControlNet - Let us control diffusion models!
T2I-Adapter - T2I-Adapter
sd-webui-reactor - Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111 SD WebUI, SD WebUI Forge, SD.Next, Cagliostro)
ControlNet-v1-1-nightly-colab - controlnet v1.1 colab
stable-dreamfusion - Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.
sd-webui-inpaint-anything - Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.
RobustVideoMatting - Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
rembg - Rembg is a tool to remove image backgrounds