stable-diffusion-reference-only
| | stable-diffusion-reference-only | stablediffusion-reference-only |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 114 | - |
| Growth | - | - |
| Activity | 9.1 | - |
| Latest commit | about 2 months ago | - |
| Language | Python | - |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-reference-only
- List of Stable Diffusion research software that I don't think has gotten widespread adoption.
- Stable Diffusion Reference Only: Image Prompt and Blueprint Jointly Guided Multi-Condition Diffusion Model for Secondary Painting
Code: https://github.com/aihao2000/stable-diffusion-reference-only
stablediffusion-reference-only
- Stable Diffusion Reference Only: Image Prompt and Blueprint Jointly Guided Multi-Condition Diffusion Model for Secondary Painting
Stable Diffusion and ControlNet have achieved excellent results in the field of image generation and synthesis. However, due to the granularity and method of their control, the efficiency improvement is limited for professional artistic creations such as comics and animation production, whose main work is secondary painting. In the current workflow, fixing characters and image styles often needs lengthy text prompts, and may even require further training through Textual Inversion, DreamBooth, or other methods, which is very complicated and expensive for painters. Therefore, we present a new method in this paper, Stable Diffusion Reference Only, an image-to-image self-supervised model that uses only two types of conditional images for precise control of generation to accelerate secondary painting. The first type of conditional image serves as an image prompt, supplying the necessary conceptual and color information for generation. The second type is the blueprint image, which controls the visual structure of the generated image. It is natively embedded into the original UNet, eliminating the need for ControlNet. We released all the code for the module and pipeline, and trained a controllable character line-art coloring model at https://github.com/aihao2000/stablediffusion-reference-only, which achieved state-of-the-art results in this field. This verifies the effectiveness of the structure and greatly improves the production efficiency of animation, comics, and fanworks.
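The abstract says the blueprint condition is "natively embedded into the original UNet" rather than routed through a separate ControlNet branch. A minimal conceptual sketch of how a condition can be embedded natively is to concatenate its latent with the noisy latent along the channel axis, so the UNet's first convolution sees both. The shapes and function name below are illustrative assumptions for exposition, not the repo's actual API or architecture:

```python
import numpy as np

def concat_blueprint_condition(noisy_latent, blueprint_latent):
    """Channel-wise concatenation of a blueprint latent onto the noisy latent.

    Both inputs are assumed to be (batch, channels, height, width) arrays in
    the same latent space; the combined tensor is fed to the UNet directly,
    so no auxiliary ControlNet branch is needed.
    """
    assert noisy_latent.shape[0] == blueprint_latent.shape[0]    # same batch
    assert noisy_latent.shape[2:] == blueprint_latent.shape[2:]  # same H, W
    return np.concatenate([noisy_latent, blueprint_latent], axis=1)

# Illustrative latent shapes (4-channel latents at 64x64, as in SD's VAE space)
batch, c_latent, h, w = 1, 4, 64, 64
noisy = np.random.randn(batch, c_latent, h, w)
blueprint = np.random.randn(batch, c_latent, h, w)

unet_input = concat_blueprint_condition(noisy, blueprint)
print(unet_input.shape)  # (1, 8, 64, 64)
```

The trade-off versus ControlNet: the UNet's input convolution must be widened to accept the extra channels (and retrained or fine-tuned accordingly), but inference then needs no second network.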
What are some alternatives?
deep-learning-v2-pytorch - Projects and exercises for the latest Deep Learning ND program https://www.udacity.com/course/deep-learning-nanodegree--nd101
sliders - Concept Sliders for Precise Control of Diffusion Models
ziplora-pytorch - Implementation of "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs"
RIVAL - [NeurIPS 2023 Spotlight] Real-World Image Variation by Aligning Diffusion Inversion Chain
DCT-Net - Official implementation of "DCT-Net: Domain-Calibrated Translation for Portrait Stylization", SIGGRAPH 2022 (TOG); Multi-style cartoonization
ComfyUI_experiments - Some experimental custom nodes.
animegan2-pytorch - PyTorch implementation of AnimeGANv2
DemoFusion - Let us democratise high-resolution generation! (CVPR 2024)
VToonify - [SIGGRAPH Asia 2022] VToonify: Controllable High-Resolution Portrait Video Style Transfer
cartoonify - Deploy and scale serverless machine learning app - in 4 steps.
promptplusplus