Dreambooth-Regularization
| | Dreambooth-Regularization | SD-Regularization-Images-Style-Dreambooth |
|---|---|---|
| Mentions | 3 | 7 |
| Stars | 2 | 29 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Latest commit | over 1 year ago | over 1 year ago |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dreambooth-Regularization
- Comic Diffusion V2. This is a culmination of everything I've worked towards so far. Trained on 6 styles at the same time; mix and match any number of them to create multiple different unique and consistent styles.
- 2D illustration styles are scarce on Stable Diffusion, so I created a dreambooth model inspired by Hollie Mengert's work.
- Hello, I saw that you can train Dreambooth for a style. I tried training Dreambooth on vast.ai for a children's book illustration style, but I got pretty awful results. Any ideas what went wrong?
SD-Regularization-Images-Style-Dreambooth
- Comic Diffusion V2. This is a culmination of everything I've worked towards so far. Trained on 6 styles at the same time; mix and match any number of them to create multiple different unique and consistent styles.
- Question about training styles: I'm using Joe Penna's repo on RunPod, with only 20 training images and 1,700 regularization images from https://github.com/aitrepreneur/SD-Regularization-Images-Style-Dreambooth, to train styles. I'm getting very good results.
- Classic Disney animation dreambooth model: I'm new to using Dreambooth, but I followed the steps in some of the recent trending examples to make a "classic disney" art style. I pulled/cropped/reframed about 50 reference images, used the style examples [from here](https://github.com/aitrepreneur/SD-Regularization-Images-Style-Dreambooth), and trained for 6,400 steps. Colors are typically oversaturated, and it's really hard to control. I've also found that adding artists helps balance out the composition a lot. Here are some of the sample outputs!
- Fine-tuned the model on Kurzgesagt videos with DreamBooth. Here are some results: I've used this repository for regularization images, and these options for training: `--class_word "style" --token "kurzgesagt"`.
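The `--class_word`/`--token` options quoted above are from Joe Penna's Dreambooth trainer. A rough sketch of assembling such an invocation; the script name, paths, and all flags other than the two quoted in the post are assumptions, so check the repo's README for the exact interface:

```python
import shlex

# Options quoted in the post; everything else here is a placeholder.
token = "kurzgesagt"        # rare token that will come to identify the new style
class_word = "style"        # broad class that the regularization images cover
reg_images = "SD-Regularization-Images-Style-Dreambooth/style"  # hypothetical path
train_images = "train_512"                                      # hypothetical path

cmd = [
    "python", "main.py",              # trainer entry point (assumed)
    "--token", token,
    "--class_word", class_word,
    "--reg_data_root", reg_images,    # flag name assumed
    "--data_root", train_images,      # flag name assumed
]
print(shlex.join(cmd))
```

The idea behind the pairing: the trainer learns that `kurzgesagt style` means your images, while the regularization set anchors what plain `style` means, so the base model's notion of a style isn't overwritten.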
- 2D illustration styles are scarce on Stable Diffusion, so I created a dreambooth model inspired by Hollie Mengert's work.
- Hello, I saw that you can train Dreambooth for a style. I tried training Dreambooth on vast.ai for a children's book illustration style, but I got pretty awful results. Any ideas what went wrong?
- I've further refined my Studio Ghibli model: I used around 20,000 steps (I forgot to note the step count when I stopped training). The regularization images I used can be obtained at https://github.com/aitrepreneur/SD-Regularization-Images-Style-Dreambooth
What are some alternatives?
Stable-Diffusion-Regularization-Images - For use with fine-tuning, especially the current implementation of "Dreambooth".
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion (tweaks focused on training faces)
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
huggingface_hub - The official Python client for the Hugging Face Hub.
Dreambooth-SD-optimized - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
Txt2Vectorgraphics - Custom script for the AUTOMATIC1111 Stable Diffusion web UI.