Stable-Diffusion-Regularization-Images vs Dreambooth-Stable-Diffusion
| | Stable-Diffusion-Regularization-Images | Dreambooth-Stable-Diffusion |
|---|---|---|
| Mentions | 14 | 12 |
| Stars | 99 | 145 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Last commit | over 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | - |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Stable-Diffusion-Regularization-Images
- Clarification regularization for Stable Diffusion
However, when I look at the regularization datasets that people have created, a lot of them are composed of bad-quality AI-generated pictures, such as disfigured humans or images full of artifacts. For instance, this image of a train, or this one of a woman.
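A minimal sketch of how such class images can be generated and then curated by hand, assuming the Hugging Face `diffusers` library and a CUDA GPU; the output folder name and image count are illustrative, not from the post:

```python
# A minimal sketch, assuming `diffusers` is installed and a CUDA GPU is
# available; the output folder and count are assumptions.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

os.makedirs("reg_images", exist_ok=True)
prompt = "photo of a person"  # plain class prompt, no subject token
for i in range(200):  # public collections ship a few hundred per class
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(f"reg_images/person_{i:04d}.png")

# Review the folder afterwards and delete disfigured or artifact-heavy
# outputs -- exactly the quality problem the post above complains about.
```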
- Training Picture Source
- 💡 How to train locally with the 1.5 RunwayML inpainting model?
BTW, you can find the regularization images (ready-to-use class images) here.
- Regularization images
Have you compared your results to using regularization images from an existing repo such as https://github.com/JoePenna/Stable-Diffusion-Regularization-Images?
- Comic Diffusion V2. This is a culmination of everything I've worked towards so far. Trained on 6 styles at the same time; mix and match any number of them to create multiple unique and consistent styles.
For subjects/people, paste this into the GitHub downloader: https://github.com/JoePenna/Stable-Diffusion-Regularization-Images/tree/main/person_ddim
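If you would rather use plain git than a web downloader, a sparse checkout can fetch only the person_ddim folder named in the post; this is a minimal sketch, and the local folder name is an assumption:

```python
# A minimal sketch using git sparse checkout (git >= 2.25) to grab only
# the person_ddim folder; the "reg-images" folder name is an assumption.
import subprocess

repo = "https://github.com/JoePenna/Stable-Diffusion-Regularization-Images"
subprocess.run(
    ["git", "clone", "--depth", "1", "--filter=blob:none", "--sparse",
     repo, "reg-images"],
    check=True,
)
subprocess.run(
    ["git", "-C", "reg-images", "sparse-checkout", "set", "person_ddim"],
    check=True,
)
# reg-images/person_ddim now contains the ready-made class images.
```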
- Good Dreambooth Formula
If you are using person, man, or woman as the class, you don't need to generate the images, as there are some GitHub repos that have a bunch of them already generated for you to use. Nitrosocke also shared some; check my initial post for the link.
- Custom Model Comparison 1.4 vs 1.5 (something broke)
- What should I do when I want better results for a person that was already trained in the SD v1.4 model? Fine-tune the model, use Dreambooth, or use textual inversion embeddings?
I did some experiments with Dreambooth training. Overall, the best results came when I used 1500 "person" class images and about 50 training images. It is vital to have different backgrounds and different clothes, otherwise they will be "baked" into your token (e.g. the same sweater will influence all renderings with its color or pattern, as it becomes "part of your token"). Now I need to test textual inversion and see the difference.
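A minimal sketch of the commenter's recipe (roughly 50 instance photos plus 1500 "person" class images with prior preservation), expressed as a launch of the `diffusers` example script `examples/dreambooth/train_dreambooth.py`; the paths, the "sks" token, and the step count are assumptions, and the flag names follow that script:

```python
# A minimal sketch, assuming the diffusers DreamBooth example script is on
# disk and accelerate is configured; paths and token are assumptions.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "./training_images",   # ~50 varied photos
    "--instance_prompt", "photo of sks person",
    "--with_prior_preservation",                  # mix in class images
    "--class_data_dir", "./reg_images",
    "--class_prompt", "photo of a person",
    "--num_class_images", "1500",                 # the count from the post
    "--prior_loss_weight", "1.0",
    "--resolution", "512",
    "--max_train_steps", "2000",
    "--output_dir", "./dreambooth-person",
], check=True)
```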
- Any advice on how to use the DreamBooth Colab with AUTOMATIC1111?
As for what kind of images to use, I've tried actual photos of people and images generated with Stable Diffusion, and I've had pretty good results with both. I also tried using exclusively pictures of the person I'm training for everything, and even that worked pretty well. All I can really say is that it seems to pay off if you keep an eye on the framing of your images: if the majority of your reference images cut off the upper 10% of the head, for example, then your model will tend to also produce images that cut off the upper 10% of the head. Oh, and I haven't tried it myself, but this GitHub repository apparently has a ton of images specifically for use in DreamBooth.
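One simple way to act on the framing advice above is to normalize every training photo to the same centered square crop before training; this is a minimal sketch using Pillow, and the folder names are assumptions:

```python
# A minimal sketch: center-crop every photo to a consistent 512x512 square
# so no single crop style dominates the dataset. Folder names are assumptions.
from pathlib import Path
from PIL import Image, ImageOps

src, dst = Path("raw_photos"), Path("training_images")
dst.mkdir(exist_ok=True)
for p in sorted(src.glob("*.jpg")):
    img = Image.open(p).convert("RGB")
    # crop the largest centered square, then resize to 512x512
    img = ImageOps.fit(img, (512, 512), Image.LANCZOS)
    img.save(dst / f"{p.stem}.png")
```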
- How are you achieving decent results in DreamBooth? My images look terrible!
I've made sure all my images contain only me and are clean. I have tried using the Unsplash regularization images from https://github.com/JoePenna/Stable-Diffusion-Regularization-Images. I've tried generating my own images from SD itself. I've tried 1k, 2k, 3k, 4k steps. I've tried more images of myself and fewer. I've tried using "man", "person", "face" as the class. All of it results in absolute garbage. I get outputs that consistently look like I'm 80 years old or a different ethnicity. Or just wrong... so wrong.
Dreambooth-Stable-Diffusion
- DreamBooth Tutorial (using filewords)
- Comic Diffusion V2. This is a culmination of everything I've worked towards so far. Trained on 6 styles at the same time; mix and match any number of them to create multiple unique and consistent styles.
Run `!git clone https://github.com/kanewallmann/Dreambooth-Stable-Diffusion.git` in a separate notebook cell to clone kanewallmann's repo.
- How to train a person and (several) styles at the same time?
Unfortunately, I tried kanewallmann's fork ( https://github.com/kanewallmann/Dreambooth-Stable-Diffusion ) without success. I followed the instructions for a person and a style (20+20 images), created folders with the correct naming convention in training_images (/person/name_person xxx, /style/name_style xxx), and downloaded the regularization images for person and for style. I tried 3 times with different step counts (3000, 4000, 5000), and the training stopped at 70-95% with the error "IsADirectoryError: [Errno 21]". Any idea about this? I ran this on RunPod because on…
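An IsADirectoryError is typically raised when something tries to open a directory as a file, so one plausible culprit is a stray subdirectory or system file inside training_images. This is a minimal sketch that checks the multi-concept layout the fork's instructions describe; whether this is the actual cause of the error above is an assumption:

```python
# A minimal sketch: verify the layout training_images/<concept>/<image files>
# and flag anything a naive file loader would choke on. That this is the
# cause of the IsADirectoryError in the post is an assumption.
from pathlib import Path

root = Path("training_images")
for concept in sorted(root.iterdir()):
    if not concept.is_dir():
        print(f"unexpected file at the top level: {concept}")
        continue
    for f in sorted(concept.iterdir()):
        if f.is_dir():
            print(f"stray subdirectory (opening it as a file would raise "
                  f"IsADirectoryError): {f}")
        elif f.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            print(f"non-image file: {f}")
```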
- cyberpunk police in snowy streets using a custom trained model 👮
- ConceptsDreambooth [MEGATHREAD] Live now! Including the new steps formula! Link to the repo and instructions inside.
We sure can! Somebody actually already figured that one out here. I've had some success with that method, but we're trying to implement a way that has less interference between the tokens and sort out the optimal steps for training in multiples with varying amounts of training images.
- Understanding Dreambooth Correctly?
There is a Dreambooth repo fork that helps with training two tokens; maybe give it a try.
- Multiple dreambooth tokens on a single checkpoint file
- Train Model with 2 persons!?
You have to use this: https://github.com/kanewallmann/Dreambooth-Stable-Diffusion
- Struggling with an issue producing two separate prompts in Dreambooth - and was hoping someone might have some insight.
This is the only repo I know of that can train more than one subject for Dreambooth: https://github.com/kanewallmann/Dreambooth-Stable-Diffusion
- Dreambooth repo that may allow training of multiple people
What are some alternatives?
SD-Regularization-Images-Style-Dreambooth
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
Dreambooth-Regularization - All the regs
Dreambooth-SD-optimized - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
ConceptsDreambooth - ConceptsDreambooth
stable-diffusion-webui - Stable Diffusion web UI
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.