Dreambooth-Stable-Diffusion
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles. (by JoePenna)
Dreambooth-SD-optimized
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion (by gammagec)
| | Dreambooth-Stable-Diffusion | Dreambooth-SD-optimized |
|---|---|---|
| Mentions | 100 | 26 |
| Stars | 3,213 | 339 |
| Growth | 0.4% | 0.3% |
| Activity | 6.8 | 1.8 |
| Last commit | over 1 year ago | over 2 years ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dreambooth-Stable-Diffusion
Posts with mentions or reviews of Dreambooth-Stable-Diffusion.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-09.
- Will there be comprehensive tutorials for fine-tuning SD XL when it comes out?
Tons of stuff here, no? https://github.com/JoePenna/Dreambooth-Stable-Diffusion/
- Useful Links
Joe Penna's Dreambooth (Tutorial | 24GB) - the most popular DB repo, with great results.
- Dreambooth / Custom Training / Model - what's the state of the art?
1) The https://github.com/JoePenna/Dreambooth-Stable-Diffusion instructions say to use the 1.5 checkpoints - is that the latest? Can I use the 2+ models?
- My Experience with Training Real-Person Models: A Summary
I quickly turned to the second library, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, because its readme was very encouraging and its results were the best. Unfortunately, to use it on Colab you need to sign up for Colab Pro to get the advanced GPUs (at least 24GB of VRAM), and training a model requires at least 14 compute units. As a poor Chinese person, I could only buy Colab Pro through a proxy. The results from JoePenna/Dreambooth-Stable-Diffusion were fantastic, and the preparation was straightforward, requiring no more than 20 512x512 photos and no captions. I used it to create many beautiful photos.
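For readers following along, the preparation described above amounts to cropping and resizing a handful of photos to 512x512 with no captions. A minimal sketch of that step with Pillow (the folder names here are hypothetical, not part of the repo):

```python
from pathlib import Path
from PIL import Image

SRC = Path("raw_photos")        # hypothetical folder of source photos
DST = Path("training_images")   # hypothetical output folder for the trainer
SIZE = 512                      # SD 1.x Dreambooth repos typically expect 512x512
DST.mkdir(exist_ok=True)

for i, path in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(path).convert("RGB")
    # Center-crop to a square, then resize to 512x512.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((SIZE, SIZE), Image.LANCZOS)
    img.save(DST / f"{i:02d}.png")
```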
- I Used Stable Diffusion and Dreambooth to Create an Art Portrait of My Dog
- training
- Training a model on Iwanaga Kotoko (from In/Spectre) - at which step do you guys think the model is at its best?
I've found EveryDream to be brilliant and have switched from JoePenna's Dreambooth, because I get better results as long as I provide good captions for all the images, even if preparing the dataset takes 3x as long (it took me 2 hours to crop and label the 54 images).
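The captioning workflow mentioned above usually means pairing each training image with a short text description; many trainers read either the image filename or a sidecar .txt file per image (check EveryDream's docs for the exact format it expects). A hypothetical sketch of the sidecar-file layout:

```python
from pathlib import Path

# Hypothetical example captions - one sidecar .txt per image, same basename.
# Some trainers read captions from the image filename instead; check your
# trainer's documentation for its expected format.
captions = {
    "kotoko_001.png": "kotoko iwanaga, white dress, sitting on a bench, anime style",
    "kotoko_002.png": "kotoko iwanaga, close-up portrait, smiling",
}

data_dir = Path("training_images")
for image_name, caption in captions.items():
    (data_dir / image_name).with_suffix(".txt").write_text(caption, encoding="utf-8")
```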
- Dreambooth training results for face, object and style datasets with various prior regularization settings.
From what I know, you can train at whatever size you want, but you need software that supports it. For example, the ShivamShrirao/diffusers repo seems to allow changing the resolution. You also need hardware that can handle the training, because bigger images need more VRAM; for example, the Joe Penna repo uses ~23GB at 512x512px, so it's probably not a valid option. The ShivamShrirao repo, however, has optimizations that allow running it with less VRAM.
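A rough way to see why resolution drives VRAM: Stable Diffusion's VAE downsamples images by a factor of 8 into a 4-channel latent, so the tensors the U-Net processes (and the activations kept for backprop) grow quadratically with image size. A back-of-the-envelope sketch of the latent sizes only; real training memory is dominated by weights, gradients, optimizer state, and attention maps:

```python
# Rough illustration only: latent elements grow quadratically with resolution.
def latent_shape(height, width, channels=4, downsample=8):
    """Shape of the SD latent for a given image size (the VAE downsamples by 8)."""
    return (channels, height // downsample, width // downsample)

for res in (512, 768, 1024):
    c, h, w = latent_shape(res, res)
    print(f"{res}x{res} image -> latent {c}x{h}x{w} = {c * h * w:,} elements")

# 512x512   -> 4x64x64   = 16,384 elements
# 768x768   -> 4x96x96   = 36,864 elements (~2.25x)
# 1024x1024 -> 4x128x128 = 65,536 elements (4x)
```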
- Starting to get quite good results with Dreambooth. What do you think? (Follow @RokStrnisa on Twitter for more.)
This is a good starting place: https://github.com/JoePenna/Dreambooth-Stable-Diffusion
- I'm a N00b with training stuff. Trying to get runpod with Dreambooth training some images (80 total) and I'm getting this error. Help?
Dreambooth-SD-optimized
Posts with mentions or reviews of Dreambooth-SD-optimized.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-03-03.
- RTX 4070 Ti - which Dreambooth could fit?
Hey guys, I'm quite new to Dreambooth. I tried the one in Stable Diffusion but wasn't really satisfied with the output. I'm looking for an external Dreambooth that can be started with Anaconda but doesn't need 24 GB of VRAM - I only have 12 GB. I tried gammagec/Dreambooth-SD-optimized, but it says you need at least 24 GB.
- Best Local SD/DreamBooth Combination For Those With 24GB Cards
- Update 1.7.0 of my Windows SD GUI is out! Supports VAE selection, prompt wildcards, even easier DreamBooth training, and tons of quality-of-life improvements. Details in comments.
Using this GitHub repo https://github.com/gammagec/Dreambooth-SD-optimized with this guide https://pastebin.com/xcFpp9Mr
- Questions about training parameters.
I had pretty good results with 20 images of myself, 200 reg images, and 6,000 steps using https://github.com/gammagec/Dreambooth-SD-optimized.
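For context, the "reg images" above are generic class images (e.g. "photo of a person") used for prior preservation. If you don't reuse a pre-made set, one common approach is to generate them with the Hugging Face diffusers library; a minimal sketch, where the model ID, prompt, and output folder are examples rather than anything the repo prescribes:

```python
import os
import torch
from diffusers import StableDiffusionPipeline

os.makedirs("reg_images", exist_ok=True)

# Example model ID and class prompt - adjust to match the checkpoint and
# class you are training against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for i in range(200):  # the post above used 200 regularization images
    image = pipe("photo of a person", num_inference_steps=30).images[0]
    image.save(f"reg_images/person_{i:03d}.png")
```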
- [Dreambooth] I changed something about the way Dreambooth training works. Tell me what you think, please.
- First full music video with Deforum 0.5 (single render)
I use Automatic1111 for SD and then Dreambooth Optimized (https://github.com/gammagec/Dreambooth-SD-optimized) to train custom models.
- How to increase the value of the num_workers?
Gammagec Dreambooth-SD-optimized - https://github.com/gammagec/Dreambooth-SD-optimized
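For anyone landing on the num_workers question: in PyTorch-based training scripts this is an argument to torch.utils.data.DataLoader that controls how many worker subprocesses load batches in parallel. A generic illustration (the dataset class here is a stand-in, not the repo's own):

```python
from torch.utils.data import DataLoader, Dataset

class DummyDataset(Dataset):
    """Stand-in for the repo's image dataset."""
    def __len__(self):
        return 100

    def __getitem__(self, idx):
        return idx

# Raising num_workers lets more subprocesses prefetch batches in parallel,
# which can help if data loading (not the GPU) is the bottleneck.
loader = DataLoader(DummyDataset(), batch_size=1, num_workers=4, pin_memory=True)
```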
- [Guide] DreamBooth Training with ShivamShrirao's Repo on Windows Locally
- Looking to replicate these kinds of effects in Stable Diffusion. Anyone know what prompts/techniques would be involved? I'd guess they used img2img + ebsynth?
I use this Dreambooth repo to train the SD model: https://github.com/gammagec/Dreambooth-SD-optimized. Here's a video that shows how to install it in very good detail: https://youtu.be/TwhqmkzdH3s. He uses it to train in a face, but you can use it to train in a style as well. I suggest taking about 15 or 20 detailed frames of the video and training them in as a style for the class name. You'll have to experiment with how many training steps to take; I suggest doing 1,000 steps at a time and testing out the model. Also, don't leave the default "sks" - the researchers forgot that it's a common acronym, if you know what I mean. Use something like my_style1 so the model doesn't get confused with something else.
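If you want to try the "15 or 20 frames from the video" suggestion, a quick way to sample evenly spaced frames is OpenCV; a minimal sketch with hypothetical file names:

```python
import os
import cv2

VIDEO = "source_clip.mp4"   # hypothetical input video
N_FRAMES = 20               # roughly what the comment above suggests
os.makedirs("style_frames", exist_ok=True)

cap = cv2.VideoCapture(VIDEO)
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(total // N_FRAMES, 1)

saved = 0
for idx in range(0, total, step):
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)   # jump to an evenly spaced frame
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"style_frames/frame_{saved:02d}.png", frame)
    saved += 1
cap.release()
```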
- Fewer steps produce clearer images
I'm using this guide, https://www.reddit.com/r/StableDiffusion/comments/xpoexy/yet_another_dreambooth_post_how_to_train_an_image/, to train locally with this repo https://github.com/gammagec/Dreambooth-SD-optimized on Windows. Needs a 24 GB card, though.
What are some alternatives?
When comparing Dreambooth-Stable-Diffusion and Dreambooth-SD-optimized you can also consider the following projects:
civitai - A repository of models, textual inversions, and more
stable-diffusion-webui - Stable Diffusion web UI
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
Stable-Diffusion-Regularization-Images - For use with fine-tuning, especially the current implementation of "Dreambooth".