Dreambooth-Stable-Diffusion
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion (by XavierXiao)
Dreambooth-Stable-Diffusion
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles. (by JoePenna)
| | Dreambooth-Stable-Diffusion (XavierXiao) | Dreambooth-Stable-Diffusion (JoePenna) |
|---|---|---|
| Mentions | 47 | 100 |
| Stars | 7,667 | 3,200 |
| Growth (stars, monthly) | 0.6% | - |
| Activity | 0.0 | 6.8 |
| Latest commit | over 2 years ago | about 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dreambooth-Stable-Diffusion (XavierXiao)
Posts with mentions or reviews of Dreambooth-Stable-Diffusion (XavierXiao). We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-21.
- Where can I train my own LoRA?
- I am having an error with ControlNet (RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`)
I did search online for an answer, but I am a PC noob, so I didn't know what to do with the solution I found at this link: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/113
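For readers hitting the same error: this cuBLAS failure is often just the GPU running out of memory (an assumption on my part, not a summary of the linked issue). A minimal PyTorch sketch to check before digging deeper:

```python
# Minimal sketch: check whether the GPU is simply out of memory before
# debugging cuBLAS itself. Uses only standard PyTorch CUDA utilities.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gib = props.total_memory / 1024**3
    reserved_gib = torch.cuda.memory_reserved(0) / 1024**3
    print(f"{props.name}: {total_gib:.1f} GiB total, {reserved_gib:.1f} GiB reserved")
    torch.cuda.empty_cache()  # release cached blocks before retrying the run
```

If memory is the culprit, lowering the batch size or the image resolution is the usual next step.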
- True to life photorealism v2
- How can I create a custom image generation model?
Do you know of any projects or guided tutorials that could help me? How many drawings in the desired style would I need to provide to train the AI model? I found Dreambooth on Stable Diffusion, but it seems to be for a different use case.
- How to Make Your Own Anime (Linux/Mac Tutorial follow along)
This seems to be an issue with the code and/or the environment itself. There is an open bug for this where others have provided some suggestions on how to fix it: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/47
- AI generated portraits of Myself as different classes: Looking for opinion!
Could you provide some more detail on how this works? Did you just use this GitHub repository, or did you put together your own implementation?
- Looking for an AI model to transform a video of me (full body) into an animated avatar. Does something like this exist?
- Ray Liotta as Tommy Vercetti from GTA Vice City
I think the best way to do this would be to train Dreambooth on a number of photos of Ray Liotta first, and use Stable Diffusion instead: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion
- Luddites don't have an issue with AI, just that it "steals" from them (it doesn't). But they also have an issue with using your own child's drawings as a reference.
Dreambooth. There are other ways, but that is the gold standard. It takes even more VRAM than regular Stable Diffusion, so if you don't have a very beefy card (e.g. a 4090 with 24 GB of VRAM), various websites let you do it online for a small fee. You then download a new model that has all the old stuff (e.g. the ~4 GB SD 1.5 file) plus your new images. Like I said, there are other ways that are easier, but when people show great results they are usually talking about Dreambooth.
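To illustrate the "download a new model" step above: recent versions of the diffusers library can load such a single-file checkpoint directly. The file name and the `sks` placeholder token below are assumptions for the example, not something prescribed by either repo.

```python
# Minimal sketch, assuming a Dreambooth-trained .ckpt downloaded after training
# and a recent diffusers release with single-file loading support.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "my_dreambooth_model.ckpt",   # hypothetical name for the downloaded model
    torch_dtype=torch.float16,
).to("cuda")

# 'sks' is just a commonly used placeholder token; use whatever identifier
# the model was actually trained with.
image = pipe("a photo of sks person, studio portrait, 85mm").images[0]
image.save("portrait.png")
```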
- Bunch of misinformation being spread in this thread
THE CODE (unofficial implementation; for the exact wording on how few images you need, read the paper) is designed with extremely little data in mind. I don't know how else to phrase it, dude. Do you think the training is a magic black box that runs on snail neurons? If you train a Dreambooth model, the Jupyter notebook makes calls to Python files; those are the files. That is the code.
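To make the "notebook calls Python files" point concrete, here is a hedged sketch of the kind of `main.py` invocation the XavierXiao repo's notebook wraps. The flag names are written from memory of that repo's README, and all paths and values are placeholders, so verify them against the repo before use.

```python
# Hedged sketch of the training call the notebook ultimately makes.
# Flags should be checked against the XavierXiao/Dreambooth-Stable-Diffusion
# README; paths and the class word are placeholders.
import subprocess

subprocess.run([
    "python", "main.py",
    "--base", "configs/stable-diffusion/v1-finetune_unfrozen.yaml",
    "-t",
    "--actual_resume", "sd-v1-4-full-ema.ckpt",   # base Stable Diffusion checkpoint
    "-n", "my_subject_run",                       # job name
    "--gpus", "0,",
    "--data_root", "training_images",             # the handful of subject photos
    "--reg_data_root", "regularization_images",   # prior-preservation (class) images
    "--class_word", "person",
], check=True)
```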
Dreambooth-Stable-Diffusion (JoePenna)
Posts with mentions or reviews of Dreambooth-Stable-Diffusion (JoePenna). We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-09.
- Will there be comprehensive tutorials for fine-tuning SD XL when it comes out?
Tons of stuff here, no? https://github.com/JoePenna/Dreambooth-Stable-Diffusion/
- Useful Links
Joe Penna's Dreambooth (Tutorial | 24GB): the most popular DB repo, with great results.
- Dreambooth / Custom Training / Model - what's the state of the art?
1) The https://github.com/JoePenna/Dreambooth-Stable-Diffusion instructions say to use the 1.5 checkpoints - is that the latest? Can I use the 2+ models, or not?
- My Experience with Training Real-Person Models: A Summary
I quickly turned to the second library, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, because its readme was very encouraging and its results were the best. Unfortunately, to use it on Colab, you need to sign up for Colab Pro to get advanced GPUs (at least 24 GB of VRAM), and training a model requires at least 14 compute units. As a poor Chinese person, I could only buy Colab Pro through a proxy. The results from JoePenna/Dreambooth-Stable-Diffusion were fantastic, and the preparation was straightforward, requiring only ≤20 512×512 photos without writing captions. I used it to create many beautiful photos.
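For the "≤20 512×512 photos" preparation step, a small helper like the following (all names hypothetical, written with plain Pillow) can center-crop and resize a folder of photos:

```python
# Hypothetical helper: square center-crop and resize raw photos to 512x512,
# matching the dataset format described in the post above.
from pathlib import Path
from PIL import Image

def prepare_photos(src_dir: str, dst_dir: str, size: int = 512) -> None:
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).iterdir()):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        img = Image.open(path).convert("RGB")
        side = min(img.size)                          # largest centered square
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img = img.resize((size, size), Image.LANCZOS)
        img.save(dst / f"{path.stem}.png")

prepare_photos("raw_photos", "training_images")
```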
- I Used Stable Diffusion and Dreambooth to Create an Art Portrait of My Dog
- training
- Training a model on Iwanaga Kotoko (from In/Spectre): at which step do you guys think the model is at its best?
I've found EveryDream to be brilliant and have switched from JoePenna's Dreambooth because I get better results so long as I provide good captions for all the images, even if preparing the dataset takes 3x as long (it took me 2 hours to crop and label the 54 images).
- Dreambooth training results for face, object and style datasets with various prior regularization settings.
From what I know, you can train at whatever size you want, but you need software that supports it. For example, the ShivamShrirao/diffusers repo seems to allow changing the dimensions. You also need hardware that can handle the training, because bigger images need more VRAM; for example, the Joe Penna repo uses ~23 GB at 512×512 px, so it's probably not a valid option, whereas the ShivamShrirao repo has optimizations that allow it to run with less VRAM.
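A rough way to reason about the "bigger images need more VRAM" point: Stable Diffusion's VAE downsamples by a factor of 8, so UNet activation memory grows roughly with the number of latent pixels. The scaling below is a back-of-the-envelope assumption, not a measurement:

```python
# Back-of-the-envelope scaling of UNet activation memory with image size,
# assuming it grows roughly in proportion to latent pixels ((H/8) * (W/8)).
def latent_pixels(height: int, width: int) -> int:
    return (height // 8) * (width // 8)

base = latent_pixels(512, 512)  # 64 * 64 = 4096 latent pixels
for h, w in [(512, 512), (640, 640), (768, 768)]:
    scale = latent_pixels(h, w) / base
    print(f"{h}x{w}: ~{scale:.2f}x the activation memory of 512x512")
```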
- Starting to get quite good results with Dreambooth. What do you think? (Follow @RokStrnisa on Twitter for more.)
This is a good starting place: https://github.com/JoePenna/Dreambooth-Stable-Diffusion
- I'm a N00b with training stuff. Trying to get runpod with Dreambooth training some images (80 total) and I'm getting this error. Help?
What are some alternatives?
When comparing Dreambooth-Stable-Diffusion (XavierXiao) and Dreambooth-Stable-Diffusion (JoePenna), you can also consider the following projects:
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
civitai - A repository of models, textual inversions, and more
SHARK-Studio - SHARK Studio -- Web UI for SHARK+IREE High Performance Machine Learning Distribution
Dreambooth-SD-optimized - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI