| | Dreambooth-Stable-Diffusion | Dreambooth-SD-optimized |
|---|---|---|
| Mentions | 47 | 26 |
| Stars | 7,383 | 341 |
| Growth | - | - |
| Activity | 0.0 | 1.8 |
| Last commit | over 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dreambooth-Stable-Diffusion
- Where can I train my own LoRA?
- I am having an error with ControlNet (RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`)
I did search online for an answer, but I'm a PC noob and didn't know what to do when I found this solution at this link: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/113
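For context, `CUBLAS_STATUS_ALLOC_FAILED` during `cublasCreate` is usually a GPU out-of-memory condition rather than a ControlNet-specific bug. Below is a minimal diagnostic sketch, assuming PyTorch with a CUDA device; the 6 GiB threshold and the suggested fallbacks are illustrative and are not taken from the linked issue.

```python
# Minimal VRAM sanity check before loading a Stable Diffusion / ControlNet model.
# Assumes PyTorch with CUDA; the 6 GiB threshold is an illustrative guess.
import torch

def report_vram(min_free_gib: float = 6.0) -> None:
    if not torch.cuda.is_available():
        print("No CUDA device visible - the error cannot be a VRAM issue here.")
        return
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    free_gib = free_bytes / 2**30
    total_gib = total_bytes / 2**30
    print(f"GPU 0: {free_gib:.1f} GiB free of {total_gib:.1f} GiB")
    if free_gib < min_free_gib:
        print("Likely cause of CUBLAS_STATUS_ALLOC_FAILED: not enough free VRAM.")
        print("Try closing other GPU programs, lowering resolution/batch size,")
        print("or enabling a low-VRAM mode (--medvram/--lowvram in some UIs).")

if __name__ == "__main__":
    report_vram()
```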
- True to life photorealism v2
- How can I create a custom image generation model?
Do you know of any projects or guided tutorials that could help me? How many drawings in the desired style would I need to provide to train the AI model? I found Dreambooth for Stable Diffusion, but it seems to be for another use case.
- How to Make Your Own Anime (Linux/Mac Tutorial follow along)
This seems to be an issue with the code and/or the environment itself. There is an open bug for this where others have provided some suggestions on how to fix it: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/47
- AI generated portraits of Myself as different classes: Looking for opinion!
Could you provide some more detail on how this works? Did you just use this GitHub repository or did you put together your own implementation?
- Looking for an AI model to transform a video of me (full body) into an animated avatar. Does something like this exist?
- Ray Liotta as Tommy Vercetti from GTA Vice City
I think the best way to do this would be to train Dreambooth on a number of photos of Ray Liotta first, and use Stable Diffusion instead. https://github.com/XavierXiao/Dreambooth-Stable-Diffusion
- Luddites don't have an issue with AI, just that it "steals" from them (it doesn't). But they also have an issue with using your own child's drawings as a reference.
Dreambooth. There are other ways, but that is the gold standard. It takes even more VRAM than regular Stable Diffusion, so if you don't have a very beefy card (e.g. a 4090 with 24 GB of VRAM), various websites let you do it online for a small fee. You then download a new model that has all the old stuff (e.g. the 4-gigabyte SD 1.5 file) plus your new images. Like I said, there are other ways that are easier, but when people show great results they are usually talking about Dreambooth.
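To make the "new model that has all the old stuff plus your new images" concrete, here is a hedged sketch of loading such a fine-tuned checkpoint with the `diffusers` library. The file name and the "sks person" identifier are placeholders, and `from_single_file` assumes a reasonably recent `diffusers` release.

```python
# Hedged sketch: load a DreamBooth fine-tuned SD 1.5 checkpoint and generate
# with the learned identifier. File name and prompt token are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "my_dreambooth_model.ckpt",   # the ~4 GB merged checkpoint you download
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "a portrait photo of sks person, studio lighting",  # 'sks person' = trained identifier
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("dreambooth_sample.png")
```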
- Bunch of misinformation being spread in this thread
THE CODE (this is the unofficial implementation; for the exact wording on how few images you need, read the paper) is designed with extremely little data in mind. I don't know how else to phrase it, dude. Do you think the training is a magic black box that runs with snail neurons? If you train a Dreambooth model, the Jupyter IDE makes calls to Python files; those are the files. That is the code.
Dreambooth-SD-optimized
- RTX 4070 Ti: which Dreambooth could fit?
Hey guys, I'm quite new to DreamBooth. I tried the one in Stable Diffusion but wasn't really satisfied with the output. I'm looking for an external DreamBooth that can be started with Anaconda but doesn't need 24 GB of VRAM; I only have 12 GB. I tried gammagec/Dreambooth-SD-optimized, but he says you need at least 24 GB.
- Best Local SD/DreamBooth Combination For Those With 24 GB Cards
- Update 1.7.0 of my Windows SD GUI is out! Supports VAE selection, prompt wildcards, even easier DreamBooth training, and tons of quality-of-life improvements. Details in comments.
Using this GitHub repo https://github.com/gammagec/Dreambooth-SD-optimized with this guide: https://pastebin.com/xcFpp9Mr
- Questions about training parameters.
I had pretty good results with 20 images of myself, 200 reg images, and 6,000 steps using https://github.com/gammagec/Dreambooth-SD-optimized.
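The "reg images" mentioned above are the regularization set used for DreamBooth's prior-preservation objective. The toy sketch below, using dummy tensors rather than the repository's actual training loop, illustrates how the instance loss and the weighted prior loss are combined; the shapes and the weight are illustrative assumptions.

```python
# Toy illustration of DreamBooth-style prior preservation with dummy tensors.
# Not the repository's training loop; shapes and prior_weight are illustrative.
import torch
import torch.nn.functional as F

prior_weight = 1.0  # weight on the regularization ("reg image") term

# Pretend these are the model's noise predictions and the true noise
# for one instance image and one regularization (class) image.
instance_pred, instance_target = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)
class_pred, class_target = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)

instance_loss = F.mse_loss(instance_pred, instance_target)  # learn the new subject
prior_loss = F.mse_loss(class_pred, class_target)           # don't forget the class
loss = instance_loss + prior_weight * prior_loss
print(f"instance={instance_loss.item():.4f} "
      f"prior={prior_loss.item():.4f} total={loss.item():.4f}")
```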
- [Dreambooth] I changed something about the way Dreambooth training works. Tell me what you think, please.
- First full music video with Deforum 0.5 (single render)
I use Automatic1111 for SD and then Dreambooth Optimized https://github.com/gammagec/Dreambooth-SD-optimized to do custom models.
- How to increase the value of the num_workers?
Gammagec Dreambooth-SD-optimized - https://github.com/gammagec/Dreambooth-SD-optimized
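As a generic illustration of what that setting controls: `num_workers` is the standard PyTorch `DataLoader` argument for background data-loading processes. In the DreamBooth repos it is typically set through the data section of the training config rather than hard-coded; the self-contained sketch below uses a dummy dataset purely to show the knob.

```python
# Generic PyTorch illustration of num_workers; in the Dreambooth repos it is
# usually set via the training config's data section, not hard-coded like this.
import torch
from torch.utils.data import DataLoader, TensorDataset

def main() -> None:
    dataset = TensorDataset(torch.randn(256, 3, 64, 64))  # dummy "images"

    # num_workers = number of worker processes that prefetch batches.
    # 0 loads everything in the main process; raising it only helps when data
    # loading (not the GPU) is the bottleneck, and it costs extra RAM.
    loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)

    for (batch,) in loader:
        pass  # a training step would go here
    print("iterated", len(loader), "batches with 2 worker processes")

if __name__ == "__main__":  # required on Windows when num_workers > 0
    main()
```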
- [Guide] DreamBooth Training with ShivamShrirao's Repo on Windows Locally
- Looking to replicate these kinds of effects in Stable Diffusion. Anyone know what prompts/techniques would be involved? I'd guess they used img2img + EbSynth?
I use this Dreambooth repo to train the SD model: https://github.com/gammagec/Dreambooth-SD-optimized Here's a video that shows how to install it in very good detail: https://youtu.be/TwhqmkzdH3s He uses it to train in a face, but you can use it to train in a style as well. I suggest taking about 15 or 20 detailed frames of the video and training them in as a style for the class name. You'll have to experiment with how many training steps to take; I suggest doing 1,000 steps at a time and testing out the model. Also, don't leave the default "sks"; the researchers forgot that it's a common acronym, if you know what I mean. Do something like my_style1 so the model doesn't get confused with something else.
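To gather the "15 or 20 detailed frames" that comment suggests, here is a hedged sketch of pulling evenly spaced frames from a video with OpenCV; the paths, output prefix, and frame count are placeholders rather than anything from the guide.

```python
# Hedged sketch: grab N evenly spaced frames from a video to use as style
# training images. Paths and the frame count are placeholders.
import cv2

def extract_frames(video_path: str, out_prefix: str, n_frames: int = 20) -> None:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        raise RuntimeError(f"Could not read frame count from {video_path}")
    step = max(total // n_frames, 1)
    saved = 0
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if not ok:
            continue
        cv2.imwrite(f"{out_prefix}_{saved:03d}.png", frame)
        saved += 1
        if saved >= n_frames:
            break
    cap.release()
    print(f"saved {saved} frames")

if __name__ == "__main__":
    extract_frames("input_video.mp4", "style_frame", n_frames=20)
```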
- Fewer steps produce more clear images
I'm using this guide: https://www.reddit.com/r/StableDiffusion/comments/xpoexy/yet_another_dreambooth_post_how_to_train_an_image/ to train locally with this repo https://github.com/gammagec/Dreambooth-SD-optimized on Windows. Needs a 24 GB card, though.
What are some alternatives?
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
stable-diffusion-webui - Stable Diffusion web UI
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
SHARK - High Performance Machine Learning Distribution
Stable-Diffusion-Regularization-Images - For use with fine-tuning, especially the current implementation of "Dreambooth".
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
StableTuner - Finetuning SD in style.
kohya_ss
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
fast-stable-diffusion - fast-stable-diffusion + DreamBooth