| | diffusers | Dreambooth-SD-optimized |
|---|---|---|
| Mentions | 105 | 26 |
| Stars | 1,889 | 338 |
| Star growth | 0.1% | - |
| Activity | 7.0 | 1.8 |
| Last commit | almost 2 years ago | over 2 years ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
diffusers
- Useful Links
  ShivamShrirao's Diffusers: pretrained diffusion models across multiple modalities.
- DreamBooth fine-tuning failing to get the style
  Like the title says, I'm trying to fine-tune a model to match the style of a popular manhwa. I'm using the ShivamShrirao Google Colab to accomplish this.
- How to resume Dreambooth training?
  I am running the DreamBooth_Stable_Diffusion.ipynb notebook from ShivamShrirao locally on my machine. Let's say I have trained for 500 iterations and it hasn't converged yet. How do I make it resume training from that iteration so it can do another 500?
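The usual answer is periodic checkpointing: persist the step counter (and, in real scripts, the model and optimizer state) every so often, then reload it before continuing. Recent upstream diffusers training scripts expose this via flags such as `--checkpointing_steps` and `--resume_from_checkpoint`; whether the ShivamShrirao notebook's version supports them is exactly what the question is about. As a minimal sketch of just the resume logic (pure Python, hypothetical checkpoint file, no real model state):

```python
import json
import os

CKPT = "ckpt.json"  # hypothetical checkpoint file; real scripts also save model/optimizer state

def train(total_steps, resume=True):
    """Toy training loop; returns how many steps it actually ran this call."""
    step = 0
    if resume and os.path.exists(CKPT):
        with open(CKPT) as f:
            step = json.load(f)["step"]  # pick up where the last run stopped
    ran = 0
    while step < total_steps:
        step += 1  # stand-in for one optimizer step
        ran += 1
        with open(CKPT, "w") as f:
            json.dump({"step": step}, f)  # persist progress
    return ran
```

Calling `train(500)` and later `train(1000)` runs 500 steps each time: the second call resumes at step 500 instead of starting over, which is the behavior the post is after.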
- Non web-ui colab
  My understanding, based on messages from an (alleged) representative of Colab, is that the webui is the problem, not SD itself. This also seems to be the consensus in the comments sections of other posts. I have not yet seen a link to Colab-based webui alternatives, so here is something I found from a tutorial. I am certain that there are better alternatives. Anyone have a better idea? This will still probably be useful to other people like me who are just messing around.
- [Stablediffusion] Guide to DreamBooth with 8 GB of VRAM on Windows
- Finally got Dreambooth running without errors... but is it even using the model I trained?
  I'm running ShivamShrirao's fork of diffusers; I ran into an fp16 issue and had to patch in a fix from the main branch (#1567).
- Shivam Stable Diffusion: Getting same example models repeatedly (SD + Dreambooth)
  I am running the Shivam Stable Diffusion Jupyter notebook: diffusers/DreamBooth_Stable_Diffusion.ipynb at main · ShivamShrirao/diffusers · GitHub.
- Running Stable Diffusion locally with personalized changes
- Can't create embeddings with Dreambooth ckpt
- Weird issue using Shivam's Diffuser notebook
  Are you using this one? https://github.com/S
Dreambooth-SD-optimized
- RTX 4070 Ti: which DreamBooth could fit?
  Hey guys, I'm quite new to DreamBooth. I tried the one in Stable Diffusion but wasn't really satisfied with the output. I'm looking for an external DreamBooth that can be started with Anaconda but doesn't need 24 GB of VRAM; I only have 12 GB. I tried gammagec/Dreambooth-SD-optimized, but it says you need at least 24 GB.
- Best Local SD/DreamBooth Combination For Those With 24GB Cards
- Update 1.7.0 of my Windows SD GUI is out! Supports VAE selection, prompt wildcards, even easier DreamBooth training, and tons of quality-of-life improvements. Details in comments.
  Using this GitHub repo https://github.com/gammagec/Dreambooth-SD-optimized from this guide https://pastebin.com/xcFpp9Mr
- Questions about training parameters.
  I had pretty good results with 20 images of myself, 200 reg images, and 6000 steps using https://github.com/gammagec/Dreambooth-SD-optimized.
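For context on those numbers: 6,000 steps over 20 instance images works out to 300 optimizer steps per image, and 200 regularization images is a 10:1 reg-to-instance ratio. A throwaway helper (the function name is made up; the arithmetic just restates the post's settings, not an official recommendation):

```python
def dreambooth_ratios(train_steps, instance_images, reg_images):
    """Back-of-the-envelope numbers people tune DreamBooth runs by."""
    return {
        "steps_per_instance_image": train_steps / instance_images,
        "reg_per_instance_image": reg_images / instance_images,
    }

# The post's settings: 6000 steps, 20 photos of the subject, 200 reg images.
print(dreambooth_ratios(6000, 20, 200))
# {'steps_per_instance_image': 300.0, 'reg_per_instance_image': 10.0}
```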
- [Dreambooth] I changed something about the way Dreambooth training works. Tell me what you think, please.
- First full music video with Deforum 0.5 (single render)
  I use Automatic1111 for SD and then Dreambooth Optimized https://github.com/gammagec/Dreambooth-SD-optimized to do custom models.
- How to increase the value of the num_workers?
  gammagec's Dreambooth-SD-optimized: https://github.com/gammagec/Dreambooth-SD-optimized
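In PyTorch, `num_workers` is a parameter of `torch.utils.data.DataLoader` that controls how many subprocesses load and preprocess batches in parallel, so raising it means editing the place where the repo constructs its DataLoader (or the corresponding config entry). A dependency-free sketch of the idea, using threads in place of DataLoader worker processes (names and the squaring "preprocessing" step are illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

def load_batches(dataset, batch_size, num_workers):
    """Toy parallel loader: num_workers workers 'preprocess' (here: square)
    each batch's items, mimicking DataLoader's worker pool."""
    batches = [dataset[i:i + batch_size] for i in range(0, len(dataset), batch_size)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return [list(pool.map(lambda x: x * x, batch)) for batch in batches]

print(load_batches(list(range(6)), batch_size=2, num_workers=3))
# [[0, 1], [4, 9], [16, 25]]
```

More workers only helps while data loading, not the GPU, is the bottleneck, and each extra worker costs RAM, which is why defaults tend to be conservative.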
- [Guide] DreamBooth Training with ShivamShrirao's Repo on Windows Locally
- Looking to replicate these kinds of effects in Stable Diffusion. Anyone know what prompts/techniques would be involved? I'd guess they used img2img + ebsynth?
  I use this DreamBooth repo to train the SD model: https://github.com/gammagec/Dreambooth-SD-optimized. Here's a video that shows how to install it in very good detail: https://youtu.be/TwhqmkzdH3s. He uses it to train in a face, but you can use it to train in a style as well. I suggest taking about 15 or 20 detailed frames of the video and training them in as a style for the class name. You'll have to experiment with how many training steps to take; I suggest doing 1,000 steps at a time and testing out the model. Also, don't leave the default "sks": the researchers forgot that it's a common acronym, if you know what I mean. Use something like my_style1 so the model doesn't get confused with something else.
- Fewer steps produce clearer images
  I'm using this guide: https://www.reddit.com/r/StableDiffusion/comments/xpoexy/yet_another_dreambooth_post_how_to_train_an_image/ to train locally with this repo https://github.com/gammagec/Dreambooth-SD-optimized on Windows. Needs a 24 GB card, though.
What are some alternatives?
- A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
- Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
- xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
- stable-diffusion-webui - Stable Diffusion web UI
- stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]