Dreambooth-SD-optimized vs kohya_ss

| | Dreambooth-SD-optimized | kohya_ss |
|---|---|---|
| Mentions | 26 | 132 |
| Stars | 341 | 8,306 |
| Growth | - | - |
| Activity | 1.8 | 9.9 |
| Last Commit | over 1 year ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dreambooth-SD-optimized
- RTX 4070 Ti: which DreamBooth could fit?
Hey guys, I'm quite new to DreamBooth. I tried the one in Stable Diffusion but wasn't really satisfied with the output. I'm looking for an external DreamBooth that could be started with Anaconda but doesn't need 24 GB of VRAM; I only have 12 GB. I tried gammagec/Dreambooth-SD-optimized, but it says you need at least 24 GB.
- Best Local SD/DreamBooth Combination for Those With 24GB Cards
- Update 1.7.0 of my Windows SD GUI is out! Supports VAE selection, prompt wildcards, even easier DreamBooth training, and tons of quality-of-life improvements. Details in comments.
Using this GitHub repo https://github.com/gammagec/Dreambooth-SD-optimized following this guide https://pastebin.com/xcFpp9Mr
- Questions about training parameters.
I had pretty good results with 20 images of myself, 200 regularization images, and 6,000 steps using https://github.com/gammagec/Dreambooth-SD-optimized.
- [Dreambooth] I changed something about the way Dreambooth training works. Tell me what you think, please.
- First full music video with Deforum 0.5 (single render)
I use Automatic1111 for SD, and then Dreambooth Optimized (https://github.com/gammagec/Dreambooth-SD-optimized) to make custom models.
- How to increase the value of num_workers?
Gammagec Dreambooth-SD-optimized - https://github.com/gammagec/Dreambooth-SD-optimized
- [Guide] DreamBooth Training with ShivamShrirao's Repo on Windows Locally
- Looking to replicate these kinds of effects in Stable Diffusion. Anyone know what prompts/techniques would be involved? I'd guess they used img2img + EbSynth?
I use this DreamBooth repo to train the SD model: https://github.com/gammagec/Dreambooth-SD-optimized. Here's a video that shows how to install it in very good detail: https://youtu.be/TwhqmkzdH3s. He uses it to train in a face, but you can use it to train in a style as well. I suggest taking about 15 or 20 detailed frames of the video and training them in as a style for the class name. You'll have to experiment with how many training steps to take; I suggest doing 1,000 steps at a time and testing out the model. Also, don't leave the default "sks" token; the researchers forgot that it's a common acronym, if you know what I mean. Use something like my_style1 so the model doesn't get confused with something else.
- Fewer steps produce clearer images
I'm using this guide: https://www.reddit.com/r/StableDiffusion/comments/xpoexy/yet_another_dreambooth_post_how_to_train_an_image/ to train locally with this repo https://github.com/gammagec/Dreambooth-SD-optimized on Windows. It needs a 24 GB card, though.
kohya_ss
- Some semi-advanced LoRA & kohya_ss questions
Many of the options are explained here https://github.com/bmaltais/kohya_ss/wiki/LoRA-training-parameters
- LoRA training with Kohya issue
Training in BF16 might solve this issue, from what I saw in this ticket; I know other people ran into the issue too: https://github.com/bmaltais/kohya_ss/issues/1382
- What is the best way to merge multiple LoRAs into one model?
For LyCORIS LoRAs you can use the command-line script from the kohya-ss repo: https://github.com/bmaltais/kohya_ss/blob/master/networks/merge_lora.py. I have an older version checked out from late July; it had a separate merge_lycoris.py for this purpose, but it's probably unified into a single file now.
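Whichever script version you use, merging a LoRA into a base weight boils down to a low-rank update. Here is a minimal NumPy sketch of that arithmetic; the function and variable names are mine, not the script's, and it glosses over the per-module key matching and dtype handling the real tool performs:

```python
import numpy as np

def merge_lora(base_w, down, up, ratio=1.0, alpha=None):
    """Fold a LoRA pair into a base weight: W' = W + ratio * scale * (up @ down).

    down has shape (rank, in_features), up has shape (out_features, rank).
    scale = alpha / rank when an alpha value is stored, else 1.0.
    """
    rank = down.shape[0]
    scale = (alpha / rank) if alpha is not None else 1.0
    return base_w + ratio * scale * (up @ down)

# Toy example: a 4x4 base weight and a rank-2 LoRA
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
down = rng.standard_normal((2, 4))
up = rng.standard_normal((4, 2))
merged = merge_lora(W, down, up, ratio=0.7, alpha=1.0)
assert merged.shape == W.shape
```

The ratio argument plays the same role as the per-model merge weight you pass to the merge script: 1.0 bakes the LoRA in at full strength, smaller values blend it in partially.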
- Evidence that LoRA extraction in Kohya is broken?
- Merging LoRA with Checkpoint Model?
I usually do that with kohya_ss, a tool made for making LoRAs and finetunes. It might be a bit of a pain to set up just to do this one task, but if nobody gives you an easier method, look into it. https://github.com/bmaltais/kohya_ss
- How I got kohya_ss working on Arch Linux, including an up-to-date pip requirements file
After that, make your staging directory, do the git clone https://github.com/bmaltais/kohya_ss.git, and navigate inside it. Now, here's where things can become a pain. I used pyenv to set my system-level Python to 3.10.6 with pyenv global 3.10.6, though you can probably just use "local" and do it for the current shell. You NEED it to be active, however, before you set up your venv. If you run python --version and get 3.10.6, you're ready for the next part.

Make your venv with python -m venv venv. This is the simplest way; it'll create a virtual environment in your current folder named venv. Then run source venv/bin/activate, and use which python to make sure it's using the Python from the venv.

Now for the fun part. The included setup scripts have been flaky for me, so I just went through the requirements and installed everything by hand. I'm going to write this guide for Nvidia, because I just got a 4090 for this stuff. If it ends up working well for others and there's demand, I'll try to reproduce it for AMD (but I'll be honest, I got an Nvidia card because bitsandbytes doesn't have full ROCm support, nor do most libraries, so it's not very reliable). After installing everything and testing that it works at least at a basic level for DreamBooth training, my finished requirements.txt for pip is below:
- The best open source LoRA model training tools
Earlier I created a post where I asked for recommendations for LoRA model training tutorials. The first one I looked at used the kohya_ss GUI. That GitHub repo already has two tutorials, which are quite good, so I ended up using those:
- Script does... nothing
I have tried my best to research this issue and have not come up with much. It is obvious that it's a backend issue, right? The guides that I used: https://github.com/bmaltais/kohya_ss and https://github.com/pyenv-win/pyenv-win/
- Using LoRA on SDXL 1.0 (not using the Kohya GUIs)
- How do I reduce the size of my LoRA models?
I am training on a 12GB 3060 using kohya_ss. Is there a setting or something I'm missing?
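The setting that most directly controls file size is the network rank (network_dim in kohya_ss): each adapted layer stores a rank x in_features down matrix and an out_features x rank up matrix, so the parameter count, and hence the saved file size, scales roughly linearly with the rank. A back-of-the-envelope sketch; the layer shapes below are made up purely for illustration:

```python
def lora_param_count(layer_shapes, rank):
    """Parameters added by a LoRA of the given rank over the listed layers.

    Each (out_features, in_features) layer gains a down matrix of shape
    (rank, in_features) and an up matrix of shape (out_features, rank).
    """
    return sum(rank * (out_f + in_f) for out_f, in_f in layer_shapes)

# Hypothetical layer shapes, just to show the scaling with network_dim
layers = [(320, 320), (640, 640), (1280, 1280)]
for dim in (128, 32, 8):
    params = lora_param_count(layers, dim)
    print(f"network_dim={dim}: {params:,} params (~{params * 2 / 1e6:.1f} MB at fp16)")
```

Halving network_dim roughly halves the file, and many people report good results at ranks well below 128, so that is the first thing to try before touching anything else.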
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
sd_dreambooth_extension
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
EveryDream-trainer - General fine tuning for Stable Diffusion
Stable-Diffusion-Regularization-Images - For use with fine-tuning, especially the current implementation of "Dreambooth".
sd-scripts
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
kohya_ss_colab - a (successful) attempt to port kohya_ss to Colab
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
LoRA_Easy_Training_Scripts - A UI made in Pyside6 to make training LoRA/LoCon and other LoRA type models in sd-scripts easy