dreambooth-training-guide vs StableTuner

| | dreambooth-training-guide | StableTuner |
|---|---|---|
| Mentions | 30 | 22 |
| Stars | 595 | 626 |
| Activity | - | - |
| Popularity | 10.0 | 10.0 |
| Last commit | over 1 year ago | about 1 year ago |
| Language | Python | - |
| License | - | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dreambooth-training-guide
- [Sdforall] The Dreambooth extension for Automatic1111 has been released
- Creating own model like the ones on civitai.com
  I don't have the time right now, but the rule of thumb for me was 80 UNet learning steps per image. At least 40 regularization images. Read more about regularization images here: https://github.com/nitrosocke/dreambooth-training-guide
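The rule of thumb quoted above can be expressed as a quick calculation. This is only a sketch of that commenter's heuristic; the 80-steps-per-image and 40-regularization-image figures are their personal defaults, not fixed requirements of DreamBooth:

```python
def dreambooth_step_estimate(num_instance_images: int,
                             steps_per_image: int = 80,
                             min_reg_images: int = 40) -> dict:
    """Estimate training budget from the rule of thumb above:
    ~80 UNet steps per instance image, and at least 40
    regularization images (more if you have more instance images)."""
    return {
        "total_unet_steps": num_instance_images * steps_per_image,
        "regularization_images": max(min_reg_images, num_instance_images),
    }

# 15 instance images -> 1200 UNet steps, 40 regularization images
print(dreambooth_step_estimate(15))
```

Under this heuristic the step count scales linearly with the dataset, which is why small datasets (5-15 images) finish in very few steps.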
- Image background for LoRA training images
  This tutorial for Dreambooth training has advice about backgrounds which is probably also applicable to LoRA. It recommends including images with solid, non-transparent backgrounds, but not using them exclusively. Images that focus on the torso and face are probably most important, unless your subject has very distinctive legs and feet. Removing other subjects is a must if you're training for a specific subject.
- Non-technical tips for ideal training of Stable Diffusion through Dreambooth?
  I found this; I'm going to go through this guide. Seems like I am using far too many images. https://github.com/nitrosocke/dreambooth-training-guide
- Questions about Regularization Images to be used in Dreambooth
  Nitrosocke's guide already covers how many and what kind of images to use.
- What’s going to be a problem 20 years from now that people are choosing to ignore?
  Dreambooth lets you do it in fewer than 100 images. https://github.com/nitrosocke/dreambooth-training-guide These folks say it's 5-15 to train on a person, but I've not tested it myself. https://www.reddit.com/r/StableDiffusion/comments/10tqy88/were_launching_a_lightningfast_dreambooth_service/
- We’re launching a lightning-fast Dreambooth service: finetune 1’500 steps in 5min!
  See e.g. this tutorial for styles: https://github.com/nitrosocke/dreambooth-training-guide
- Would it be possible to pretrain generation to mimic my art style?
- Dreambooth model training : dataset labelling
- Introducing Macro Diffusion - A model fine-tuned on over 700 macro images (Link in the comments)
  The first time I tried to Dreambooth a style it went poorly. Then I found Nitrosocke's Dreambooth Training Guide and realized my problems were caused by a poorly curated dataset.
StableTuner
- What is the best way to train a Stable Diffusion model on a huge dataset?
- How to fine-tune a Stable Diffusion model with hundreds or thousands of images?
- SD fine-tuning methods compared: a benchmark
- After so many errors with Dreambooth, Everydream2 is the way to go
  Of all the dreamboothing/finetuning implementations I tried, I liked StableTuner the most. Might be worth giving it a shot to compare as well.
- Non-technical tips for ideal training of Stable Diffusion through Dreambooth?
  The largest I've gone is about 100 images for objects or people. I don't think it matters, though; it can be a hassle setting up and resuming the training session each time if you're doing small sessions. StableTuner can simplify all of this by helping you set everything up through its client installed locally. You can then easily do your training locally in short sessions, or have it automatically packed up and exported to Colab or another GPU hosting service, also with the ability to train in short sessions. It's a smart way to manage large training projects like yours. It requires a bit of time to set up, but most folks who have already played around with Dreambooth should be able to navigate their way through easily enough. It has all the other training methods built in too, including proper fine-tuning: https://github.com/devilismyfriend/StableTuner
- Alternative tools to fine tune stable diffusion models?
  Some people also like StableTuner: https://github.com/devilismyfriend/StableTuner
- Question about specific character training
- Finetuning Inpainting model
  StableTuner seems like it's set up to allow training on regular/inpaint/depth models. https://github.com/devilismyfriend/StableTuner
- The next best alternative to Auto1111??
  StableTuner is an alternative to the sd_dreambooth plugin. It can do Dreambooth and fine-tuning (I haven't tried the latter, but I think it covers embeddings too). It uses diffusers but will convert between that format and ckpt files, is for Windows/Nvidia, and uses a local app instead of a web app. This is the only successful local Dreambooth I've done. You'll need to go to their Discord for help, but it's not hard to use.
- Auto1111 Fork with pix2pix
  Dreambooth has better results in older commits. StableTuner is better for training: https://github.com/devilismyfriend/StableTuner
What are some alternatives?
sd_dreambooth_extension
EveryDream2trainer
stable-diffusion-webui - Stable Diffusion web UI
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
dreambooth-gui
LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
EveryDream-trainer - General fine tuning for Stable Diffusion
DiffusionToolkit - Metadata-indexer and Viewer for AI-generated images