kohya_ss
By bmaltais
EveryDream-trainer
General fine tuning for Stable Diffusion (by victorchall)
| | kohya_ss | EveryDream-trainer |
| --- | --- | --- |
| Mentions | 132 | 32 |
| Stars | 8,414 | 501 |
| Growth | - | - |
| Activity | 9.9 | 2.4 |
| Latest commit | 4 days ago | about 1 year ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kohya_ss
Posts with mentions or reviews of kohya_ss. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-06.
- Some semi-advanced LoRA & kohya_ss questions
Many of the options are explained here https://github.com/bmaltais/kohya_ss/wiki/LoRA-training-parameters
- LoRA training with Kohya issue
Training in BF16 might solve this issue, from what I saw in this ticket. I know other people ran into the issue too: https://github.com/bmaltais/kohya_ss/issues/1382
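In sd-scripts (the backend that kohya_ss wraps), BF16 is selected via the mixed-precision switch on the training command. A hedged sketch, with most arguments elided; flag names are per sd-scripts, so verify against your installed version:

```shell
# Train a LoRA with bfloat16 mixed precision instead of fp16.
# Paths are placeholders; --mixed_precision is the relevant switch here.
accelerate launch train_network.py \
  --pretrained_model_name_or_path ./base_model.safetensors \
  --output_dir ./output \
  --mixed_precision bf16
```

In the kohya_ss GUI this corresponds to choosing bf16 in the mixed-precision setting rather than editing the command line directly.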
- What is the best way to merge multiple LoRAs into one model?
For LyCORIS LoRAs you can use the command-line script from the kohya-ss repo: https://github.com/bmaltais/kohya_ss/blob/master/networks/merge_lora.py. I have an older version checked out from late July that had a separate merge_lycoris.py for this purpose; it's probably unified into a single file now.
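A merge with that script might look like the following. This is a sketch, not a verified recipe: the file names are placeholders, and the argument names are taken from sd-scripts' networks/merge_lora.py, so check them against the version you have checked out:

```shell
# Merge two LoRAs into a base checkpoint at chosen strengths.
# --models takes one or more LoRA files; --ratios gives a weight per LoRA.
python networks/merge_lora.py \
  --sd_model base_model.safetensors \
  --save_to merged_model.safetensors \
  --models lora_a.safetensors lora_b.safetensors \
  --ratios 0.8 0.5
```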
- Evidence that LoRA extraction in Kohya is broken?
- Merging LoRA with Checkpoint Model?
I usually do that with kohya_ss, a tool made for making LoRAs and finetunes. It might be a bit of a pain to set up just to do this one task, but if nobody gives you an easier method, look into it. https://github.com/bmaltais/kohya_ss
- How I got Kohya_SS working on Arch Linux, including an up-to-date pip requirements file
After that, make your staging directory, do the git clone of https://github.com/bmaltais/kohya_ss.git, and navigate inside it. Now, here's where things can become a pain.

I used pyenv to set my system-level Python to 3.10.6 with pyenv global 3.10.6, though you can probably just use "local" and do it for the current shell. It NEEDS to be active, however, before you set up your venv. If you run python --version and get 3.10.6, you're ready for the next part.

Make your venv with python -m venv venv. This is the simplest way; it creates a virtual environment named venv in your current folder. Run source venv/bin/activate, then which python, to make sure it's using the Python from the venv.

Now for the fun part. The included setup scripts have been flaky for me, so I just went through the requirements and installed everything by hand. I'm writing this guide for Nvidia, because I just got a 4090 for this stuff. If this ends up working well for others and there's demand, I'll try to reproduce it for AMD (but I'll be honest: I got an Nvidia card because bitsandbytes doesn't have full ROCm support, nor do most libraries, so it's not very reliable). After installing everything and testing that it works at least at a basic level for Dreambooth training, my finished requirements.txt for pip is as below:
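The setup steps above can be sketched as a short shell session. This is an illustrative outline, not the author's exact commands; the Python patch version and paths are examples, and the final pip step assumes you install from a requirements file rather than the repo's setup scripts:

```shell
# Install and activate Python 3.10.6 via pyenv
# (it must be active BEFORE the venv is created)
pyenv install 3.10.6
pyenv global 3.10.6
python --version          # should report Python 3.10.6

# Clone the repo and create an isolated environment inside it
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
python -m venv venv
source venv/bin/activate
which python              # should point at ./venv/bin/python

# Install the dependencies by hand instead of using the setup scripts
pip install -r requirements.txt
```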
- The best open source LoRA model training tools
Earlier I created a post where I asked for recommendations for LoRA model training tutorials. The first one I looked at used the kohya_ss GUI. That GitHub repo already has two tutorials, which are quite good, so I ended up using those:
- Script does...nothing
I have tried my best to research this issue and have not come up with much. It is obvious that it's a backend issue, right? The guides that I used: https://github.com/bmaltais/kohya_ss and https://github.com/pyenv-win/pyenv-win/
- Using LoRA on SDXL 1.0 (not using the Kohya GUIs)
- How do I reduce the size of my LoRA models?
I am training on a 12GB 3060 using kohya_ss. Is there a setting or something I'm missing?
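One knob that directly controls LoRA file size is the network rank: a smaller --network_dim at training time yields a smaller file, and sd-scripts also ships a resize script for shrinking an existing LoRA. A hedged sketch; the script and flag names are per sd-scripts, and the rank of 8 is an illustrative choice, not a recommendation, so verify both against your installed version:

```shell
# Shrink an existing LoRA by reducing its rank; lower rank -> smaller file
# (quality may drop as rank decreases; file names are placeholders)
python networks/resize_lora.py \
  --model my_lora.safetensors \
  --save_to my_lora_rank8.safetensors \
  --new_rank 8
```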
EveryDream-trainer
Posts with mentions or reviews of EveryDream-trainer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-10.
- How should I train Dreambooth to understand a new class?
- SDTools v1.5
- Guide on finetuning a model with a mid-sized dataset of family pictures
https://github.com/victorchall/EveryDream-trainer Haven't tried it myself.
- I've been collecting millions of images of only public domain /cc0 licensing. I'd like to train a stable diffusion model on the collection. Could some one share their knowledge of what this would take? Otherwise, simply enjoy my library.
In terms of training, you've got some really good links and comments to youtube tutorials, but if you're interested in more information about finetuning a model (as opposed to training from scratch), this is a good repo that has a lot of tools for finetuning, including an auto-captioner using BLIP and automatic file renaming. This is the actual finetuning repo.
- Alternative tools to fine-tune Stable Diffusion models?
EveryDream Trainer is basically Dreambooth combined with fine-tuning, so you can train multiple things with a lot of images: https://github.com/victorchall/EveryDream-trainer
- Training with Dreambooth Models and/or Training with Automatic 1111 Textual Inversion
If you have the GPU for it, I'd recommend training all three things at once with (for example) https://github.com/victorchall/EveryDream-trainer. It recommends using "ground truth" training images - i.e. images from LAION-5B, which Stable Diffusion was originally trained on - to get better prior preservation (retaining the flexibility of the original model) while incorporating new concepts, potentially even several different concepts in a single training run.
- Flexible-Diffusion. My first experiment with finetuning. A broad model with better general aesthetics and coherence for different styles! Scroll for 1.5 vs FlexibleDiffusion grids. (BTW, PublicPrompts.art is back!!!)
I used about 300 captioned images (mainly beautiful MJ stuff) and used https://github.com/victorchall/EveryDream-trainer on RunPod for finetuning
- What do you think is the right dataset size to train/refine on dreambooth?
- Practice your Christmas cookies before you bake with this SD 1.5 model
SD 1.5 512x512 model for making Christmas-style cookies of whatever you'd like. Trained on 30 512x512 images with manual captions in EveryDream.
- Guide for train/finetune with different image sizes, not dreambooth
This is the good one: https://github.com/victorchall/EveryDream-trainer
What are some alternatives?
When comparing kohya_ss and EveryDream-trainer you can also consider the following projects:
sd_dreambooth_extension
StableTuner - Finetuning SD in style.
sd-scripts
EveryDream - Advanced fine tuning tools for vision models
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
kohya_ss_colab - a (successful) attempt to port kohya_ss to Colab
EveryDream2trainer
LoRA_Easy_Training_Scripts - A UI made in Pyside6 to make training LoRA/LoCon and other LoRA type models in sd-scripts easy
stable-diffusion-webui - Stable Diffusion web UI
sd-webui-additional-networks
DreamArtist-stable-diffusion - stable diffusion webui with contrastive prompt tuning