| | LyCORIS | StableTuner |
|---|---|---|
| Mentions | 13 | 22 |
| Stars | 1,983 | 626 |
| Growth | - | - |
| Activity | 9.6 | 10.0 |
| Last commit | 6 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LyCORIS
-
LoRA (LyCORIS) iA3 is amazing (info in 1st comment)
LyCORIS is another implementation of LoRA, done by KohakuBlueleaf: https://github.com/KohakuBlueleaf/LyCORIS
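For readers wondering what these implementations actually implement: LoRA adds a trainable low-rank update on top of a frozen weight matrix. A minimal NumPy sketch of the idea (illustrative only, not LyCORIS's actual code; all names and sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 8, 8, 4, 4.0

W = rng.normal(size=(d_out, d_in))         # frozen base weight
A = rng.normal(size=(rank, d_in)) * 0.01   # trainable "down" projection
B = np.zeros((d_out, rank))                # trainable "up" projection, zero-init

# LoRA's forward pass effectively uses W + (alpha / rank) * B @ A instead of W
W_adapted = W + (alpha / rank) * B @ A

# with B zero-initialized, the adapter starts out as an exact no-op
assert np.allclose(W_adapted, W)
```

Only `A` and `B` are trained, which is why the resulting files are so much smaller than a full checkpoint.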
-
Training LORAs locally guide in text form?
Most guides focus on LoRA training, as it has been around longer, but I think LoHa can give better results. However, training runs at about half the speed (it/s), and it requires different training settings.
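Some context on why LoHa can give better results at a similar parameter budget: instead of a single low-rank product, LoHa uses a Hadamard (element-wise) product of two low-rank products, which can reach a higher effective rank. A rough NumPy sketch of the two parameterizations (illustrative, not the actual LyCORIS code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 16, 16, 2

# two independent low-rank factor pairs with identical shapes
B1, A1 = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))
B2, A2 = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))

delta_lora = B1 @ A1                 # plain LoRA update: rank <= 2
delta_loha = (B1 @ A1) * (B2 @ A2)   # LoHa update: element-wise product

# the Hadamard product can reach rank up to rank**2, so the same
# number of trainable parameters buys a more expressive update
print(np.linalg.matrix_rank(delta_lora))   # 2
print(np.linalg.matrix_rank(delta_loha))   # typically rank**2 = 4 here
```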
-
Guide to DreamBooth / LORA / LyCORIS
I've read in some tutorials that it's best to keep the value at 64 or below, and they also suggest not going over 64 here: https://github.com/KohakuBlueleaf/LyCORIS
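Part of the reason behind the "64 or below" advice is file size and diminishing returns: a LoRA adapter's trainable parameter count grows linearly with the rank/dim value. A quick back-of-the-envelope calculation (the 768×768 layer size is an illustrative example, typical of SD 1.x attention projections):

```python
# trainable parameters of one LoRA adapter on a d_out x d_in layer:
# an (rank x d_in) "down" matrix plus a (d_out x rank) "up" matrix
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

# parameters grow linearly with rank
for rank in (8, 32, 64, 128):
    print(rank, lora_params(768, 768, rank))
```

Doubling the rank doubles the adapter size for every adapted layer, while the quality gains usually flatten out well before that.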
-
LyCORIS doesn't work with inpainting models
Does anyone know how to make LyCORIS models (https://github.com/KohakuBlueleaf/LyCORIS) work with inpainting models?
- wtf is a lycoris?
- I wonder what to do with this?
-
I'm the creator of LoRA. How can I make it better?
I think it was linked already, but this is also relevant for LoRA: https://github.com/KohakuBlueleaf/LyCORIS Nice work!
-
LoRA: Low-Rank Adaptation of Large Language Models
There are some WIP evolutions of SD LoRA in the works, like LoCon and LyCORIS.
https://github.com/KohakuBlueleaf/LyCORIS
- What the hell is a Locon/Loha model?
-
SD fine-tuning methods compared: a benchmark
You might want to expand LoRA to include LoCon and LoHa, and also add a column for VRAM requirements. (Think of LoCon as a more complete LoRA that works on the kernels of the convolutional units rather than just the weights of the feed-forward network.) Support is still quite limited, but it's starting to pick up steam: https://github.com/KohakuBlueleaf/LyCORIS
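To illustrate what "works for the kernels of the convolutional units" means: LoCon applies the same low-rank idea to conv kernels, e.g. pairing a rank-limited k×k "down" convolution with a 1×1 "up" convolution. A NumPy sketch of the shapes involved (this mirrors the general LoCon idea, not the repo's exact code; all names and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
out_ch, in_ch, k, rank = 32, 16, 3, 4

# hypothetical LoCon-style conv adapter factors
down = rng.normal(size=(rank, in_ch, k, k)) * 0.01  # k x k conv, rank out channels
up = np.zeros((out_ch, rank))                       # 1x1 conv, zero-initialized

# composing them yields a full-shape kernel update (zero at init, like LoRA)
delta = np.einsum('or,rikl->oikl', up, down)
assert delta.shape == (out_ch, in_ch, k, k)

# parameter comparison vs. training the full kernel directly
full = out_ch * in_ch * k * k                    # 4608
low_rank = rank * in_ch * k * k + out_ch * rank  # 704
print(full, low_rank)
```

The same factorization trick that shrinks linear layers also shrinks conv layers, which is why LoCon covers more of the U-Net than plain LoRA does.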
StableTuner
- What is the best way to train a Stable Diffusion model on a huge dataset?
- How to fine-tune a Stable Diffusion model with hundreds or thousands of images?
- SD fine-tuning methods compared: a benchmark
-
After so many errors with Dreambooth, Everydream2 is the way to go
Of all the dreamboothing/fine-tuning implementations I tried, I liked StableTuner the most. Might be worth giving it a shot to compare as well.
-
Non-technical tips for ideal training of Stable Diffusion through Dreambooth?
The largest I've gone is about 100 images for objects or people. I don't think it matters, though; it can be a hassle setting up and resuming the training session each time if you're doing small sessions. Stable Tuner can simplify all of this by helping you set everything up through their client, installed locally. You can then easily do your training locally in short sessions, or have it automatically packed up and exported to Colab or another GPU hosting service, also with the ability to train in short sessions. It's a smart way to manage large training projects like yours. It requires a bit of time to set up, but most folks who have already played around with Dreambooth should be able to navigate their way through easily enough. It has all the other training methods built into it too, including proper fine-tuning: https://github.com/devilismyfriend/StableTuner
-
Alternative tools to fine tune stable diffusion models?
Some people also like StableTuner: https://github.com/devilismyfriend/StableTuner
- Question about specific character training
-
Finetuning Inpainting model
Stable Tuner seems like it's setup to allow training on regular/inpaint/depth models. https://github.com/devilismyfriend/StableTuner
-
The next best alternative to Auto1111??
StableTuner is an alternative to the sd_dreambooth plugin. It can do Dreambooth and fine-tuning (I haven't tried the latter, but I think it's embeddings). It uses diffusers but will convert between that format and ckpt files, is for Windows/Nvidia, and uses a local app instead of a web app. This is the only local Dreambooth I've done successfully. You'll need to go to their Discord for help, but it's not hard to use.
-
Auto1111 Fork with pix2pix
Dreambooth has better results in older commits. StableTuner is better for training: https://github.com/devilismyfriend/StableTuner
What are some alternatives?
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
EveryDream2trainer
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
sd-webui-additional-networks
EveryDream-trainer - General fine tuning for Stable Diffusion
kohya_ss
dreambooth-training-guide
LoRA_Easy_Training_Scripts - A UI made in Pyside6 to make training LoRA/LoCon and other LoRA type models in sd-scripts easy
stable-diffusion-webui - Stable Diffusion web UI