EveryDream-trainer
General fine tuning for Stable Diffusion (by victorchall)
Dreambooth-Stable-Diffusion
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion (by XavierXiao)
| | EveryDream-trainer | Dreambooth-Stable-Diffusion |
|---|---|---|
| Mentions | 32 | 47 |
| Stars | 501 | 7,383 |
| Growth | - | - |
| Activity | 2.4 | 0.0 |
| Last commit | about 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
EveryDream-trainer
Posts with mentions or reviews of EveryDream-trainer.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-03-10.
- How should I train Dreambooth to understand a new class?
- SDTools v1.5
- Guide on finetuning a model with a mid-sized dataset of family pictures
  https://github.com/victorchall/EveryDream-trainer (haven't tried it myself).
- I've been collecting millions of images with only public domain/CC0 licensing. I'd like to train a Stable Diffusion model on the collection. Could someone share their knowledge of what this would take? Otherwise, simply enjoy my library.
  In terms of training, you've got some really good links and comments pointing to YouTube tutorials, but if you're interested in more information about finetuning a model (as opposed to training from scratch), this is a good repo that has a lot of tools for finetuning, including an auto-captioner using BLIP and automatic file renaming. This is the actual finetuning repo.
- Alternative tools to fine-tune Stable Diffusion models?
  EveryDream Trainer is basically Dreambooth combined with fine-tuning, so you can train multiple things and a lot of images: https://github.com/victorchall/EveryDream-trainer
- Training with Dreambooth Models and/or Training with Automatic1111 Textual Inversion
  If you have the GPU for it, I'd recommend training all three things at once with (for example) https://github.com/victorchall/EveryDream-trainer. It recommends using "ground truth" training images, i.e. images from LAION-5B, which Stable Diffusion was originally trained with, to get better prior preservation (retaining the flexibility of the original model) while incorporating new concepts, potentially even several different concepts in a single training run.
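A minimal sketch of the prior-preservation idea described above: mix the new-concept images with "ground truth" regularization images so the model keeps its general knowledge while learning the new concept. The directory names and ratio are hypothetical placeholders, not EveryDream's actual data-loading code.

```python
import random

# Hypothetical file lists: a small new-concept set plus a larger pool of
# "ground truth" regularization images (e.g. pulled from LAION).
instance_images = [f"my_concept/{i:03d}.jpg" for i in range(30)]
regularization_images = [f"laion_ground_truth/{i:04d}.jpg" for i in range(200)]

def build_epoch(instance, regularization, reg_ratio=1.0, seed=0):
    """Mix one regularization image per instance image (ratio 1:1 by default),
    then shuffle, so each batch sees both the new concept and the priors."""
    rng = random.Random(seed)
    reg_sample = rng.sample(regularization, k=int(len(instance) * reg_ratio))
    epoch = instance + reg_sample
    rng.shuffle(epoch)
    return epoch

epoch = build_epoch(instance_images, regularization_images)
print(len(epoch))  # 60 = 30 instance + 30 regularization images
```

Raising `reg_ratio` weights the epoch further toward the original distribution, trading slower concept learning for less drift.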
- Flexible-Diffusion. My first experiment with finetuning. A broad model with better general aesthetics and coherence for different styles! Scroll for 1.5 vs FlexibleDiffusion grids. (BTW, PublicPrompts.art is back!!!)
  I used about 300 captioned images (mainly beautiful MJ stuff), and used https://github.com/victorchall/EveryDream-trainer on RunPod for finetuning.
- What do you think is the right dataset size to train/refine on dreambooth?
- Practice your Christmas cookies before you bake with this SD 1.5 model
  An SD 1.5 512x512 model for making Christmas-style cookies of whatever you'd like. Trained on 30 512x512 images with manual captions in EveryDream.
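The post above pairs each training image with a manual caption. One common convention among SD fine-tuning tools (assumed here, not confirmed for this specific EveryDream version) is a `.txt` sidecar file next to each image holding its caption; a minimal sketch with hypothetical filenames:

```python
from pathlib import Path

# Hypothetical captions for a small cookie dataset; in practice each of the
# ~30 training images would get a hand-written caption like these.
captions = {
    "cookie_01.jpg": "a christmas cookie shaped like a star, white icing",
    "cookie_02.jpg": "a christmas cookie shaped like a snowman, blue icing",
}

def write_sidecar_captions(image_dir: Path, captions: dict) -> list:
    """Write one UTF-8 .txt caption file per image, named after the image."""
    image_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for image_name, caption in captions.items():
        path = image_dir / (Path(image_name).stem + ".txt")
        path.write_text(caption, encoding="utf-8")
        written.append(path)
    return written

files = write_sidecar_captions(Path("training_data"), captions)
```

Check your tool's documentation for its actual caption convention; some older trainers read the caption from the image filename itself instead.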
- Guide for train/finetune with different image sizes, not dreambooth
  This is the good one: https://github.com/victorchall/EveryDream-trainer
Dreambooth-Stable-Diffusion
Posts with mentions or reviews of Dreambooth-Stable-Diffusion.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-21.
- Where can I train my own LoRA?
- I am having an error with ControlNet (RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`)
  I did search online for an answer, but I'm a PC noob; I didn't know what to do when I found this solution at this link: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/113
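For context on the error above: `CUBLAS_STATUS_ALLOC_FAILED` during `cublasCreate` typically means cuBLAS could not allocate GPU memory, most often because VRAM is already exhausted. A hypothetical triage helper (the error strings are real; the mapping reflects common community findings, not an official NVIDIA table):

```python
# Hypothetical lookup of likely causes for CUDA errors commonly hit while
# running Stable Diffusion / ControlNet.
LIKELY_CAUSES = {
    "CUBLAS_STATUS_ALLOC_FAILED": (
        "cuBLAS could not allocate GPU memory, usually because VRAM is "
        "already exhausted: lower the resolution/batch size or close other "
        "GPU-hungry programs."
    ),
    "CUDA out of memory": "Classic VRAM exhaustion; reduce memory use.",
    "CUBLAS_STATUS_NOT_INITIALIZED": "Driver/toolkit mismatch or an earlier OOM.",
}

def triage(error_message: str) -> str:
    """Return a likely cause for a known CUDA error string, else a fallback."""
    for fragment, cause in LIKELY_CAUSES.items():
        if fragment in error_message:
            return cause
    return "Unknown error: search the exact message in the project's issues."

msg = ("RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED "
       "when calling `cublasCreate(handle)`")
print(triage(msg))
```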
- True to life photorealism v2
- How can I create a custom image generation model?
  Do you know some projects or guided tutorials that could help me? How many drawings in the desired style would I then have to provide to train the AI model? I found Dreambooth on Stable Diffusion, but it seems to be for another use case.
- How to Make Your Own Anime (Linux/Mac Tutorial follow along)
  This seems to be an issue with the code and/or the environment itself. There is an open bug for this where some suggestions are provided by others on how to fix it: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/47
- AI generated portraits of Myself as different classes: Looking for opinion!
  Could you provide some more detail on how this works? Did you just use this GitHub repository, or did you put together your own implementation?
- Looking for an AI model to transform a video of me (full body) into an animated avatar. Does something like this exist?
- Ray Liotta as Tommy Vercetti from GTA Vice City
  I think the best way to do this would be to train Dreambooth on a number of photos of Ray Liotta first, and use Stable Diffusion instead: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion
- Luddites don't have an issue with AI, just that it "steals" from them (it doesn't). But they also have an issue with using your own child's drawings as a reference.
  Dreambooth. There are other ways, but that is the gold standard. It takes even more VRAM than regular Stable Diffusion, so if you don't have a very beefy card (e.g. a 4090 with 24 GB VRAM), various websites let you do it online for a small fee. You then download a new model that has all the old stuff (e.g. the 4 GB SD 1.5 file) plus your new images. Like I said, there are other ways that are easier, but when people show great results they are usually talking about Dreambooth.
- Bunch of misinformation being spread in this thread
  THE CODE (an unofficial implementation; for the exact wording on how few images you need, read the paper) is designed with extremely little data in mind. I don't know how else to phrase it, dude. Do you think the training is a magic black box that runs on snail neurons? If you train a Dreambooth model, the Jupyter IDE makes calls to Python files; those are the files. That is the code.
What are some alternatives?
When comparing EveryDream-trainer and Dreambooth-Stable-Diffusion you can also consider the following projects:
kohya_ss
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
StableTuner - Finetuning SD in style.
stable-diffusion-webui - Stable Diffusion web UI
EveryDream - Advanced fine tuning tools for vision models
SHARK - SHARK - High Performance Machine Learning Distribution
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
EveryDream2trainer
Dreambooth-SD-optimized - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion