stable-diffusion-webui vs dreambooth-training-guide

| | stable-diffusion-webui | dreambooth-training-guide |
|---|---|---|
| Mentions | 8 | 30 |
| Stars | 45 | 595 |
| Growth | - | - |
| Activity | 0.0 | 10.0 |
| Latest commit | 5 months ago | over 1 year ago |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui
- Can we start a list of Stable Diffusion 2.0 compatible UIs?
- Trying to squeeze my favorite Wes Anderson-esque portrait style out of 2.0
  I'm using a fork of automatic1111 that works with the new ckpt files (still takes some effort to install): https://github.com/MrCheeze/stable-diffusion-webui/tree/sd-2.0
- Started recreating my prompts in SD 2.0!
- New Release: SD 2.0 Dreambooth model - Future-Diffusion
  I used this repo to test out v2.0: https://github.com/MrCheeze/stable-diffusion-webui/tree/sd-2.0 I guess it's like the automatic repo but with some additional scripts and files (that I do not understand) that let you drop the v2.0 model in and try it. For the time being I've managed to run the base 768x768 model (the 512x512 base model doesn't work for me).
- Is there any way to test SD 2.0 on Colab Free or Automatic1111?
- Looks like Stable Diffusion 2.0 was released, with some anticipated features
  Somebody put up a fork already.
- Stable Diffusion 2.0 Announcement
dreambooth-training-guide
- [Sdforall] The Dreambooth extension for Automatic111 is out
- Creating your own model like the ones on civitai.com
  I don't have the time right now, but the rule of thumb for me was 80 UNet training steps per image, and at least 40 regularization images. Read more about regularization images here: https://github.com/nitrosocke/dreambooth-training-guide
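The rule of thumb above boils down to quick arithmetic; a minimal sketch of the budgeting (the 80-steps-per-image and 40-regularization-image figures come from the comment, while the function and variable names are illustrative, not from any Dreambooth tool):

```python
# Rule-of-thumb Dreambooth budget from the comment above:
# ~80 UNet training steps per instance image, plus at least
# 40 regularization images regardless of dataset size.
STEPS_PER_IMAGE = 80
MIN_REG_IMAGES = 40

def dreambooth_step_budget(num_instance_images: int) -> int:
    """Return the suggested total number of UNet training steps."""
    return num_instance_images * STEPS_PER_IMAGE

# A 20-image dataset -> 1600 steps, with at least 40 regularization images.
print(dreambooth_step_budget(20))  # 1600
```

So a larger dataset mostly costs you training steps, while the regularization-image floor stays fixed.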
- Image background for LORA training images
  This tutorial for dreambooth training has advice with regard to backgrounds which is probably also applicable to LORA. It recommends including images with solid, non-transparent backgrounds but not using them exclusively. Images that focus on the torso and face are probably most important unless your subject has very distinctive legs and feet. Removing other subjects is a must if you're training for a specific subject.
- Non-technical tips for ideal training of Stable Diffusion through Dreambooth?
  I found this, and I'm going to go through this guide. Seems like I am using far too many images: https://github.com/nitrosocke/dreambooth-training-guide
- Questions about Regularization Images to be used in Dreambooth
  Nitrosocke's guide already tells you how many and what kind of images to use.
- What’s going to be a problem 20 years from now that people are choosing to ignore?
  Dreambooth lets you do it in fewer than 100 images: https://github.com/nitrosocke/dreambooth-training-guide These folks say it takes 5-15 images to train on a person, but I haven't tested it myself: https://www.reddit.com/r/StableDiffusion/comments/10tqy88/were_launching_a_lightningfast_dreambooth_service/
- We’re launching a lightning-fast Dreambooth service: finetune 1’500 steps in 5min!
  See e.g. this tutorial for styles: https://github.com/nitrosocke/dreambooth-training-guide
- Would it be possible to pretrain generation to mimic my art style?
- Dreambooth model training: dataset labelling
- Introducing Macro Diffusion - A model fine-tuned on over 700 macro images (Link in the comments)
  The first time I tried to Dreambooth a style it went poorly. Then I found Nitrosocke's Dreambooth Training Guide and realized my problems were caused by a poorly curated dataset.
What are some alternatives?
- stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
- sd_dreambooth_extension
- xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
- StableTuner - Finetuning SD in style.
- InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
- dreambooth-gui
- Stable-Diffusion-2.0-CPU-or-GPU-Colab-Gradio - Config files for my GitHub profile.
- diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
- stable-diffusion-webui - Stable Diffusion web UI
- invisible-watermark - python library for invisible image watermark (blind image watermark)
- DiffusionToolkit - Metadata-indexer and viewer for AI-generated images