StableTuner
| | prompt-to-prompt | StableTuner |
|---|---|---|
| Mentions | 18 | 22 |
| Stars | 2,860 | 626 |
| Growth | 2.1% | - |
| Activity | 3.7 | 10.0 |
| Latest commit | 3 months ago | about 1 year ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
prompt-to-prompt
- Has google prompt-to-prompt / Cross Attention Control ever been implemented as a plugin for ComfyUI or Automatic1111?
-
[D] CFG role in diffusion vs autoregressive transformers
Found relevant code at https://github.com/google/prompt-to-prompt + all code implementations here
-
Auto1111 Fork with pix2pix
Null text inversion produces almost a perfect textual inversion, and then allows you to edit it with a prompt, like instruct2pix. https://github.com/google/prompt-to-prompt
- Are there ways to use img2img without manually inpainting the clothes of a person, in order to change the type or color of the clothing, etc.? I saw a few people here who were able to detect clothing automatically, any advice is welcome 🙏🏼
-
Artists Tomorrow
First we had Google's prompt to prompt https://github.com/google/prompt-to-prompt
-
Backgrounds HATE me?
Narratives can also work, like "walking down forest path". However, it'll be difficult to keep the character positioned the way you want with that. If you're somewhat technical, you can try using https://github.com/google/prompt-to-prompt to generate someone you like and then see if you can get a better background without changing the character.
- Anybody here looked into and wanna share the major deviations (if any) between Google's implementation of prompt2prompt vs Doggettx's implementation (which was included in Automatic1111's repo as "Prompt Editing" feature)?
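Auto1111's "Prompt Editing" works on a `[from:to:when]` syntax, where `when` is either a fraction of total steps or an absolute step index at which the prompt switches. This is not Doggettx's actual implementation, just a minimal sketch of how such a token could be resolved at a given sampling step; the function name and the `when >= 1` absolute-step rule are assumptions:

```python
import re

def active_prompt(prompt_edit: str, step: int, total_steps: int) -> str:
    """Resolve every [from:to:when] token in an A1111-style prompt edit
    for the given sampling step.

    when < 1  -> interpreted as a fraction of total steps
    when >= 1 -> interpreted as an absolute step index (assumption)
    """
    frac = step / total_steps

    def repl(m: re.Match) -> str:
        before, after, when = m.group(1), m.group(2), float(m.group(3))
        threshold = when if when < 1 else when / total_steps
        # Before the threshold we keep the original phrase, after it we swap.
        return after if frac >= threshold else before

    return re.sub(r"\[([^:\[\]]*):([^:\[\]]*):([\d.]+)\]", repl, prompt_edit)

# Early in sampling the prompt still says "cat"; past the halfway mark, "dog".
print(active_prompt("a [cat:dog:0.5] in the park", 10, 50))  # a cat in the park
print(active_prompt("a [cat:dog:0.5] in the park", 30, 50))  # a dog in the park
```

The key difference from Google's prompt-to-prompt is visible here: this just swaps the conditioning text mid-sampling, whereas the Google method additionally injects the source prompt's cross-attention maps to preserve layout.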
- I did not expect it, but that's the reality now
- Prompt-to-Prompt: Latent Diffusion and Stable Diffusion Implementation
-
[R] can diffusion model be used for domain adaptation?
Google has a nice paper on text-guided image2image translation by inferring the (random) init image and changing the prompt: https://github.com/google/prompt-to-prompt
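The mechanism behind that paper is cross-attention map injection: generate with the edited prompt, but reuse the attention maps computed for the source prompt so the spatial layout survives the edit. A toy numpy sketch of that single trick (shapes, names, and random data are illustrative, not the paper's code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V, attn_override=None):
    """Scaled dot-product cross-attention. If attn_override is given, the
    computed attention maps are discarded and the injected ones are used
    instead -- the core prompt-to-prompt trick."""
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))
    if attn_override is not None:
        attn = attn_override  # keep the source prompt's spatial layout
    return attn @ V, attn

# Toy shapes: 4 image "pixels" attending over 3 prompt tokens, dim 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))                                  # image queries
K_src, V_src = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))    # source prompt
K_edit, V_edit = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))  # edited prompt

_, src_maps = cross_attention(Q, K_src, V_src)               # pass 1: record maps
out_edit, _ = cross_attention(Q, K_edit, V_edit, src_maps)   # pass 2: inject them
```

So the edited prompt supplies the *values* (what appears) while the source prompt supplies the *attention maps* (where it appears); in the real method this injection happens inside the diffusion U-Net's cross-attention layers over many denoising steps.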
StableTuner
- What is the best way to train a Stable Diffusion model on a huge dataset?
- How to fine-tune a Stable Diffusion model with hundreds or thousands of images?
- SD fine-tuning methods compared: a benchmark
-
After so many errors with Dreambooth, Everydream2 is the way to go
Of all the dreamboothing/finetuning implementations I tried, I liked StableTuner the most. Might be worth giving it a shot to compare as well.
-
Non-technical tips for ideal training of Stable Diffusion through Dreambooth?
Largest I've gone is about 100 images for objects or people. I don't think it matters, though it can be a hassle setting up and resuming the training session each time if you're doing small sessions. StableTuner can simplify all of this by helping you set everything up through their locally installed client. You can then easily do your training locally in short sessions, or have it automatically packed up and exported to Colab or another GPU hosting service, also with the ability to train in short sessions. It's a smart way to manage large training projects like yours. It requires a bit of time to set up, but most folks who have already played around with Dreambooth should be able to navigate their way through easily enough. It has all the other training methods built in too, including proper fine-tuning: https://github.com/devilismyfriend/StableTuner
-
Alternative tools to fine tune stable diffusion models?
Some people also like StableTuner: https://github.com/devilismyfriend/StableTuner
- Question about specific character training
-
Finetuning Inpainting model
Stable Tuner seems like it's setup to allow training on regular/inpaint/depth models. https://github.com/devilismyfriend/StableTuner
-
The next best alternative to Auto1111??
StableTuner is an alternative to the sd_dreambooth plugin. It can do Dreambooth and fine-tuning (I haven't tried the latter, but I think it's embeddings). It uses diffusers but will convert between that format and ckpt files, is for Windows/Nvidia, and uses a local app instead of a webapp. This is the only local Dreambooth I've done successfully. You'll need to go to their Discord for help, but it's not hard to use.
-
Auto1111 Fork with pix2pix
Dreambooth has better results in older commits. StableTuner is better for training: https://github.com/devilismyfriend/StableTuner
What are some alternatives?
jukebox - Code for the paper "Jukebox: A Generative Model for Music"
EveryDream2trainer
cycle-diffusion - [ICCV 2023] A latent space for stochastic diffusion models
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
StyleGAN-nada
LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
stable-diffusion-webui - Stable Diffusion web UI
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
EveryDream-trainer - General fine tuning for Stable Diffusion
stable-diffusion-webui-pix2pix - Stable Diffusion web UI
dreambooth-training-guide