| | stablediffusion | EveryDream-trainer |
|---|---|---|
| Mentions | 108 | 32 |
| Stars | 36,333 | 501 |
| Growth | 1.8% | - |
| Activity | 0.0 | 2.4 |
| Latest commit | about 1 month ago | about 1 year ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stablediffusion
-
Generating AI Images from your own PC
With this tutorial's help, you can generate images with AI on your own computer with Stable Diffusion.
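For reference, a minimal sketch of local image generation using the Hugging Face diffusers library (an assumption; the linked tutorial may use a different setup, such as a webui):

```python
# Sketch only: assumes the diffusers library, not the tutorial's exact setup.
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion 2.1 from the Hugging Face Hub; fp16 keeps VRAM usage lower.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # requires an NVIDIA GPU; "cpu" also works but is much slower

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```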
-
Midjourney
If your PC has a GPU (Nvidia RTX 30-series or newer recommended) with more than 4 GB of VRAM, try training your own Stable Diffusion model.
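Before committing to a training run, a quick way to check how much VRAM is available (a minimal sketch, assuming PyTorch is installed):

```python
import torch

# Report the first GPU's total VRAM and compare it against the ~4 GB suggested minimum.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    print("Enough VRAM to try training" if vram_gb > 4 else "Below the suggested 4 GB minimum")
else:
    print("No CUDA-capable GPU detected")
```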
-
RuntimeError: Couldn't clone Stable Diffusion.
Command: "git" clone "https://github.com/Stability-AI/stablediffusion.git" "C:\Users\Naveed\Documents\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai"
-
What is the currently most efficient distribution of Stable Diffusion?
Automatic1111 and sygil-webui aren't "distributions" of Stable Diffusion. This is a repository with some distributions of Stable Diffusion.
-
Reimagine XL: this is just Controlnet with a credit system right?
New stable diffusion finetune (Stable unCLIP 2.1, Hugging Face) at 768x768 resolution, based on SD2.1-768. This model allows for image variations and mixing operations as described in Hierarchical Text-Conditional Image Generation with CLIP Latents, and, thanks to its modularity, can be combined with other models such as KARLO. Comes in two variants: Stable unCLIP-L and Stable unCLIP-H, which are conditioned on CLIP ViT-L and ViT-H image embeddings, respectively. Instructions are available here.
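The repository's own instructions use its bundled scripts, but as an illustration of the image-variation use case, the unCLIP-H checkpoint can also be driven through the diffusers port (the pipeline and model ID below are assumptions based on the Hugging Face release):

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

# Stable unCLIP-H (conditioned on CLIP ViT-H image embeddings); the -L variant
# is published separately. fp16 + CUDA keeps memory usage manageable.
pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.jpg")        # the image to make variations of
variation = pipe(init_image).images[0]      # a text prompt can optionally be passed as well
variation.save("variation.png")
```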
-
Stability AI has released Reimagine XL to make copies of images in one click
This model will soon be open-sourced on StabilityAI's GitHub.
-
What am I doing wrong please?
Another question, if that's OK? Stable Diffusion 2.0 - https://github.com/Stability-AI/stablediffusion - if I wanted to use that, do I follow their instructions and will it still work on the M1, or would you advise against it?
-
Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Stable Diffusion (2D Image Generation and Animation)
- https://github.com/CompVis/stable-diffusion (Stable Diffusion V1)
- https://huggingface.co/CompVis/stable-diffusion (Stable Diffusion Checkpoints 1.1-1.4)
- https://huggingface.co/runwayml/stable-diffusion-v1-5 (Stable Diffusion Checkpoint 1.5)
- https://github.com/Stability-AI/stablediffusion (Stable Diffusion V2)
- https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main (Stable Diffusion Checkpoint 2.1)
Stable Diffusion Automatic 1111 WebUI and Extensions
- https://github.com/AUTOMATIC1111/stable-diffusion-webui (WebUI - easier to use)
PLEASE NOTE: MANY EXTENSIONS CAN BE INSTALLED FROM THE WEBUI BY CLICKING "AVAILABLE" OR "INSTALL FROM URL", BUT YOU MAY STILL NEED TO DOWNLOAD THE MODEL CHECKPOINTS!
- https://github.com/Mikubill/sd-webui-controlnet (ControlNet Extension - use various models to control your image generation, useful for animation and temporal consistency)
- https://huggingface.co/lllyasviel/ControlNet/tree/main/models (ControlNet Checkpoints - Canny, Normal, OpenPose, Depth, etc.)
- https://github.com/thygate/stable-diffusion-webui-depthmap-script (Depth Map Extension - generate high-resolution depth maps and animated videos, or export to 3D modeling programs)
- https://github.com/graemeniedermayer/stable-diffusion-webui-normalmap-script (Normal Map Extension - generate high-resolution normal maps for use in 3D programs)
- https://github.com/d8ahazard/sd_dreambooth_extension (DreamBooth Extension - train your own objects, people, or styles into Stable Diffusion)
- https://github.com/deforum-art/sd-webui-deforum (Deforum - generate weird 2D animations)
- https://github.com/deforum-art/sd-webui-text2video (Deforum Text2Video - generate videos from text prompts using ModelScope or VideoCrafter)
-
Is AI technology really the issue?
Stable Diffusion's code: https://github.com/Stability-AI/stablediffusion
-
I've never seen a YAML file alongside a .ckpt or .safetensors
But if you want to run a 2.x-based model, you'll need to download the corresponding YAML file (either the standard one, v2-inference-v.yaml, from GitHub, or the one distributed with the model if it requires a special one), rename it to have the same name as the model, and place it in the models folder alongside the model.
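As a minimal sketch of that procedure (the paths and filenames below are placeholders, not from the post):

```python
import shutil
from pathlib import Path

# Hypothetical 2.x checkpoint inside the webui's model folder.
model = Path("models/Stable-diffusion/my-sd21-model.safetensors")
# The standard SD 2.x config downloaded from GitHub.
yaml_src = Path("v2-inference-v.yaml")

# The webui looks for a .yaml with the same name as the model, in the same folder.
shutil.copy(yaml_src, model.with_suffix(".yaml"))
```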
EveryDream-trainer
- How should I train Dreambooth to understand a new class?
- SDTools v1.5
-
Guide on finetuning a model with mid-sized dataset of family pictures
https://github.com/victorchall/EveryDream-trainer Haven't tried it myself.
-
I've been collecting millions of images of only public domain /cc0 licensing. I'd like to train a stable diffusion model on the collection. Could some one share their knowledge of what this would take? Otherwise, simply enjoy my library.
In terms of training, you've got some really good links and comments pointing to YouTube tutorials, but if you're interested in more information about finetuning a model (as opposed to training from scratch), this is a good repo that has a lot of tools for finetuning, including an auto-captioner using BLIP and automatic file renaming. This is the actual finetuning repo.
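For a sense of what the BLIP auto-captioning step boils down to, here is a minimal sketch using the transformers library (an illustration only, not the EveryDream tools' own script):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Base BLIP captioning model from the Hugging Face Hub.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")   # placeholder filename
inputs = processor(image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(out[0], skip_special_tokens=True))  # the generated caption
```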
-
Alternative tools to fine tune stable diffusion models?
Every Dream Trainer: basically Dreambooth combined with fine-tuning, so you can train multiple things and a lot of images: https://github.com/victorchall/EveryDream-trainer
-
Training with Dreambooth Models and/or Training with Automatic 1111 Textural Inversion
If you have the GPU for it, I'd recommend training all three things at once with (for example) https://github.com/victorchall/EveryDream-trainer. It recommends using "ground truth" training images - i.e. images from LAION-5B, which Stable Diffusion was originally trained on - to get better prior preservation (retaining the flexibility of the original model) while incorporating new concepts, potentially even several different concepts in a single training run.
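For context on what "prior preservation" means here, a minimal sketch of the DreamBooth-style combined loss (arXiv:2208.12242); EveryDream mixes real "ground truth" images into the batch rather than generated class images, but the intent is the same:

```python
import torch.nn.functional as F

def combined_loss(unet, noisy_latents, timesteps, text_emb, target_noise,
                  prior_latents, prior_timesteps, prior_text_emb, prior_target_noise,
                  prior_weight=1.0):
    # Diffusion (noise-prediction) loss on the new concept's images.
    pred = unet(noisy_latents, timesteps, text_emb).sample
    instance_loss = F.mse_loss(pred, target_noise)

    # Same loss on class / "ground truth" images, which keeps the model close
    # to its original behaviour (prior preservation).
    prior_pred = unet(prior_latents, prior_timesteps, prior_text_emb).sample
    prior_loss = F.mse_loss(prior_pred, prior_target_noise)

    return instance_loss + prior_weight * prior_loss
```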
-
Flexible-Diffusion. My first experiment with finetuning. A broad model with better general aesthetics and coherence for different styles! Scroll for 1.5 vs FlexibleDiffusion grids. (BTW, PublicPrompts.art is back!!!)
I used about 300 captioned images (mainly beautiful MJ stuff), and used https://github.com/victorchall/EveryDream-trainer on RunPod for finetuning
- What do you think is the right dataset size to train/refine on dreambooth?
-
Practice your Christmas cookies before you bake with this SD 1.5 model
SD 1.5 512x512 model for making Christmas-style cookies of whatever you'd like. Trained on 30 512x512 images with manual captions in EveryDream.
-
Guide for train/finetune with different image sizes, not dreambooth
This is the good one: https://github.com/victorchall/EveryDream-trainer
What are some alternatives?
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
kohya_ss
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
StableTuner - Finetuning SD in style.
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
EveryDream - Advanced fine tuning tools for vision models
civitai - A repository of models, textual inversions, and more
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
EveryDream2trainer
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
stable-diffusion-webui - Stable Diffusion web UI