stablediffusion
waifu-diffusion
| | stablediffusion | waifu-diffusion |
|---|---|---|
| Mentions | 108 | 28 |
| Stars | 36,226 | 1,926 |
| Stars growth (month over month) | 3.5% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 19 days ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stablediffusion
-
Generating AI Images from your own PC
With this tutorial's help, you can generate images with AI on your own computer with Stable Diffusion.
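For readers who want a starting point, here is a minimal local text-to-image sketch using the Hugging Face diffusers library; the library choice, model ID, and prompt are assumptions for illustration, not taken from the tutorial itself.

```python
# Minimal local text-to-image sketch with Hugging Face diffusers.
# Assumes a CUDA GPU and the torch/diffusers packages; the model ID
# and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a mountain lake").images[0]
image.save("output.png")
```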
-
Midjourney
If your PC has a GPU (Nvidia RTX 30 series or newer recommended) with more than 4 GB of VRAM, then try training your own Stable Diffusion model.
-
RuntimeError: Couldn't clone Stable Diffusion.
Command: "git" clone "https://github.com/Stability-AI/stablediffusion.git" "C:\Users\Naveed\Documents\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai"
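If the installer's clone step keeps failing, one common workaround (a sketch, assuming the failure is a transient network issue rather than a broken git install) is to retry the same clone manually into the folder the error names, so git's own error output is visible:

```python
# Sketch: rerun the exact clone the installer attempted.
# The destination path is copied from the error message above.
import subprocess

subprocess.run(
    [
        "git", "clone",
        "https://github.com/Stability-AI/stablediffusion.git",
        r"C:\Users\Naveed\Documents\A1111 Web UI Autoinstaller"
        r"\stable-diffusion-webui\repositories\stable-diffusion-stability-ai",
    ],
    check=True,  # raise CalledProcessError if git fails, exposing the real cause
)
```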
-
What is the currently most efficient distribution of Stable Diffusion?
Automatic1111 and sygil-webui aren't "distributions" of Stable Diffusion. This is a repository with some distributions of Stable Diffusion.
-
Reimagine XL: this is just Controlnet with a credit system right?
New stable diffusion finetune (Stable unCLIP 2.1, Hugging Face) at 768x768 resolution, based on SD2.1-768. This model allows for image variations and mixing operations as described in Hierarchical Text-Conditional Image Generation with CLIP Latents, and, thanks to its modularity, can be combined with other models such as KARLO. Comes in two variants: Stable unCLIP-L and Stable unCLIP-H, which are conditioned on CLIP ViT-L and ViT-H image embeddings, respectively. Instructions are available here.
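For reference, diffusers ships a StableUnCLIPImg2ImgPipeline for this image-variation use. The sketch below is a minimal, hedged example: the model ID matches the unCLIP-H checkpoint on Hugging Face, while the input image path is an illustrative placeholder.

```python
# Hedged sketch: image variations with Stable unCLIP via diffusers.
# Assumes a CUDA GPU; "input.png" is an illustrative placeholder.
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
).to("cuda")

init = load_image("input.png")           # image to make variations of
out = pipe(init, prompt="").images[0]    # empty prompt = pure image variation
out.save("variation.png")
```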
-
Stability AI has released Reimagine XL to make copies of images in one click
This model will soon be open-sourced in StabilityAI’s GitHub.
-
What am I doing wrong please?
Another question, if that's OK: Stable Diffusion 2.0 - https://github.com/Stability-AI/stablediffusion - if I wanted to use that, do I follow their instructions and will it still work on the M1, or would you advise against it?
-
Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Stable Diffusion (2D Image Generation and Animation)
https://github.com/CompVis/stable-diffusion (Stable Diffusion V1)
https://huggingface.co/CompVis/stable-diffusion (Stable Diffusion Checkpoints 1.1-1.4)
https://huggingface.co/runwayml/stable-diffusion-v1-5 (Stable Diffusion Checkpoint 1.5)
https://github.com/Stability-AI/stablediffusion (Stable Diffusion V2)
https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main (Stable Diffusion Checkpoint 2.1)
Stable Diffusion Automatic 1111 WebUI and Extensions
https://github.com/AUTOMATIC1111/stable-diffusion-webui (WebUI - easier to use)
PLEASE NOTE: MANY EXTENSIONS CAN BE INSTALLED FROM THE WEBUI BY CLICKING "AVAILABLE" OR "INSTALL FROM URL", BUT YOU MAY STILL NEED TO DOWNLOAD THE MODEL CHECKPOINTS!
https://github.com/Mikubill/sd-webui-controlnet (ControlNet Extension - use various models to control your image generation; useful for animation and temporal consistency)
https://huggingface.co/lllyasviel/ControlNet/tree/main/models (ControlNet Checkpoints - Canny, Normal, OpenPose, Depth, etc.)
https://github.com/thygate/stable-diffusion-webui-depthmap-script (Depth Map Extension - generate high-resolution depth maps and animated videos, or export to 3D modeling programs)
https://github.com/graemeniedermayer/stable-diffusion-webui-normalmap-script (Normal Map Extension - generate high-resolution normal maps for use in 3D programs)
https://github.com/d8ahazard/sd_dreambooth_extension (DreamBooth Extension - train your own objects, people, or styles into Stable Diffusion)
https://github.com/deforum-art/sd-webui-deforum (Deforum - generate weird 2D animations)
https://github.com/deforum-art/sd-webui-text2video (Deforum Text2Video - generate videos from text prompts using ModelScope or VideoCrafter)
-
Is AI technology really the issue?
Stable Diffusion's code : https://github.com/Stability-AI/stablediffusion
-
I've never seen a YAML file alongside a .ckpt or .safetensors
But if you want to run a 2.x-based model, you'll need to download the corresponding YAML file: either the standard one, v2-inference-v.yaml, from GitHub, or the one distributed with the model if it requires a special one. Rename it to have the same name as the model and place it in the models folder alongside the model.
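A minimal sketch of that file pairing, assuming an AUTOMATIC1111-style models folder; both paths are illustrative:

```python
# Sketch: place a copy of the standard v2-inference-v.yaml next to a 2.x
# model, renamed to match the model file. Paths are illustrative.
from pathlib import Path
import shutil

model = Path("models/Stable-diffusion/my-sd2-model.safetensors")
yaml_src = Path("v2-inference-v.yaml")  # downloaded from the GitHub repo

# produces models/Stable-diffusion/my-sd2-model.yaml
shutil.copy(yaml_src, model.with_suffix(".yaml"))
```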
waifu-diffusion
- What exactly constitutes a new, different model?
- AI: "All Your Horny Belong to Us"
-
Cool Japan Diffusion 2.1.1 has been released! 🎉
It's slander to say that the WD team is milking donations in the same way UD is. They already have a public working model which they are continuing to train. The WD team has also implemented features like aspect ratio bucketing into their custom trainer. If they were really milking the project for donations they would have just used the base compvis trainer it was forked from.
-
From one of the original DreamBooth authors : Stop using SKS as the initializer word
Use one of the original SD repos, or the code for Waifu Diffusion, or the Smirkingface refactor.
- Stable Diffusion links from around October 4, 2022 that I collected for further processing
-
These images of Senko were Generated by AI (Part 2 - Halloween themed)
My model is based off waifu-diffusion.
-
What is the name of the AI that created anime art that people keep talking about?
waifu diffusion (free) and NovelAI (paid, AFAIK) are what I know
-
Any finetuners (actual finetuning, not DreamBooth) here who can help me find a good learning rate and epoch count? At batch size 12/16 (I am using a single A100) with a learning rate of 1e-5/1e-4, I am overfitting (turning all characters into my character), while the style still isn't there yet at epoch 14...
This is the one I use: https://github.com/harubaru/waifu-diffusion
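One knob worth trying before changing the dataset (a hedged sketch, not advice from the thread): a lower learning rate with warmup and cosine decay, here via diffusers' scheduler helper. All numbers are illustrative.

```python
# Hedged sketch: warmup + cosine decay often softens the overfitting seen
# with a flat 1e-5/1e-4 learning rate. The parameter list is a stand-in
# for unet.parameters(); every number here is illustrative.
import torch
from diffusers.optimization import get_scheduler

params = [torch.nn.Parameter(torch.zeros(1))]   # stand-in for unet.parameters()
optimizer = torch.optim.AdamW(params, lr=5e-6)  # lower than the rates that overfit

lr_scheduler = get_scheduler(
    "cosine",
    optimizer=optimizer,
    num_warmup_steps=500,        # gentle ramp-up
    num_training_steps=10_000,   # total optimizer steps for the run
)

# inside the training loop: optimizer.step(); lr_scheduler.step()
```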
-
Is there a way to tell the AI which artstyle to use?
This is called textual inversion, which uses many images and a small group of prompts to describe a single element. There are two methods: with a frozen model and with an unfrozen model. The first one just creates a new "word" to "guide" the model; in other words, a word that means the same as a chunk of a prompt. The resulting data is a tiny file. The second one (better known as DreamBooth training) adds new data to the model, so it can copy the character's characteristics and the artist's style more effectively, resulting in a new model file (ckpt). There isn't a single tutorial, so you can find plenty of info on r/StableDiffusion. Also, there is a third method which is based on the original training (many images, each with its own description). Instructions are here: Training guide. I don't know if this guide is finished, though.
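For the frozen-model method, the resulting tiny embedding file can be loaded at inference time. A minimal sketch with diffusers follows; the embedding filename and trigger token are illustrative placeholders.

```python
# Hedged sketch: load a textual-inversion embedding (the tiny-file,
# frozen-model method described above) and trigger it from the prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "my_style.pt" and "<my-style>" are illustrative placeholders
pipe.load_textual_inversion("my_style.pt", token="<my-style>")

img = pipe("a portrait of a girl in <my-style> style").images[0]
img.save("styled.png")
```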
-
Tsukasa at the Beach (Created by a Waifu-Diffusion AI)
The model is a tweaked version of waifu-diffusion using textual inversion. Still not perfect but the AI can create pretty plausible images.
What are some alternatives?
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
stable-diffusion-webui - Stable Diffusion web UI
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
civitai - A repository of models, textual inversions, and more
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
merge-models - Merges two latent diffusion models at a user-defined ratio
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]