| | k-diffusion | latent-diffusion |
|---|---|---|
| Mentions | 20 | 70 |
| Stars | 2,078 | 10,622 |
| Growth | - | 2.8% |
| Activity | 8.4 | 0.0 |
| Latest commit | 6 days ago | 2 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k-diffusion
-
Fooocus: OSS of prompts and generations based on A1111 and ComfyUI
Here's my attempt at an explanation without jargon. You can just read the last paragraph; the first four are just context.
These image models are trained on 1000 steps of noise, where at step 0 no noise is added to the training image and at step 1000 the image is pure noise. The model's goal is to denoise the image, and it does this knowing how much noise the image has. That lets the model learn how much it should change the image: at high noise it changes a lot of pixels and starts building the overall "structure" of the image, and at low noise it changes fewer pixels and focuses on adding details.
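As a rough illustration of that training setup, here is the standard DDPM-style forward noising step in PyTorch (a sketch of the general technique, not any particular repo's code; the schedule values are typical defaults):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # how much noise each step adds
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # fraction of signal left at step t

def add_noise(x0, t):
    """Noise a clean image x0 to step t: t=0 is nearly clean, t=999 is almost pure noise."""
    noise = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * noise
```

The model is then trained to recover the clean image (or, equivalently, predict the noise) given `add_noise(x0, t)` and `t`.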
To use the model you start with pure noise, and the model iteratively denoises it until a clean image shows up. A naive approach would take 1000 steps: you run the model 1000 times, each time feeding in the previous result and telling the model that the noise decreased by 1, until it reaches 0 noise. This takes a long time, up to 15 minutes to generate an image on a mid-range consumer GPU.
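That naive loop is trivial to write down; `denoise_step` here is a hypothetical wrapper around one model call:

```python
import torch

def generate_naive(denoise_step, shape=(1, 3, 512, 512), T=1000):
    """One model call per noise level: 1000 calls total.

    denoise_step(x, t) is a hypothetical function that takes an image at
    noise level t and returns a slightly cleaner one."""
    x = torch.randn(shape)           # start from pure noise (t = T)
    for t in reversed(range(T)):     # t = 999, 998, ..., 0
        x = denoise_step(x, t)
    return x
```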
It turns out that when you give the model pure noise and tell it there are 1000 steps of noise, the result is not an image with 999 steps of noise but one that looks like it has much less. This means you can probably skip 50-100 steps of denoising per iteration and still get a very good picture. The issue is: which steps do you pick? You could again take a naive approach and just jump 50 steps at a time for a total of 20 steps, but it turns out there are better ways.
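The naive version of that choice is a single line; the smarter schedules discussed next replace exactly this:

```python
import numpy as np

# Visit 20 of the 1000 noise levels, evenly spaced, ending at 0 (fully clean).
timesteps = np.linspace(999, 0, 20).round().astype(int)
# -> [999, 946, 894, ..., 53, 0]
```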
This is where samplers come in. Essentially, a sampler takes the number of steps you want to spend denoising an image (usually ~20) and will, among other things, pick which steps to use on each iteration. The most popular samplers are the ones in the k-diffusion repo[1], or k-samplers for short. Do note that samplers do much more than just pick the steps: they are actually responsible for the denoising process itself, and some of them even add a small amount of noise back after a denoising step, among other things.
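Concretely, k-diffusion exposes both the step schedule and the samplers as plain functions. A minimal sketch with a stand-in denoiser (a real pipeline would wrap Stable Diffusion's model, e.g. with `k_diffusion.external.CompVisDenoiser`; the sigma range shown is a typical SD v1 value):

```python
import torch
from k_diffusion import sampling

def model(x, sigma, **kwargs):
    # Stand-in denoiser so the sketch runs; returns its guess of the clean image.
    return torch.zeros_like(x)

# Karras et al. (2022) schedule: 20 noise levels, spaced densely at low noise
# (details) and sparsely at high noise (structure).
sigmas = sampling.get_sigmas_karras(n=20, sigma_min=0.03, sigma_max=14.6, device="cpu")

x = torch.randn(1, 4, 64, 64) * sigmas[0]              # pure noise at the top level
latents = sampling.sample_dpmpp_2m(model, x, sigmas)   # the sampler owns the whole loop
```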
The newest open-source model, SDXL, is actually two models: a base model that can generate images as normal, and a refiner model that specializes in adding details to images. A typical workflow is to ask the base model for 25 steps of denoising but only run the first 20, then use the refiner model to do the rest. According to the OP, this was being done without keeping the state of the sampler; that is, they were running two samplers separately: one for the base model, then starting over with a fresh one for the refiner model. Since the samplers use historical data for optimization, the end result was not ideal.
[1] https://github.com/crowsonkb/k-diffusion
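A hedged sketch of the shared-schedule version of that workflow, using a stateless sampler (Euler) so the hand-off is exact; `base_model` and `refiner_model` are stand-ins for the wrapped SDXL denoisers. For multistep samplers you would also have to carry the sampler's internal history across the split, which is the state the OP says was being dropped:

```python
import torch
from k_diffusion import sampling

def base_model(x, sigma, **kwargs):
    return torch.zeros_like(x)      # stand-in for the wrapped SDXL base denoiser

def refiner_model(x, sigma, **kwargs):
    return torch.zeros_like(x)      # stand-in for the wrapped SDXL refiner denoiser

# One 25-step schedule shared by both models, instead of two fresh samplers.
sigmas = sampling.get_sigmas_karras(n=25, sigma_min=0.03, sigma_max=14.6, device="cpu")

x = torch.randn(1, 4, 128, 128) * sigmas[0]
x = sampling.sample_euler(base_model, x, sigmas[:21])      # first 20 steps: structure
x = sampling.sample_euler(refiner_model, x, sigmas[20:])   # last 5 steps: details
# sigmas[20:] starts exactly where the base model stopped, so the noise
# levels line up across the hand-off.
```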
-
Is it possible to install dpm++ 2s a karras on InvokeAI? 🙏
I believe all the advanced samplers are defined upstream in this repo by crowsonkb. As for "loading them" into Invoke, you would need to modify the InvokeAI source code to define new samplers. The good news is that since it's all in Python, you don't need to do any compiling.
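For what it's worth, in k-diffusion "DPM++ 2S a Karras" is just the `sample_dpmpp_2s_ancestral` function run on a Karras sigma schedule, so wiring it in mostly means calling something like the following wherever Invoke runs its denoising loop (stand-in `model` and values, for illustration only):

```python
import torch
from k_diffusion import sampling

def model(x, sigma, **kwargs):
    return torch.zeros_like(x)  # stand-in; Invoke would pass its wrapped UNet here

sigmas = sampling.get_sigmas_karras(n=20, sigma_min=0.03, sigma_max=14.6, device="cpu")
x = torch.randn(1, 4, 64, 64) * sigmas[0]
latents = sampling.sample_dpmpp_2s_ancestral(model, x, sigmas)  # "DPM++ 2S a Karras"
```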
-
Why does UniPC sampler use DDIM for Hires Fix?
-
Can someone ELI5 the differences between samplers?
The K Diffusion samplers are probably the most advanced currently.
-
Is there a resource that has a list of samplers for SD? Like https://upscale.wiki/wiki/Model_Database for upscalers?
I don't know of any Sampler that is not already in A1111, and this is the closest thing to a "list of Samplers for SD".
-
Different Samplers?
This is the main source of all the samplers we see in the various SD UIs. The source code has references to the published papers behind the samplers. Aside from this, I haven't found a wiki for them.
-
Image editing with just a text prompt. New InstructPix2Pix paper. Demo link in comments
git clone https://github.com/crowsonkb/k-diffusion.git
-
The sampler vibe started with LMS, then there was a big migration to Euler A. Are many now moving to DPM++ (e.g. DPM++ 2S a Karras), and why?
I'm curious to see what drives these choices. I think LMS was the default in DreamStudio when Stable Diffusion was released. Then Euler A became the default in AUTOMATIC1111, which I think explains a lot. But now that many people are more literate about samplers, it looks like these decisions are more deliberate. With a lot more samplers implemented in https://github.com/crowsonkb/k-diffusion and added to AUTOMATIC1111, is speed the main driver (DPM++ is largely about speed, https://arxiv.org/abs/2211.01095)? What about image quality? What are your thoughts?
-
Can anyone explain the differences between sampling methods and their uses to me in simple terms? All the info I've found so far is either very contradictory or too complex and goes over my head
Almost all other samplers come from work done by @RiversHaveWings, a.k.a. Katherine Crowson, and are mostly contained in her work at this repository. She is listed as the principal researcher at Stability AI. Her notes for those samplers are as follows:
-
k-diffusion: Karras et al. (2022) diffusion models for PyTorch
latent-diffusion
-
SDXL: The next generation of Stable Diffusion models for text-to-image synthesis
Stable Diffusion XL (SDXL) is the latest text-to-image generation model developed by Stability AI, based on latent diffusion techniques. SDXL has the potential to create highly realistic images for the media, entertainment, education, and industry domains, opening up new practical uses of AI imagery.
-
Is it possible to create a checkpoint from scratch?
Here's a link to the early latent-diffusion repo, which might be able to create a blank model (I haven't tested it): https://github.com/CompVis/latent-diffusion
-
Anything better than pix2pixHD?
Latent diffusion could work for you: https://github.com/CompVis/latent-diffusion (https://arxiv.org/abs/2112.10752)
-
Image Upscaler AI
There are a lot, but the one implemented as LDSR in most Stable Diffusion GUIs is this one: https://github.com/CompVis/latent-diffusion
-
I've been collecting millions of images with only public-domain/CC0 licensing. I'd like to train a Stable Diffusion model on the collection. Could someone share their knowledge of what this would take? Otherwise, simply enjoy my library.
CompVis/latent-diffusion: High-Resolution Image Synthesis with Latent Diffusion Models (github.com)
-
Run Clip on iPhone to Search Photos
The "retrieval based model" refers to https://github.com/CompVis/latent-diffusion#retrieval-augmen..., which uses ScaNN to train a knn embedding searcher.
-
Class Action Lawsuit filed against Stable Diffusion and Midjourney.
Stability is basically https://github.com/CompVis/latent-diffusion + training data.
-
[D] Influential papers round-up 2022. What are your favorites?
Found relevant code at https://github.com/CompVis/latent-diffusion + all code implementations here
-
Can anyone explain the differences between sampling methods and their uses to me in simple terms? All the info I've found so far is either very contradictory or too complex and goes over my head
DDIM and PLMS were the original samplers. They were part of Latent Diffusion's repository. Their names come from the papers that introduced them: Denoising Diffusion Implicit Models and Pseudo Numerical Methods for Diffusion Models on Manifolds.
-
AI art is very dystopian.
yes, https://github.com/CompVis/latent-diffusion
What are some alternatives?
stable-diffusion - k_diffusion wrapper included for k_lms sampling. fixed for notebook.
disco-diffusion
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
dalle-mini - DALL·E Mini - Generate images from a text prompt
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
hent-AI - Automation of censor bar detection
Fooocus - Focus on prompting and generating
dalle-2-preview
instruct-pix2pix
stable-diffusion
dpm-solver - Official code for "DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps" (NeurIPS 2022 Oral)
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch