stable-diffusion (DISCONTINUED) vs k-diffusion
| | stable-diffusion | k-diffusion |
|---|---|---|
| Mentions | 142 | 20 |
| Stars | 2,438 | 2,006 |
| Growth | - | - |
| Activity | 9.8 | 8.9 |
| Latest commit | over 1 year ago | about 2 months ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- It's time!
- Anybody running SD on a MacBook Pro? What are you using, and how did you install it?
Yes, you can install it with Python! https://github.com/lstein/stable-diffusion works with macOS, and you can control all the common parameters via its WebUI or CLI :)
- How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I'm using the lstein fork ("dream"), and when I create an image from the terminal, it also writes back to the terminal like this:
- I Resurrected “Ugly Sonic” with Stable Diffusion Textual Inversion
Stable Diffusion is wild - the space has been developing quickly, and watching the pace of development makes me reconsider what I consider "staggering". I've been blown away. The accessibility of this technology is even more incredible - there's even a fork that works on M1 Macs (https://github.com/lstein/stable-diffusion)
We are in for some interesting times. Whatever the next iteration of Textual Inversion is will be extremely disruptive, especially if the concepts continue to be developed collectively.
- AI Seamless Texture Generator Built-In to Blender
Oh, it generates from a text prompt, not a sample texture. I thought this was just a tool to generate wrapped textures from non-wrapped ones.
The licensing is a mess. The Blender plug-in is GPL 3, the stable diffusion code is MIT, and the weights for the model have a very restrictive custom license.[1] Whether the weights, which are program-generated, are copyrightable is a serious legal question.
[1] https://github.com/lstein/stable-diffusion/blob/61f46cac31b5...
> Whenever I ask for something like ‘seamless tiling xxxxxx’ it kinda sorta gets the idea, but the resulting texture doesn’t quite tile right.
Getting seamless tiling requires more than just having "seamless tiling" in the prompt. It also depends on whether the fork you're using has that feature at all.
https://github.com/lstein/stable-diffusion has the feature, but you need to pass it outside the prompt. So if you use the `dream.py` prompt CLI, you can pass it `"Hats on the ground" --seamless` and it should be perfectly tileable.
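For the curious, the usual trick behind a `--seamless` flag in SD forks is to switch the model's convolutions to circular padding so the image wraps around at the edges. A minimal PyTorch sketch (the function name is mine, not the fork's):

```python
import torch.nn as nn

def make_seamless(model: nn.Module) -> None:
    # Circular padding makes each conv wrap around at the borders,
    # so the generated texture tiles without visible seams.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            module.padding_mode = "circular"
```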
- Auto SD Workflow - Update 0.2.0 - "Collections", Password Protection, Brand new UI + more
From https://github.com/lstein/stable-diffusion
Yes, it works perfectly fine as of 1e8e5245ebca5211e271f35a3a849dee8f4793d2, which contains the performance improvements. It probably works fine with later commits too, but I haven't personally tested them, so I won't vouch for it.
k-diffusion
- Fooocus: OSS of prompts and generations based on A1111 and ComfyUI
Here's my attempt at an explanation without jargon. You can just read the last paragraph; the first four are just context.
These image models are trained on 1000 steps of noise, where at 0 no noise is added to the training image and at 1000 the image is pure noise. The model's goal is to denoise the image, and it does this knowing how much noise the image has; this lets the model learn how much it should change the image. For example, at high noise it changes a lot of pixels and starts building the overall "structure" of the image, and at low noise it changes fewer pixels and focuses on adding details.
To use the model, you start with pure noise, then the model iteratively denoises it until a clean image shows up. A naive approach would take 1000 steps: you run the model 1000 times, each time feeding in the previous result and telling the model that the noise decreased by 1, until it reaches 0 noise. This takes a long time, up to 15 minutes to generate an image on a mid-range consumer GPU.
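That naive loop, as a sketch (the `model` call here is a placeholder for "predict a slightly less noisy image", not any particular library's API):

```python
import torch

def naive_denoise(model, shape, num_steps=1000):
    x = torch.randn(shape)             # start from pure noise
    for t in range(num_steps, 0, -1):  # 1000, 999, ..., 1
        x = model(x, t)                # one small denoising step
    return x
```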
Turns out when you give the model pure noise and tell it there are 1000 steps of noise, the result is not an image that has 999 steps of noise, but an image that looks like it has much less. This means you can probably skip 50-100 steps of denoising per iteration and still get a very good picture; the issue is which steps to pick. You could again take a naive approach and just sample every 50th step for a total of 20 steps, but it turns out there are better ways.
This is where samplers come in. Essentially, a sampler takes the number of steps you want to use to denoise an image (usually ~20) and - among other things - picks which steps to use at each iteration. The most popular samplers are the ones in the k-diffusion repo[1], or k-samplers for short. Do note that samplers do much more than just pick the steps: they are actually responsible for the denoising process itself, and some of them even add a small amount of noise back in after a denoising step, among other things.
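As a concrete example of a "better way" to space the steps, here's the Karras et al. (2022) noise schedule that k-diffusion ships as `get_sigmas_karras`, reimplemented standalone as a sketch (the sigma_min/sigma_max defaults below are typical Stable Diffusion values, assumed for illustration):

```python
import torch

def sigmas_karras(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    # Interpolate from sigma_max down to sigma_min in rho-warped space:
    # steps are spaced coarsely at high noise and finely at low noise.
    ramp = torch.linspace(0, 1, n)
    max_inv = sigma_max ** (1 / rho)
    min_inv = sigma_min ** (1 / rho)
    sigmas = (max_inv + ramp * (min_inv - max_inv)) ** rho
    # k-diffusion appends a final 0 so the last step lands on a clean image.
    return torch.cat([sigmas, sigmas.new_zeros([1])])
```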
The newest open source model, SDXL, is actually two models: a base model that generates images as normal, and a refiner model that specializes in adding details to images. A typical workflow is to ask the base model for 25 steps of denoising but only run the first 20, then use the refiner model to do the rest. According to the OP, this was being done without keeping the state of the sampler; that is, they were running two samplers separately, one for the base model, then starting a fresh one for the refiner model. Since the samplers use historical data for optimization, the end result was not ideal.
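One way to express that base/refiner handoff, using Hugging Face diffusers rather than the OP's code (the 0.8 split mirrors the 20-of-25-steps example above):

```python
from diffusers import (StableDiffusionXLPipeline,
                       StableDiffusionXLImg2ImgPipeline)

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0")

prompt = "a photo of an astronaut riding a horse"

# The base model handles the first 80% of the noise schedule...
latents = base(prompt, num_inference_steps=25,
               denoising_end=0.8, output_type="latent").images
# ...and the refiner resumes from the same point in the schedule,
# instead of starting its own schedule from scratch.
image = refiner(prompt, num_inference_steps=25,
                denoising_start=0.8, image=latents).images[0]
```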
- Why does UniPC sampler use DDIM for Hires Fix?
- Image editing with just a text prompt. New InstructPix2Pix paper. Demo link in comments
git clone https://github.com/crowsonkb/k-diffusion.git
- Can anyone explain the differences between sampling methods and their uses to me in simple terms? All the info I've found so far is either contradictory or so complex it goes over my head.
Almost all other samplers come from work done by @RiversHaveWings (Katherine Crowson), which is mostly contained in this repository. She is listed as the principal researcher at Stability AI. Her notes for those samplers are as follows:
- AUTOMATIC1111 added more samplers, so here's a creepy clown comparison
- Sampler Comparison (incl. K-Diffusion)
There's an implementation of the other samplers in the k-diffusion repo. For one integrated with Stable Diffusion, I'd check out this fork of stable that has the files txt2img_k and img2img_k. To use a different sampler, just change "K.sampling.sample_lms" on line 276 of img2img_k, or line 285 of txt2img_k, to a different sampler, e.g. K.sampling.sample_dpm_2_ancestral.
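For illustration, the swap looks like this. The toy denoiser below just stands in for the wrapped SD model that the fork's scripts pass in, since k-diffusion samplers only need a callable `model(x, sigma)`; everything except the `K.sampling.*` calls is assumed for the sketch:

```python
import torch
import k_diffusion as K

def toy_model(x, sigma):
    # Crude stand-in denoiser (Gaussian prior); the real scripts
    # pass the wrapped Stable Diffusion model here instead.
    return x / (1 + sigma.reshape(-1, 1, 1, 1) ** 2)

sigmas = K.sampling.get_sigmas_karras(n=20, sigma_min=0.03, sigma_max=14.6)
x = torch.randn(1, 4, 64, 64) * sigmas[0]

# The line being edited: replace sample_lms with any other sampler
# sharing the same signature, e.g. sample_dpm_2_ancestral.
samples = K.sampling.sample_dpm_2_ancestral(toy_model, x, sigmas)
```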
- Randomizing Seeds when running locally
There's a Colab version with it, but not a GitHub optimizedSD one. I found this, but I have no idea how to get them working together.
- Dreambot clone available for running stable-diffusion on local GPU
Are you planning on adding the k_lms sampler (https://github.com/crowsonkb/k-diffusion) that is the default sampler for the Discord bot? I noticed the current official Stable Diffusion repo doesn't have it and defaults to PLMS.
What are some alternatives?
waifu-diffusion - stable diffusion finetuned on weeb stuff
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
stable-diffusion-webui - Stable Diffusion web UI
diffusers-uncensored - Uncensored fork of diffusers
txt2imghd - A port of GOBIG for Stable Diffusion
dream-textures - Stable Diffusion built-in to Blender
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
stable-diffusion - A latent text-to-image diffusion model
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
stable-diffusion - k_diffusion wrapper included for k_lms sampling. fixed for notebook.