stable-diffusion
| | stable-diffusion | stable-diffusion |
|---|---|---|
| Mentions | 8 | 5 |
| Stars | 436 | 94 |
| Activity | - | - |
| Growth | 0.0 | 0.0 |
| Latest Commit | 12 months ago | about 1 year ago |
| Language | Jupyter Notebook | |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
- DALL·E Now Available Without Waitlist
No, sorry, but there's a whole bunch of one-click things now, I think?
I'm running it on Windows 10 using (a modified version of) https://github.com/bfirsh/stable-diffusion.git and Anaconda to create the environment from their `environment.yaml` (all of which was done using the normal `cmd` shell). Then to use it, I activate that env from `cmd` and switch into cygwin `bash` to run the `txt2img.py` script (because it's easier to script, etc.)
- How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I am using the bfirsh version. And yes, I run "python scripts/txt2img.py" to generate an image.
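The question above comes up because `txt2img.py` does not record the settings it was invoked with alongside each image. A minimal sketch of one way to do that (the `save_run_args` helper and the `run_args.jsonl` filename are hypothetical, not part of the repository):

```python
import json
import sys
from pathlib import Path

def save_run_args(outdir, args_list):
    """Append one JSON line per run so each image's prompt, seed, and
    other flags can be recovered later from run_args.jsonl."""
    log = Path(outdir) / "run_args.jsonl"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write(json.dumps({"argv": args_list}) + "\n")

# Example: record the arguments of the current invocation before generating.
save_run_args("outputs", sys.argv[1:])
```

Calling this at the top of the generation script (or from a small wrapper) leaves an append-only log that maps each run to the exact arguments used.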
- Current canonical way to install Stable Diffusion on Apple Silicon?
Specifically regarding the first option above, I see that the procedure clones the repository from: https://github.com/bfirsh/stable-diffusion.git
- One-Click Install Stable Diffusion GUI App for M1 Mac. No Dependencies Needed
Just done a run on my 3080 under Windows using https://github.com/bfirsh/stable-diffusion.git and it's about 8 iterations/sec when nothing else is using CPU or GPU.
- Using the same seed and same prompt is still resulting in two different images?
I've cloned this repository on my M1 Mac: https://github.com/bfirsh/stable-diffusion/tree/apple-silicon-mps-support
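For the seed question above: with a correctly seeded generator, the same seed must produce bit-identical starting noise, or the final images will diverge even with an identical prompt (nondeterminism in the backend, e.g. some MPS ops at the time, is one way this breaks). A minimal NumPy illustration of the expected behavior (not the repository's actual sampling code):

```python
import numpy as np

def initial_latent(seed, shape=(4, 64, 64)):
    # Seeding a dedicated generator makes the starting noise reproducible;
    # reusing a global, unseeded RNG is a common cause of "same seed,
    # different image".
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_latent(42)
b = initial_latent(42)
assert np.array_equal(a, b)  # identical seed, identical latent
```

If this invariant holds but images still differ, the nondeterminism is downstream of the noise (in the sampler or in individual ops), not in the seeding.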
- Run Stable Diffusion on Your M1 Mac’s GPU
Boom - nice. Here's a fork with that: https://github.com/bfirsh/stable-diffusion/tree/lstein
Requirements are in "requirements-mac.txt", which will need substituting into the guide.
We're testing this out with a few people in Discord before shipping to the blog post.
stable-diffusion
- Fixing excessive contrast/saturation resulting from high CFG scales
I'm using a modified noise schedule (Karras et al, arXiv:2206.00364) taken from the LAION Discord user's fork (here). With that schedule, from their testing and my own, k_heun seems to perform about 3x better than others at equivalent steps (each step takes about 2x longer, but it's still a win). Also it performs well even with as low as 7 steps. I'd be surprised if euler was far superior since from my understanding, heun is basically an improved version of it.
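The schedule referenced in that comment can be sketched directly from Eq. 5 of Karras et al. (arXiv:2206.00364): noise levels are interpolated in sigma^(1/rho) space, which concentrates steps at low noise where fine detail is resolved. Parameter names follow the paper; the default values here are illustrative assumptions, not the fork's settings:

```python
def karras_sigmas(n, sigma_min=0.1, sigma_max=10.0, rho=7.0):
    """Noise schedule from Karras et al. (arXiv:2206.00364), Eq. 5.
    Returns n sigmas decreasing from sigma_max to sigma_min."""
    min_r = sigma_min ** (1 / rho)
    max_r = sigma_max ** (1 / rho)
    # Linear ramp in sigma^(1/rho) space, then raised back to the rho-th power.
    return [(max_r + (i / (n - 1)) * (min_r - max_r)) ** rho for i in range(n)]

sigmas = karras_sigmas(7)  # the comment reports usable results at 7 steps
```

With rho = 7 the spacing is strongly front-loaded toward high noise, which is part of why few-step sampling with this schedule holds up better than a uniform one.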
- Run Stable Diffusion on Your M1 Mac’s GPU
- The 'dummies' are craving an even 'dummier' tutorial (please)
What are some alternatives?
stable_diffusion.openvino
invisible-watermark - python library for invisible image watermark (blind image watermark)
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
sd-webui-colab - A repo for the maintenance of the Colab version of stable-diffusion-webui repo
stable-diffusion-intel-mac
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
gradi
stable-diffusion - A latent text-to-image diffusion model
onnx - Open standard for machine learning interoperability
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.