Nope. There are instructions for Windows, Linux and Apple Silicon in the readme: https://github.com/AUTOMATIC1111/stable-diffusion-webui
There's also this other fork, which also has a Colab notebook ready to run, and it's way, way faster than the KerasCV version: https://github.com/TheLastBen/fast-stable-diffusion
(It also has many, many more options and some nice, user-friendly GUIs. It's the best version for Google Colab!)
Not an M1 comparison, but I'm working on testing various GPU vs. M1 comparisons with a few accessible cloud providers. My impression is that times should be about the same, but it's nice to hear other real-world stats for SD on M1. Makes me really want to rent the Hetzner M1 now.
Which repo or build are you using, BTW? Is it the one related to this readme?
https://github.com/magnusviri/stable-diffusion/blob/main/REA...
While AUTOMATIC is certainly popular, calling it the most active/popular would be ignoring the community working on Invoke. Forks don’t lie.
https://github.com/invoke-ai/InvokeAI
On an Intel MacBook Pro, CPU-only, the original implementation[1] using PyTorch utilized only one core. A TensorFlow implementation[2] with oneDNN support, which utilized most of the cores, ran at ~11 sec/iteration. Another OpenVINO-based implementation[3] ran at ~6.0 sec/iteration.
[1] https://github.com/CompVis/stable-diffusion/
[2] https://github.com/divamgupta/stable-diffusion-tensorflow/
[3] https://github.com/bes-dev/stable_diffusion.openvino/
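For what it's worth, the sec/iteration figures above are just wall-clock time per denoising step. A minimal sketch of how you might measure that yourself (the `step_fn` here is a hypothetical stand-in for one real model step, not part of any of the repos above):

```python
import time

def sec_per_iteration(step_fn, n_iters=5):
    """Average wall-clock seconds per call to step_fn.

    Same metric as the ~11 sec/it and ~6.0 sec/it numbers above.
    step_fn is a placeholder for a single diffusion denoising step.
    """
    start = time.perf_counter()
    for _ in range(n_iters):
        step_fn()
    return (time.perf_counter() - start) / n_iters

# Hypothetical usage with a dummy CPU-bound "step":
if __name__ == "__main__":
    rate = sec_per_iteration(lambda: sum(i * i for i in range(100_000)))
    print(f"~{rate:.3f} sec/iteration")
```

With a real pipeline you'd pass in one call of its sampling loop instead of the dummy lambda; averaging over several iterations smooths out warm-up noise.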
> This is by far the most popular and active right now: https://github.com/AUTOMATIC1111/stable-diffusion-webui
While technically the most popular, I really wouldn't call it "by far". This one is a very close second (500 vs 580 forks): https://github.com/sd-webui/stable-diffusion-webui/tree/dev