stable-diffusion
Discontinued: This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI] (by lstein)
-
stable-diffusion-webui
Discontinued: Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui] (by hlky)
-
stable-diffusion
Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it. (by magnusviri)
-
sd-webui-colab
Discontinued: A repo for the maintenance of the Colab version of the stable-diffusion-webui repo
Depends on which fork you're running... Some seem to be using CPU-based generation, while others use the MPS device backend correctly, which is MUCH faster. I have another comment floating around about lstein's fork, but it takes some massaging to get it to run happily. https://github.com/lstein/stable-diffusion/tree/fix-cuda-res...
Running lstein's fork with these requirements[0] but seeing this output[1]. Same steps as original guide otherwise.
Anyone got any ideas?
[0] https://github.com/bfirsh/stable-diffusion/blob/lstein/requi...
[1] https://gist.github.com/bfirsh/594c50fd9b2e6b173e31de753a842...
If you look at the substance of the changes being made to support Apple Silicon, they're essentially detecting an M* mac and switching to PyTorch's Metal backend.
So, yeah, PyTorch is correctly serving as 'glue'.
https://github.com/CompVis/stable-diffusion/commit/0763d366e...
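The detection itself is simple. Here's a minimal sketch of the idea (function names are mine, not from that commit; the actual change also touches attention and sampling code):

```python
import platform

def is_apple_silicon() -> bool:
    # macOS reports "Darwin" as the system; M-series chips report
    # an "arm64" machine architecture.
    return platform.system() == "Darwin" and platform.machine() == "arm64"

def pick_device() -> str:
    # Hypothetical helper: the string a fork like this would pass to
    # torch.device(). On an M* Mac, use PyTorch's Metal (MPS) backend;
    # elsewhere this sketch assumes a CUDA GPU is present.
    return "mps" if is_apple_silicon() else "cuda"
```

A real fork would also check `torch.backends.mps.is_available()` and fall back to CPU, but the core of the change is just this switch.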
There are several packages that provide web UIs, like this one for example: https://github.com/hlky/stable-diffusion-webui
It's not quite the ease of setup of an Electron app, but once set up it's pretty easy to use.
Comment from github: "By the way, i confirmed to work on my Intel 16-in MacBook Pro via mps. GPU (Radeon Pro 5500M 8GB) usage is 70-80% and It takes 3 min where --n_samples 1 --n_iter 1. My repo https://github.com/cruller0704/stable-diffusion-intel-mac"
magnusviri [0], the original author of the SD M1 repo credited in this article, has merged his fork into lstein's Stable Diffusion repo [1], and you can now run the lstein fork on M1 as of a few hours ago.
This adds a ton of functionality: a GUI, upscaling and face restoration, weighted subprompts, etc.
This has been a big undertaking over the last few days, and I highly recommend checking it out.
[0] https://github.com/magnusviri/stable-diffusion
I have it working on an RX 6800, used the scripts from this repo[0] to build a docker image that has ROCm drivers and PyTorch installed.
I'm running Ubuntu 22.04 LTS as the host OS and didn't have to touch anything beyond the basic Docker install. The next step is to build a new Dockerfile that adds in the Stable Diffusion WebUI.[1]
[0] https://github.com/AshleyYakeley/stable-diffusion-rocm
Yes, you can run it on your Intel CPU: https://github.com/bes-dev/stable_diffusion.openvino
There are hundreds of ways to run it in the cloud (and even more coming every hour!) I think this one is the most popular: https://colab.research.google.com/github/altryne/sd-webui-co...
What's this log message about when generating an image?
Creating invisible watermark encoder (see https://github.com/ShieldMnt/invisible-watermark)...
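That's the invisible-watermark library (linked in the log line) embedding a payload into the generated image so SD output can later be identified; it uses a DWT+DCT frequency-domain scheme. As a toy illustration of the "invisible" idea only, here's a least-significant-bit version (this is NOT the library's actual algorithm, just the simplest possible analogue):

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide `bits` (array of 0/1) in the least significant bits of uint8 pixels."""
    out = pixels.copy()
    flat = out.ravel()  # view into the copy, so writes land in `out`
    n = bits.size
    # Clear each target pixel's lowest bit, then OR in one payload bit.
    flat[:n] = (flat[:n] & 0xFE) | bits.astype(np.uint8)
    return out

def extract_lsb(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read the first `n` hidden bits back out."""
    return pixels.ravel()[:n] & 1
```

Each pixel changes by at most 1 out of 255, which is why you can't see it. The real library's DWT+DCT approach additionally survives resizing and recompression, which plain LSB does not.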
As mentioned in sibling comments, Torch is indeed the glue in this implementation. Other glues are TVM[0] and ONNX[1]
These just cover the neural net though, and there is lots of surrounding code and pre-/post-processing that isn't covered by these systems.
For models on Replicate, we use Docker, packaged with Cog for this stuff.[2] Unfortunately Docker doesn't run natively on Mac, so if we want to use the Mac's GPU, we can't use Docker.
I wish there was a good container system for Mac. Even better if it were something that spanned both Mac and Linux. (Not as far-fetched as it seems... I used to work at Docker and spent a bit of time looking into this...)
[0] https://tvm.apache.org/
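For a sense of what the Cog packaging mentioned above looks like: a cog.yaml declares the runtime environment and points at a Predictor class. This is an illustrative sketch, not Replicate's actual Stable Diffusion config (the package pins are made up):

```yaml
build:
  gpu: true
  python_version: "3.10"
  python_packages:
    - "torch==1.12.1"
    - "diffusers==0.3.0"
predict: "predict.py:Predictor"
```

Cog then builds a Docker image from this, which is exactly the part that can't target a Mac GPU today.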