stable-diffusion
Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it. (by magnusviri)
I'm using the fork here: https://github.com/magnusviri/stable-diffusion.git (apple-silicon-mps-support branch).
Pretty easy to set up, though I had to take all the Homebrew stuff out of my environment before setting up the Conda environment (alternatively, exporting GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 and GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 worked in my case).
Otherwise, I followed the normal steps to set things up, and I'm now generating 1 image every 30 seconds at default settings. This is on an M1 Max MacBook Pro with 64 GB of RAM.
-
PyTorch for M1 (https://pytorch.org/blog/introducing-accelerated-pytorch-tra... ) will not work on its own: https://github.com/CompVis/stable-diffusion/issues/25 explains why.
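Before trying either fork, it's worth confirming that your PyTorch build actually exposes the Metal (MPS) backend. A minimal sketch of that check, assuming torch >= 1.12 (where the MPS backend landed); the CPU fallback here is my own choice, not something from the thread:

```python
# Check whether this PyTorch build was compiled with the Apple Metal (MPS)
# backend. Requires torch >= 1.12; older builds fall back to CPU.
import torch

if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using device: {device}")
```

If this prints `cpu` on an M1 machine, the install likely came from a build without MPS support, which matches the failure mode described in the linked issue.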
-
stable-diffusion
Discontinued. This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI] (by lstein)
-
I found a pretty good Docker container for it, though that's only really switching you from solving Python problems to Docker ones. Worth trying out if you have a Linux box or WSL installed though: https://github.com/AbdBarho/stable-diffusion-webui-docker
-
I found this repo early on and have been using it to run inference on my M1 Pro MBP. https://github.com/ModeratePrawn/stable-diffusion-cpu
For me it runs at about 3.5 seconds per iteration per picture at 512x512.
There is also a fork that uses metal here and is much faster: https://github.com/magnusviri/stable-diffusion/tree/apple-si...
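For a rough sense of the gap between the two forks, here's a back-of-envelope comparison. The 50-step count is an assumption (the usual default; actual step count is configurable), and the two timings come from different machines in the thread (M1 Pro for the CPU fork, M1 Max for the Metal one), so treat this as a sketch rather than a benchmark:

```python
# Back-of-envelope throughput comparison of the two forks mentioned above.
# Assumes ~50 sampling steps per image (the usual default; configurable).
steps = 50
cpu_secs_per_step = 3.5                  # CPU fork: ~3.5 s per iteration at 512x512
cpu_secs_per_image = steps * cpu_secs_per_step

mps_secs_per_image = 30                  # Metal/MPS fork: ~1 image every 30 s

speedup = cpu_secs_per_image / mps_secs_per_image
print(f"CPU: {cpu_secs_per_image:.0f} s/image; MPS: {mps_secs_per_image} s/image; "
      f"~{speedup:.1f}x faster on Metal")
```

At these figures the Metal fork comes out roughly 6x faster per image, which is consistent with the "much faster" claim above.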