m1_huggingface_diffusers_demo
| | m1_huggingface_diffusers_demo | stable-diffusion |
|---|---|---|
| Mentions | 5 | 8 |
| Stars | 15 | 436 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Latest commit | over 1 year ago | 12 months ago |
| Language | Jupyter Notebook | |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
m1_huggingface_diffusers_demo
-
JupyterLab 4.0
The trick is that you have to deactivate the virtual environment and then re-source it after adding Jupyter to that virtual environment.
Most shells cache executable paths, so the path for jupyter will be the global one, not the one in your virtual environment. This is unfortunately not at all obvious, and it leads to hard-to-track-down bugs that seem to disappear and reappear if you aren't familiar with the issue.
I have a recipe here which always works: https://github.com/nlothian/m1_huggingface_diffusers_demo#se...
If you don't have a requirements.txt, then run `pip3 install jupyter` for that line, followed by `deactivate` and `source ./venv/bin/activate`.
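Putting the steps above together, a minimal sketch (assuming a venv at `./venv` and a bash/zsh shell; `hash -r` is an equivalent way to clear the shell's command cache without re-sourcing):

```shell
# Create and activate a virtual environment
python3 -m venv ./venv
source ./venv/bin/activate

# Install Jupyter into the virtual environment
pip3 install jupyter

# The shell may still have the old path cached; check which jupyter will run
type jupyter

# Clear the stale cache by re-sourcing the environment...
deactivate
source ./venv/bin/activate

# ...or, equivalently in bash/zsh, reset the command cache directly
hash -r
type jupyter   # should now resolve to ./venv/bin/jupyter
```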
-
Bunny AI
This is how I did it on an M1 in September: https://github.com/nlothian/m1_huggingface_diffusers_demo
I think it probably needs updating now, but it should give you something to start with.
-
One-Click Install Stable Diffusion GUI App for M1 Mac. No Dependencies Needed
On my M1 Max with 32 GB I'm getting 1.5 iterations/second (i.e., ~30 seconds for the standard 50 iterations) using this example: https://github.com/nlothian/m1_huggingface_diffusers_demo
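A quick back-of-envelope check of that figure (the throughput number is taken from the comment above, nothing else is assumed):

```python
# At 1.5 iterations/second, a standard 50-step Stable Diffusion run
# takes roughly half a minute.
its_per_sec = 1.5
steps = 50
seconds = steps / its_per_sec
print(round(seconds))  # → 33
```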
-
Nvidia Hopper Sweeps AI Inference Benchmarks in MLPerf Debut
Out of interest, I've been running a bunch of runs of the Hugging Face version of Stable Diffusion using the M1-accelerated branch on my M1 Max[1]. I'm getting 1.54 it/s, compared to 2.0 it/s for an Nvidia Tesla T4 on Google Colab.
The Tesla T4 gets 21,691 queries/second for ResNet, compared to 81,292 q/s for the new H100, 41,893 q/s for the A100, and 6,164 q/s for the new Jetson.
So you can expect maybe 15,000 q/s on an M1 Max. But some tests seem to indicate a lot less[2] - not sure what is happening there.
[1] Setup like this: https://github.com/nlothian/m1_huggingface_diffusers_demo
[2] https://tlkh.dev/benchmarking-the-apple-m1-max#heading-resne...
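The extrapolation above can be checked with quick arithmetic: scaling the T4's ResNet throughput by the measured Stable Diffusion it/s ratio lands in the same ballpark as the "maybe 15,000" guess (a back-of-envelope sketch using the numbers quoted in the comment, not a benchmark):

```python
# Scale the T4's ResNet throughput by the M1 Max / T4 ratio
# observed on Stable Diffusion iterations.
t4_resnet_qps = 21_691
m1_it_s = 1.54
t4_it_s = 2.0
estimate = t4_resnet_qps * (m1_it_s / t4_it_s)
print(round(estimate))  # → 16702
```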
stable-diffusion
-
DALL·E Now Available Without Waitlist
No, sorry, but there's a whole bunch of one-click things now, I think?
I'm running it on Windows 10 using (a modified version of) https://github.com/bfirsh/stable-diffusion.git and Anaconda to create the environment from their `environment.yaml` (all of which was done using the normal `cmd` shell). Then to use it, I activate that env from `cmd` and switch into cygwin `bash` to run the `txt2img.py` script (because it's easier to script, etc.)
-
How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I am using the bfirsh version. And yes, I run "python scripts/txt2img.py" to generate an image.
-
Current canonical way to install Stable Diffusion on Apple Silicon?
Specifically regarding the first option above, I see that the procedure clones the repository from: https://github.com/bfirsh/stable-diffusion.git
-
One-Click Install Stable Diffusion GUI App for M1 Mac. No Dependencies Needed
Just done a run on my 3080 under Windows using https://github.com/bfirsh/stable-diffusion.git and it's about 8 iterations/sec when nothing else is using CPU or GPU.
-
Using the same seed and same prompt is still resulting in two different images?
I've cloned this repository on my M1 Mac: https://github.com/bfirsh/stable-diffusion/tree/apple-silicon-mps-support
-
Run Stable Diffusion on Your M1 Mac’s GPU
Boom - nice. Here's a fork with that: https://github.com/bfirsh/stable-diffusion/tree/lstein
Requirements are in "requirements-mac.txt", which will need substituting in the guide.
We're testing this out with a few people in Discord before shipping to the blog post.
What are some alternatives?
ai-notes - notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references under the /Resources folder.
stable_diffusion.openvino
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
sd-buddy - Companion desktop app for the self-hosted M1 Mac version of Stable Diffusion
sd-webui-colab - A repo for the maintenance of the Colab version of stable-diffusion-webui repo
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
conda - A system-level, binary package and environment manager running on all major operating systems and platforms.
invisible-watermark - python library for invisible image watermark (blind image watermark)
stable-diffusion - A latent text-to-image diffusion model