m1_huggingface_diffusers_demo vs stable-diffusion

| | m1_huggingface_diffusers_demo | stable-diffusion |
|---|---|---|
| Mentions | 5 | 15 |
| Stars | 142 | 2,438 |
| Growth | - | - |
| Activity | 10.0 | 9.8 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
m1_huggingface_diffusers_demo
-
JupyterLab 4.0
The trick is that you have to deactivate the virtual environment and then re-source it after adding Jupyter to that virtual environment.
Most shells cache executable paths, so the path for `jupyter` will be the global path, not the one in your virtual environment. This is unfortunately not at all obvious, and it leads to very hard-to-track-down bugs that seem to disappear and reappear if you aren't familiar with the issue.
I have a recipe here which always works: https://github.com/nlothian/m1_huggingface_diffusers_demo#se...
If you don't have a requirements.txt, then run `pip3 install jupyter` for that line, followed by `deactivate` and `source ./venv/bin/activate`.
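The recipe above can be sketched as a shell session (a minimal sketch, assuming bash or zsh; a stand-in script takes the place of the real `pip3 install jupyter` so it runs without network access):

```shell
# Create and enter a fresh virtual environment.
python3 -m venv venv
. ./venv/bin/activate

# Install Jupyter into the venv (stand-in for `pip3 install jupyter`).
printf '#!/bin/sh\necho jupyter-from-venv\n' > venv/bin/jupyter
chmod +x venv/bin/jupyter

# The shell may have already cached a global `jupyter` path, so
# deactivate and re-source to rebuild the command lookup.
deactivate
. ./venv/bin/activate

command -v jupyter   # should now point into ./venv/bin
```

The deactivate/re-source dance is what clears the shell's command-path cache; without it, `jupyter` can keep resolving to a global install.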
-
Bunny AI
This is how I did it on an M1 in September: https://github.com/nlothian/m1_huggingface_diffusers_demo
I think it probably needs updating now, but it should give you something to start with.
-
One-Click Install Stable Diffusion GUI App for M1 Mac. No Dependencies Needed
On my M1 Max with 32 GB I'm getting 1.5 iterations/second (i.e., ~30 seconds for the standard 50 iterations) using this example: https://github.com/nlothian/m1_huggingface_diffusers_demo
-
Nvidia Hopper Sweeps AI Inference Benchmarks in MLPerf Debut
Out of interest, I've been running a bunch of generations with the Hugging Face version of Stable Diffusion using the M1-accelerated branch on my M1 Max[1]. I'm getting 1.54 it/s, compared to 2.0 it/s for an Nvidia Tesla T4 on Google Colab.
The Tesla T4 gets 21,691 queries/second for ResNet, compared to 81,292 q/s for the new H100, 41,893 q/s for the A100, and 6,164 q/s for the new Jetson.
So you can expect maybe 15,000 q/s on an M1 Max. But some tests seem to indicate a lot less[2] - not sure what is happening there.
[1] Setup like this: https://github.com/nlothian/m1_huggingface_diffusers_demo
[2] https://tlkh.dev/benchmarking-the-apple-m1-max#heading-resne...
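The back-of-the-envelope number above can be reproduced as follows (a rough sketch; the loose assumption is that ResNet throughput scales with the Stable Diffusion it/s ratio between the two chips):

```shell
m1_sd_its=1.54       # Stable Diffusion it/s on the M1 Max
t4_sd_its=2.0        # Stable Diffusion it/s on the Tesla T4
t4_resnet_qps=21691  # ResNet queries/s for the T4 (MLPerf)

# Scale the T4's ResNet throughput by the Stable Diffusion speed ratio.
awk -v m1="$m1_sd_its" -v t4="$t4_sd_its" -v qps="$t4_resnet_qps" \
    'BEGIN { printf "~%.0f q/s estimated for the M1 Max\n", qps * m1 / t4 }'
```

which lands around 16,700 q/s, the same ballpark as the "maybe 15,000" guess above.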
stable-diffusion
- [Stable Diffusion] Help needed to increase the maximum file size on a local install
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- It's time!
-
Anybody running SD on a Macbook Pro? What are you using and how did you install it?
Yes, you can install it with Python! https://github.com/lstein/stable-diffusion works with macOS, and you can control all the common parameters via their WebUI or CLI :)
-
How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I'm using the lstein fork ("dream"), and when I create an image from the terminal, it also writes back to the terminal like this:
- I Resurrected “Ugly Sonic” with Stable Diffusion Textual Inversion
-
AI Seamless Texture Generator Built-In to Blender
> Whenever I ask for something like ‘seamless tiling xxxxxx’ it kinda sorta gets the idea, but the resulting texture doesn’t quite tile right.
Getting seamless tiling requires more than just having "seamless tiling" in the prompt. It also depends on whether the fork you're using has that feature at all.
https://github.com/lstein/stable-diffusion has the feature, but you need to pass it outside the prompt. So if you use the `dream.py` prompt CLI, you can pass it `"Hats on the ground" --seamless` and it should be perfectly tileable.
-
Auto SD Workflow - Update 0.2.0 - "Collections", Password Protection, Brand new UI + more
From https://github.com/lstein/stable-diffusion
-
Stable Diffusion GUIs for Apple Silicon
Stable Diffusion Dream Script: This is the original site/script with macOS support. I found it soon after Stable Diffusion was publicly released, and it was the site that inspired me to try out Stable Diffusion on a Mac. They have a web-based UI (as well as command-line scripts) and a lot of documentation on how to get things working.
-
Still can't believe this technology is real. My talentless 2 minute sketch on the left.
I’m pretty sure it works for M2 as well - basically the newer ARM-based Macs. The instructions to get it working are detailed! https://github.com/lstein/stable-diffusion
What are some alternatives?
ai-notes - notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references under the /Resources folder.
waifu-diffusion - stable diffusion finetuned on weeb stuff
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
sd-buddy - Companion desktop app for the self-hosted M1 Mac version of Stable Diffusion
stable-diffusion-webui - Stable Diffusion web UI
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
diffusers-uncensored - Uncensored fork of diffusers
conda - A system-level, binary package and environment manager running on all major operating systems and platforms.
txt2imghd - A port of GOBIG for Stable Diffusion
stable-diffusion - A latent text-to-image diffusion model
dream-textures - Stable Diffusion built-in to Blender