| | m1_huggingface_diffusers_demo | stable-diffusion-ui |
|---|---|---|
| Mentions | 5 | 249 |
| Stars | 15 | 6,808 |
| Growth | - | - |
| Activity | 10.0 | 9.9 |
| Latest commit | over 1 year ago | 11 months ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
m1_huggingface_diffusers_demo
-
JupyterLab 4.0
The trick is that you have to deactivate the virtual environment and then re-source it after adding Jupyter to that virtual environment.
Most shells cache executable paths, so `jupyter` will resolve to the global path, not the one in your virtual environment. This is unfortunately not at all obvious, and it leads to hard-to-track-down bugs that seem to disappear and reappear if you aren't familiar with the issue.
I have a recipe here which always works: https://github.com/nlothian/m1_huggingface_diffusers_demo#se...
If you don't have a requirements.txt, then do this: `pip3 install jupyter` for that line, then `deactivate` and `source ./venv/bin/activate`.
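The stale-cache behaviour described above can be reproduced without installing anything. This is a hypothetical stand-in sketch (assumed to run under bash): a script named `tool` plays the role of `jupyter`, one directory plays the global install, and another plays the venv's `bin`. The `hash -r` at the end is what re-sourcing `./venv/bin/activate` effectively does for you.

```shell
#!/usr/bin/env bash
# Demonstrate bash's executable-path cache going stale (assumed names).
set -e
demo=$(mktemp -d)
mkdir -p "$demo/venvbin" "$demo/fallback"

# "Global" copy of the tool exists first.
printf '#!/bin/sh\necho global\n' > "$demo/fallback/tool"
chmod +x "$demo/fallback/tool"
export PATH="$demo/venvbin:$demo/fallback:$PATH"

tool    # prints "global"; bash caches that resolved path

# Simulate `pip3 install jupyter` inside the already-active venv:
printf '#!/bin/sh\necho venv\n' > "$demo/venvbin/tool"
chmod +x "$demo/venvbin/tool"

tool    # still prints "global" -- the stale cached path wins,
        # even though $demo/venvbin comes first in PATH

hash -r # flush the cache, as re-sourcing the venv's activate does
tool    # now prints "venv"
```

The key point is that bash only flushes its command hash table when `PATH` itself changes; installing a new executable into a directory already on `PATH` leaves the old cached lookup in place.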
-
Bunny AI
This is how I did it on an M1 in September: https://github.com/nlothian/m1_huggingface_diffusers_demo
I think it probably needs updating now, but it should give you something to start with.
-
One-Click Install Stable Diffusion GUI App for M1 Mac. No Dependencies Needed
On my M1 Max with 32 GB I'm getting 1.5 iterations/second (i.e., ~30 seconds for the standard 50 iterations) using this example: https://github.com/nlothian/m1_huggingface_diffusers_demo
-
Nvidia Hopper Sweeps AI Inference Benchmarks in MLPerf Debut
Out of interest I've been running the Hugging Face version of Stable Diffusion using the M1-accelerated branch on my M1 Max[1]. I'm getting 1.54 it/s compared to 2.0 it/s for an Nvidia Tesla T4 on Google Colab.
The Tesla T4 gets 21,691 queries/second for ResNet, compared to 81,292 q/s for the new H100, 41,893 q/s for the A100 and 6,164 q/s for the new Jetson.
So you can expect maybe 15,000 q/s on an M1 Max. But some tests seem to indicate a lot less[2] - not sure what is happening there.
[1] Setup like this: https://github.com/nlothian/m1_huggingface_diffusers_demo
[2] https://tlkh.dev/benchmarking-the-apple-m1-max#heading-resne...
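The "maybe 15,000 q/s" figure above appears to come from scaling the T4's MLPerf ResNet throughput by the M1 Max vs T4 ratio measured on Stable Diffusion. A back-of-the-envelope sketch of that reasoning, using only the numbers quoted in the comment:

```python
# Figures quoted in the comment above.
m1_sd_its = 1.54        # Stable Diffusion iterations/s on M1 Max
t4_sd_its = 2.0         # Stable Diffusion iterations/s on a Colab Tesla T4
t4_resnet_qps = 21_691  # Tesla T4 ResNet queries/s from MLPerf

# Assume ResNet throughput scales by the same ratio as Stable Diffusion.
estimate = t4_resnet_qps * (m1_sd_its / t4_sd_its)
print(round(estimate))  # 16702 -- roughly the "maybe 15,000 q/s" guess
```

This is a crude proxy (the workloads stress different parts of the hardware), which is consistent with the comment's own caveat that some benchmarks show the M1 Max doing a lot worse.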
stable-diffusion-ui
-
Useful Links
CMDR2's 1-Click Installer
-
Best current stable diffusion UI app?
Hey, so I'm currently using Easy Diffusion, but it's missing one feature I've really been wanting to play around with recently: video. I've also heard from others that while it's one of the easiest to install, it's among the least performant and worst options you can get; so what do you guys suggest?
-
The softer side of self hosting: The aesthetics, logos
Or just use the CPU and it works, it just takes a few minutes (search for "stable diffusion cpu"). But don't let me stop you, I need one too.
-
So how is nvidia gpu experience these days?
No. CUDA is very straightforward. There is even a nice project that sets up Stable Diffusion for you. With basically no knowledge about AI, I was able to get it to run. If I recall correctly, I just needed to install one dependency manually, and I was provided with a nice web GUI for playing with it.
-
Models and samplers…
You could read the guide first: https://github.com/cmdr2/stable-diffusion-ui/wiki/UI-Overview, or just start Easy Diffusion.
-
Could someone please make my wife into a realistic sculpture/statue? Will tip $50 for a perfect one!
Thanks, and you're right, there are loads; trying out this one from GitHub.
- What is the text-to-image AI tool?
-
Tip for a (kinda) newbie
Simplest start https://github.com/cmdr2/stable-diffusion-ui
-
SD privacy? Offline? Concerns?
First is Easy Diffusion, you need to be online just once to run the installer. It downloads several extra files. Let it finish and then make some test pictures. Exit everything (browser window and text window). Then anytime you want to run it, just turn off internet and run the batch file to start it up. No internet!
-
Need help installing SD on AMD!!
Yesterday I got Easy Diffusion to work (on Windows only), but it refuses to use the GPU and instead uses the CPU, which, of course, takes nearly an hour to make a 512x512 image.
What are some alternatives?
ai-notes - notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references under the /Resources folder.
stable-diffusion-webui - Stable Diffusion web UI
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
sd-buddy - Companion desktop app for the self-hosted M1 Mac version of Stable Diffusion
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
conda - A system-level, binary package and environment manager running on all major operating systems and platforms.
civitai - A repository of models, textual inversions, and more
stable-diffusion - A latent text-to-image diffusion model
SHARK - SHARK - High Performance Machine Learning Distribution