| | m1_huggingface_diffusers_demo | stable-diffusion-webui |
|---|---|---|
| Mentions | 5 | 104 |
| Stars | 15 | 5,487 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
m1_huggingface_diffusers_demo
-
JupyterLab 4.0
The trick is that you have to deactivate the virtual environment and then re-source it after adding Jupyter to that virtual environment.
Most shells cache executable paths, so `jupyter` will resolve to the global path, not the one in your virtual environment. This is unfortunately not at all obvious, and it leads to hard-to-track-down bugs that seem to disappear and reappear if you aren't familiar with the issue.
I have a recipe here which always works: https://github.com/nlothian/m1_huggingface_diffusers_demo#se...
If you don't have a requirements.txt, run `pip3 install jupyter` for that line, then `deactivate` and `source ./venv/bin/activate`.
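The recipe above can be sketched as a short shell session; the `./venv` path is an example, and `hash -r` is an alternative way to flush the shell's command cache directly:

```shell
# Create and activate a virtual environment (./venv is an example path)
python3 -m venv ./venv
source ./venv/bin/activate

# Install Jupyter *inside* the venv
pip3 install jupyter

# Deactivate and re-source: this makes the shell drop its cached
# (global) path for `jupyter` and pick up the venv's copy instead
deactivate
source ./venv/bin/activate

# In bash, `hash -r` flushes the command cache directly
hash -r
command -v jupyter   # should point into ./venv/bin
```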
-
Bunny AI
This is how I did it on an M1 in September: https://github.com/nlothian/m1_huggingface_diffusers_demo
I think it probably needs updating now, but it should give you something to start with.
-
One-Click Install Stable Diffusion GUI App for M1 Mac. No Dependencies Needed
On my M1 Max with 32 GB I'm getting 1.5 iterations/second (i.e., ~30 seconds for the standard 50 iterations) using this example: https://github.com/nlothian/m1_huggingface_diffusers_demo
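As a quick sanity check on the quoted arithmetic (1.5 it/s over the standard 50 iterations):

```shell
# 50 iterations at 1.5 it/s rounds to 33 seconds, consistent with the ~30 s quoted
awk 'BEGIN { printf "%.0f\n", 50 / 1.5 }'
```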
-
Nvidia Hopper Sweeps AI Inference Benchmarks in MLPerf Debut
Out of interest I've been running a bunch of the Hugging Face version of Stable Diffusion using the M1-accelerated branch on my M1 Max[1]. I'm getting 1.54 it/s compared to 2.0 it/s for an Nvidia Tesla T4 on Google Colab.
The Tesla T4 gets 21,691 queries/second for ResNet, compared to 81,292 q/s for the new H100, 41,893 q/s for the A100, and 6,164 q/s for the new Jetson.
So you can expect maybe 15,000 q/s on an M1 Max. But some tests seem to indicate a lot less[2] - not sure what is happening there.
[1] Setup like this: https://github.com/nlothian/m1_huggingface_diffusers_demo
[2] https://tlkh.dev/benchmarking-the-apple-m1-max#heading-resne...
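The "maybe 15,000 q/s" guess follows from scaling the T4's ResNet throughput by the measured M1 Max / T4 Stable Diffusion speed ratio, a crude proxy given the caveat in [2]:

```shell
# 21,691 q/s (T4, ResNet) scaled by 1.54/2.0 (M1 Max vs T4 on Stable Diffusion)
# gives roughly 16,700 q/s
awk 'BEGIN { printf "%.0f\n", 21691 * 1.54 / 2.0 }'
```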
stable-diffusion-webui
-
[Stable Diffusion] I'm confused, help? - How do you use LDSR with SD-Webui?
https://github.com/sd-webui/stable-diffusion-webui/wiki/installation
-
[Stable Diffusion] What is the best GUI to install on Windows?
https://github.com/sd-webui/stable-diffusion-webui (takes a while to install)
-
Daily General Discussion - October 21, 2022
-
Most popular IA to animate?
You can "animate" with Stable Diffusion using text-to-video: https://github.com/nateraw/stable-diffusion-videos or https://github.com/sd-webui/stable-diffusion-webui
-
Automatic1111 removed from pinned guide.
I mentioned Automatic1111 on SD-WEBUI and they deleted the comment. I guess this is why. My installation failed on SD-WEBUI and there was no solution for me. I suspect that's why Automatic1111's fork is so popular. He went above and beyond to make sure people with 1660ti's could run SD flawlessly with all the different tools available.
-
.pt to .ckpt
Is there any way to convert a .pt model to a .ckpt model? stable-diffusion-webui only seems to support the latter type of file, and just renaming them does not work.
-
Flooded district by AI
This is Stable-Diffusion. Here is a UI version https://github.com/sd-webui/stable-diffusion-webui
-
AI image generated using the prompt "Streets of Dunwall"
I dunno about the app. I use this: https://github.com/sd-webui/stable-diffusion-webui. It's very resource-hungry, though.
-
NMKD Stable Diffusion GUI 1.5.0 is out! Now with exclusion words, CodeFormer face restoration, model merging and pruning tool, even lower VRAM requirements (4 GB), and a ton of quality-of-life improvements. Details in comments.
Haven't tried this GUI yet. Can anyone chime in about how it compares to Automatic1111's and sd-webui/HLKY's? There are so many good repos out there that it's getting hard to keep track of them all
-
Someone just joined 11 GPUs to the Stable Horde. I just tested: 20 gens @ 1024x1024x50 in 2 minutes! All for free!
Maybe those who joined were not aware that they joined the horde :-)
What are some alternatives?
ai-notes - notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references under the /Resources folder.
diffusers-uncensored - Uncensored fork of diffusers
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
onnx - Open standard for machine learning interoperability
sd-buddy - Companion desktop app for the self-hosted M1 Mac version of Stable Diffusion
stable-diffusion-webui - Stable Diffusion web UI
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
rocm-build - build scripts for ROCm
conda - A system-level, binary package and environment manager running on all major operating systems and platforms.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
stable-diffusion - A latent text-to-image diffusion model
waifu-diffusion - stable diffusion finetuned on weeb stuff