| | web-stable-diffusion | stable-diffusion-paperspace |
|---|---|---|
| Mentions | 21 | 13 |
| Stars | 3,455 | 283 |
| Growth | 1.6% | - |
| Last commit | about 2 months ago | 2 months ago |
| Activity | 4.4 | 0.6 |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | The Unlicense |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
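The page doesn't define the activity formula beyond "recent commits have higher weight." A minimal sketch of one plausible recency-weighted score, assuming (purely for illustration) an exponential decay with a hypothetical 30-day half-life:

```python
def activity_score(commit_ages_days, half_life_days=30):
    """Recency-weighted commit count: each commit's weight halves
    every `half_life_days` days (illustrative assumption only)."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A commit today counts fully; a 30-day-old one counts half, a 60-day-old one a quarter.
print(activity_score([0, 30, 60]))  # 1 + 0.5 + 0.25 = 1.75
```

Under this sketch, a project with many recent commits scores higher than one with the same number of commits spread over the past year, which matches the description above.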
web-stable-diffusion
-
GPU-Accelerated LLM on a $100 Orange Pi
Yup, here's their web stable diffusion repo: https://github.com/mlc-ai/web-stable-diffusion
The input is a model (weights + runtime lib) compiled via the mlc-llm project: https://mlc.ai/mlc-llm/docs/compilation/compile_models.html
-
StableDiffusion can now run directly in the browser on WebGPU
The MLC team got that working back in March: https://github.com/mlc-ai/web-stable-diffusion
Even more impressively, they followed up with support for several Large Language Models: https://webllm.mlc.ai/
- Web StableDiffusion
-
[Stable Diffusion] Web Stable Diffusion: running Stable Diffusion directly in the browser without a GPU server
https://github.com/mlc-ai/web-stable-diffusion
-
Now that they started banning stable diffusion on google colab, what's the cheapest and the best way to deploy stable diffusion?
You can run it directly in the browser with WebGPU, https://mlc.ai/web-stable-diffusion/
-
I've got Stable Diffusion integrated into my site now, fully client side with no setup or servers.
Using the amazing work of https://mlc.ai/web-stable-diffusion/ I've got the code moved into a Web Worker and running fully local, client side. It does require a 2 GB download of model files (handled automatically), and the first load takes a few minutes, but it works, and once it's going it only takes about 20s to make a 512x512 image.
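The "few minutes for the first load" is consistent with simple bandwidth arithmetic on the 2 GB model download; a rough sketch, assuming a hypothetical 25 MiB/s connection:

```python
model_bytes = 2 * 1024**3      # ~2 GB of model files, per the post above
bandwidth_bps = 25 * 1024**2   # assumed 25 MiB/s download speed (hypothetical)
download_seconds = model_bytes / bandwidth_bps
print(f"first-load download: ~{download_seconds:.0f} s")  # ~82 s, before shader compilation and caching
```

On slower connections the download alone easily reaches several minutes, which is why subsequent loads (served from the browser cache) are so much faster.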
-
Chrome Ships WebGPU
The Apache TVM machine learning compiler has a WASM and WebGPU backend, and can import from most DNN frameworks. Here's a project running Stable Diffusion with webgpu and TVM [1].
Questions remain around the pre- and post-processing code in folks' Python stacks, e.g. NumPy and OpenCV. There are some NumPy-to-JS transpilers out there, but they aren't feature-complete or fully integrated.
[1] https://github.com/mlc-ai/web-stable-diffusion
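To illustrate the kind of post-processing step that would need porting from NumPy to JS: here is a hypothetical latents-to-pixels conversion written in plain Python (the exact scaling used by any real pipeline is an assumption here, not taken from the projects above):

```python
def floats_to_pixels(vals):
    # Map decoder output in [-1, 1] to 0..255 integer pixel values --
    # a typical one-line NumPy expression that a JS port must reproduce exactly.
    return [max(0, min(255, round((v + 1.0) / 2.0 * 255.0))) for v in vals]

print(floats_to_pixels([-1.0, 0.0, 1.0]))  # [0, 128, 255]
```

Even a trivial step like this involves clamping and rounding semantics that differ subtly between NumPy, plain Python, and JavaScript, which is part of why the transpilers mentioned above struggle to be feature-complete.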
- Bringing stable diffusion models to web browsers
- mlc-ai/web-stable-diffusion: Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
- Web Stable Diffusion: Running Diffusion Models with WebGPU
stable-diffusion-paperspace
-
Is there any notebook I can run on paperspace to train a Lora?
The other path I am going down now: https://github.com/Engineer-of-Stuff/stable-diffusion-paperspace This is automatic1111. It is a very well-written notebook. One thing I noticed was that I could not install extensions. But when I turned on the password (which you can do in the early setup stages of this notebook; just enter a login name and password there), extensions were unlocked... otherwise randos could come into your notebook and install extensions...
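The password behavior described above matches AUTOMATIC1111's launch flags: the web UI disables extension installation when it is exposed to the network without authentication. A sketch of the relevant invocation (flag names per the webui's command-line options; verify against your version):

```shell
# Expose the UI on the network, but require a login --
# with auth enabled, the extensions tab's install button is unlocked again.
python launch.py --listen --gradio-auth myuser:mypassword
```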
-
Free GPU for Stable Diffusion WebUI?
The "easiest" option (in that the provider is okay with it) may be to use Paperspace in conjunction with this notebook.
-
Official statement from the person in charge of Google Colab: they can't support the usage growth. You can still use any AI that uses a remote GUI (like SD) on a paid plan, using your limited compute units.
yes there is https://github.com/Engineer-of-Stuff/stable-diffusion-paperspace
-
Now that they started banning stable diffusion on google colab, what's the cheapest and the best way to deploy stable diffusion?
I'm running mine on Paperspace: 8 dollars per month, and most of the time you can use the free RTX4000 GPUs. I'm using the ipynb from here: https://github.com/Engineer-of-Stuff/stable-diffusion-paperspace
- Paperspace - A free alternative to Google Colab
- A free alternative to Colab - Paperspace
-
So… How and What can I do with Dreamstudio, exactly?
You can't use DreamStudio to train your models, and I don't think there are any other apps available for training. DreamStudio is one of the cheapest options for image generation, at 10 USD for about 5000 generations. But it has limited models, and no sampling method selection. I haven't tried any other paid apps for SD. What I use is a PaperSpace notebook, with a Pro subscription. This guide will help you set up SD image generation/training using AUTOMATIC1111 on PaperSpace. Here is the ui-config.json that I use. And a list of models, for image generation.
-
Anyone using paperspace, how come my automatic1111 webui crashes all the time?
I have a notebook I downloaded from here https://github.com/Engineer-of-Stuff/stable-diffusion-paperspace that works pretty well and can run automatic1111. The only problem is that it crashes maybe once every 10 minutes.
- Other Sites to Rent GPU Other then HF and Collab?
-
Is Textual Inversion already supported in Automatic1111 SD 2.1?
Go here, and click on the code button and download the ZIP file. Unzip it and you'll find it in there.
What are some alternatives?
stable-diffusion-webui-directml - Stable Diffusion web UI
ml-stable-diffusion - Stable Diffusion with Core ML on Apple Silicon
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
SHA256-WebGPU - Implementation of sha256 in WGSL
stable-diffusion-webui - Stable Diffusion web UI
wgpu-py - Next generation GPU API for Python
stable-diffusion-papers
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
js-promise-integration - JavaScript Promise Integration
whisper.cpp - Port of OpenAI's Whisper model in C/C++
web-ai - Run modern deep learning models in the browser.