| | laskin.live | web-stable-diffusion |
|---|---|---|
| Mentions | 2 | 21 |
| Stars | 6 | 3,440 |
| Growth | - | 1.0% |
| Activity | 10.0 | 4.4 |
| Latest commit | over 3 years ago | about 2 months ago |
| Language | HTML | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
laskin.live
-
Chrome Ships WebGPU
For anyone figuring out how to run WebGPU on a remote computer (over WebRTC), see this: https://github.com/periferia-labs/laskin.live
Not sure if it still works (I made it 3 years ago), but it will be interesting to see whether similar products appear for LLMs and so on.
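For context, the gist of laskin.live: a browser without a usable GPU forwards compute jobs over a WebRTC data channel to a peer that has one. A minimal TypeScript sketch of that plumbing; the signaling step and the job message shape here are hypothetical, not the repo's actual protocol:

```typescript
// Peer that owns the GPU: answers compute jobs arriving over a WebRTC
// data channel. Signaling (offer/answer exchange) is omitted; assume
// `channel` is an already-open RTCDataChannel.
async function serveGpuJobs(channel: RTCDataChannel): Promise<void> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    channel.send(JSON.stringify({ error: "no WebGPU adapter on this peer" }));
    return;
  }
  const device = await adapter.requestDevice();

  channel.onmessage = async (ev: MessageEvent<string>) => {
    // Hypothetical job shape: { id, wgsl } carrying a compute shader.
    const job = JSON.parse(ev.data) as { id: number; wgsl: string };
    const module = device.createShaderModule({ code: job.wgsl });
    // ... build a compute pipeline from `module`, bind buffers,
    //     dispatch, read results back, and ship them over `channel` ...
    channel.send(JSON.stringify({ id: job.id, status: "done" }));
  };
}

// Peer without a GPU: opens the channel and submits a job.
const pc = new RTCPeerConnection();
const jobs = pc.createDataChannel("gpu-jobs");
jobs.onopen = () => jobs.send(JSON.stringify({ id: 1, wgsl: "/* WGSL */" }));
```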
-
WebRTC.rs reached an important milestone in connectivity!
Shameless plug, but a friend of mine and I combined WebRTC and WebGPU some time ago: https://github.com/periferia-labs/laskin.live
web-stable-diffusion
-
GPU-Accelerated LLM on a $100 Orange Pi
Yup, here's their web stable diffusion repo: https://github.com/mlc-ai/web-stable-diffusion
The input is a model (weights + runtime lib) compiled via the mlc-llm project: https://mlc.ai/mlc-llm/docs/compilation/compile_models.html
-
StableDiffusion can now run directly in the browser on WebGPU
The MLC team got that working back in March: https://github.com/mlc-ai/web-stable-diffusion
Even more impressively, they followed up with support for several Large Language Models: https://webllm.mlc.ai/
- Web StableDiffusion
-
[Stable Diffusion] Web Stable Diffusion: running Stable Diffusion directly in the browser with no GPU server
https://github.com/mlc-ai/web-stable-diffusion
-
Now that they've started banning Stable Diffusion on Google Colab, what's the cheapest and best way to deploy Stable Diffusion?
You can run it directly in the browser with WebGPU, https://mlc.ai/web-stable-diffusion/
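Since WebGPU support still varies by browser and hardware, a quick capability check before loading a multi-gigabyte model saves users a failed download. A small sketch using only standard WebGPU APIs:

```typescript
// Detect WebGPU support and report basic device limits before attempting
// to load a large in-browser model.
async function checkWebGpu(): Promise<boolean> {
  if (!("gpu" in navigator)) {
    console.warn("WebGPU not available in this browser");
    return false;
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.warn("No suitable GPU adapter found");
    return false;
  }
  // These limits bound how large a single weight buffer can be; big
  // diffusion models can bump into them on weaker hardware.
  const { maxBufferSize, maxStorageBufferBindingSize } = adapter.limits;
  console.log({ maxBufferSize, maxStorageBufferBindingSize });
  return true;
}
```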
-
I've got Stable Diffusion integrated into my site now, fully client side with no setup or servers.
Using the amazing work of https://mlc.ai/web-stable-diffusion/, I've got the code moved into a Web Worker, running fully locally on the client. It does require about 2 GB of model files to be downloaded (automatically), and the first load takes a few minutes, but it works, and once it's going it only takes around 20s to make a 512x512 image.
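One way to get that "slow first load, fast afterwards" behavior is the Cache API: the worker fetches each weight shard through a persistent cache, so reloads skip the 2 GB download. A minimal sketch; the shard file names are hypothetical, and the real artifact layout comes from the mlc-ai build:

```typescript
// worker.ts - keeps the heavy lifting off the main thread.
// Shard names are hypothetical; real artifact names come from the build.
const SHARDS = ["weights/shard_0.bin", "weights/shard_1.bin"];

async function fetchCached(url: string): Promise<ArrayBuffer> {
  const cache = await caches.open("sd-model-v1");
  let resp = await cache.match(url);
  if (!resp) {
    resp = await fetch(url);            // first visit: the big download
    await cache.put(url, resp.clone()); // later visits: served from disk
  }
  return resp.arrayBuffer();
}

onmessage = async () => {
  const shards = await Promise.all(SHARDS.map(fetchCached));
  // ... hand the buffers to the WebGPU runtime and run the pipeline ...
  postMessage({ loadedBytes: shards.reduce((n, b) => n + b.byteLength, 0) });
};
```

On the page side it's just `new Worker("worker.js")` plus an `onmessage` handler for progress updates.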
-
Chrome Ships WebGPU
The Apache TVM machine learning compiler has WASM and WebGPU backends and can import from most DNN frameworks. Here's a project running Stable Diffusion with WebGPU and TVM [1].
Questions remain around the pre- and post-processing code in folks' Python stacks, e.g. NumPy and OpenCV. There are some NumPy-to-JS transpilers out there, but they aren't feature-complete or fully integrated (a sketch of one typical post-processing step in plain browser APIs follows below).
[1] https://github.com/mlc-ai/web-stable-diffusion
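As an example of the kind of work NumPy usually does at the end of a diffusion pipeline, the final step is just reshaping a float tensor into pixels, which plain browser APIs handle fine. A sketch assuming HWC layout with values already scaled to [0, 1]; the actual layout and value range depend on the model:

```typescript
// Convert a decoded image tensor (H*W*3 floats in [0, 1], HWC layout
// assumed) into ImageData for a canvas. This replaces the typical
// numpy `(img * 255).astype(np.uint8)` step in Python pipelines.
function tensorToImageData(data: Float32Array, width: number, height: number): ImageData {
  const pixels = new Uint8ClampedArray(width * height * 4); // clamps for us
  for (let i = 0; i < width * height; i++) {
    pixels[i * 4 + 0] = data[i * 3 + 0] * 255; // R
    pixels[i * 4 + 1] = data[i * 3 + 1] * 255; // G
    pixels[i * 4 + 2] = data[i * 3 + 2] * 255; // B
    pixels[i * 4 + 3] = 255;                   // opaque alpha
  }
  return new ImageData(pixels, width, height);
}

// Usage: ctx.putImageData(tensorToImageData(output, 512, 512), 0, 0);
```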
- Bringing stable diffusion models to web browsers
- mlc-ai/web-stable-diffusion: Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
- Web Stable Diffusion: Running Diffusion Models with WebGPU
What are some alternatives?
rivi-loader - Vulkan Compute program loader in Rust
stable-diffusion-webui-directml - Stable Diffusion web UI
SHA256-WebGPU - Implementation of sha256 in WGSL
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
wgpu-mm
wgpu-py - Next generation GPU API for Python
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
js-promise-integration - JavaScript Promise Integration
whisper.cpp - Port of OpenAI's Whisper model in C/C++
web-ai - Run modern deep learning models in the browser.
memory64 - Memory with 64-bit indexes