web-stable-diffusion vs memory64

| | web-stable-diffusion | memory64 |
|---|---|---|
| Mentions | 21 | 7 |
| Stars | 3,455 | 179 |
| Growth | 1.6% | 2.2% |
| Activity | 4.4 | 8.5 |
| Latest commit | about 2 months ago | 7 days ago |
| Language | Jupyter Notebook | WebAssembly |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
web-stable-diffusion
- GPU-Accelerated LLM on a $100 Orange Pi
Yup, here's their web stable diffusion repo: https://github.com/mlc-ai/web-stable-diffusion
The input is a model (weights + runtime lib) compiled via the mlc-llm project: https://mlc.ai/mlc-llm/docs/compilation/compile_models.html
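For a sense of what that weights + runtime-lib split means in the browser, here is a minimal, hypothetical sketch; the function and file names below are illustrative, not the actual tvmjs/mlc-llm API:

```typescript
// Hypothetical sketch of consuming a compiled model artifact pair.
// File names and structure are illustrative; the real loader lives in the
// tvmjs runtime that the mlc-llm toolchain targets.
async function loadModel(baseUrl: string) {
  // The "runtime lib": model kernels compiled ahead of time to WebAssembly
  // (with WebGPU shaders) by the TVM compiler.
  const libBytes = await fetch(`${baseUrl}/model_lib.wasm`)
    .then((r) => r.arrayBuffer());
  // The "weights": tensor shards fetched separately (and usually cached).
  const weights = await fetch(`${baseUrl}/params_shard_0.bin`)
    .then((r) => r.arrayBuffer());
  // Instantiate the lib, then bind the weights to it at runtime.
  const { instance } = await WebAssembly.instantiate(libBytes, {
    /* runtime imports provided by the host library */
  });
  return { instance, weights };
}
```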
- StableDiffusion can now run directly in the browser on WebGPU
The MLC team got that working back in March: https://github.com/mlc-ai/web-stable-diffusion
Even more impressively, they followed up with support for several Large Language Models: https://webllm.mlc.ai/
- Web StableDiffusion
- [Stable Diffusion] Web Stable Diffusion: running Stable Diffusion directly in the browser without a GPU server
https://github.com/mlc-ai/web-stable-diffusion
- Now that they've started banning Stable Diffusion on Google Colab, what's the cheapest and best way to deploy Stable Diffusion?
You can run it directly in the browser with WebGPU: https://mlc.ai/web-stable-diffusion/
- I've got Stable Diffusion integrated into my site now, fully client side with no setup or servers.
Using the amazing work of https://mlc.ai/web-stable-diffusion/ I've got the code moved into a Web Worker, running fully local, client side. It does require about 2 GB of model files to be downloaded (automatically) and takes a few minutes on first load, but it works, and once it's going it only takes about 20 s to make a 512x512 image.
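A minimal sketch of that Web Worker setup, assuming a hypothetical message protocol (the actual site's code and the web-stable-diffusion API will differ):

```typescript
// main.ts -- keep the heavy inference off the UI thread in a module worker.
// The { type: "generate", prompt } message shape is hypothetical.
const worker = new Worker(new URL("./sd-worker.js", import.meta.url), {
  type: "module",
});

function generate(prompt: string): Promise<ImageData> {
  return new Promise((resolve) => {
    // ImageData is structured-cloneable, so the worker can post it back directly.
    worker.onmessage = (e: MessageEvent<ImageData>) => resolve(e.data);
    worker.postMessage({ type: "generate", prompt });
  });
}

// First call triggers the ~2 GB model download inside the worker; after
// that, a 512x512 generation comes back in roughly 20 s.
const image = await generate("a watercolor painting of a fox");
const canvas = document.querySelector("canvas") as HTMLCanvasElement;
canvas.getContext("2d")!.putImageData(image, 0, 0);
```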
- Chrome Ships WebGPU
The Apache TVM machine learning compiler has a WASM and WebGPU backend, and can import from most DNN frameworks. Here's a project running Stable Diffusion with WebGPU and TVM [1].
Open questions remain around the pre- and post-processing code in folks' Python stacks, which leans on e.g. NumPy and OpenCV. There are some NumPy-to-JS transpilers out there, but they aren't feature-complete or fully integrated.
[1] https://github.com/mlc-ai/web-stable-diffusion
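As a concrete example of that porting burden, a typical Stable Diffusion preprocessing step that is one NumPy line in Python becomes a hand-written loop in the browser (an illustrative sketch, not code from the TVM project):

```typescript
// Hand-ported equivalent of the NumPy one-liner
//   x = (img.astype(np.float32) / 127.5 - 1.0).transpose(2, 0, 1)
// i.e. scale RGB bytes to [-1, 1] and reorder HWC -> CHW for the model.
function preprocess(img: ImageData): Float32Array {
  const { width, height, data } = img; // data is RGBA, 4 bytes per pixel
  const out = new Float32Array(3 * height * width);
  const plane = height * width;
  for (let i = 0; i < plane; i++) {
    for (let c = 0; c < 3; c++) {
      out[c * plane + i] = data[i * 4 + c] / 127.5 - 1.0; // alpha is dropped
    }
  }
  return out;
}
```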
- Bringing stable diffusion models to web browsers
- mlc-ai/web-stable-diffusion: Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
- Web Stable Diffusion: Running Diffusion Models with WebGPU
memory64
- Top 8 Recent V8 Updates
A complete implementation of memory64 for memory-hungry applications.
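From JavaScript, a 64-bit memory is requested through an extended `WebAssembly.Memory` descriptor. A hedged feature probe might look like the sketch below; note that the `address: "i64"` field follows the standardized JS API, earlier proposal drafts spelled it `index`, and engines that predate memory64 may silently ignore the unknown field rather than throw:

```typescript
// Sketch of probing for memory64 support; treat the descriptor shape as an
// assumption if you target older engines or older TypeScript lib definitions.
const descriptor = {
  address: "i64",   // 64-bit address space instead of the default "i32"
  initial: 1,       // in 64 KiB wasm pages
  maximum: 100_000, // ~6.1 GiB ceiling, beyond wasm32's 4 GiB limit
} as any; // lib.dom.d.ts may not know the memory64 fields yet

try {
  const mem64 = new WebAssembly.Memory(descriptor);
  console.log("memory64 memory created:", mem64.buffer.byteLength, "bytes");
} catch {
  console.log("memory64 not supported; fall back to a 32-bit memory");
}
```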
- Extism Makes WebAssembly Easy
Indeed, WebAssembly is moving extremely slowly. I started a project years ago expecting https://github.com/WebAssembly/memory-control/blob/main/prop... and https://github.com/WebAssembly/memory64 to be settled at some point. Neither is yet, and the project still suffers from it to this day.
I think wasm is still great without these fixes, but I have lost confidence in the idea that wasm will reach its full potential any time soon.
- How Photoshop solved working with files larger than can fit into memory
It's in the works: https://github.com/WebAssembly/memory64
Starting with 32-bit had some performance advantages: on 64-bit hosts, runtimes can use virtual-memory shenanigans to implement bounds checking with zero overhead. In wasm64 they'll have to do explicit bounds checks instead.
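To unpack that: a wasm32 engine on a 64-bit host can reserve the full 4 GiB index space (plus a guard region) of virtual address space up front, so every possible 32-bit address lands inside the reservation and an out-of-bounds access traps via a page fault at zero per-access cost. A 64-bit index space can't be reserved that way, so a wasm64 engine must emit the moral equivalent of this check before each load or store (an illustrative sketch, not engine code):

```typescript
// What an explicit wasm64 bounds check amounts to, simulated in TypeScript.
function load32(mem: DataView, addr: bigint, offset: bigint): number {
  const effective = addr + offset; // 64-bit effective address
  // Compare-and-branch on every access -- the overhead wasm32 avoids.
  if (effective + 4n > BigInt(mem.byteLength)) {
    throw new RangeError("out of bounds memory access"); // the wasm trap
  }
  return mem.getUint32(Number(effective), true); // little-endian, as in wasm
}
```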
- Transformers.js
Right - currently, everything runs using WASM (32-bit, with 64-bit coming soon [1,2]), and I plan to add support for WebGPU soon!
(WebGPU, the successor to WebGL, is coming out in April 2023 [3])
[1] https://github.com/WebAssembly/memory64/issues/36#issuecomme...
- What was the rationale for 32-bit memory addresses in WebAssembly? It seems very short-sighted, considering it only came out pretty recently, in 2017
It shouldn't be a big surprise that a 64-bit pointer extension is out there and being worked on. The great thing about a VM is that you can integrate major changes like this when they are needed, with the benefit of experience and hindsight. If the 4 GB limit (the most a 32-bit address can reach is 2^32 bytes = 4 GiB) turns out to be restrictive, then it can be lifted.
- Why Am I Excited About WebAssembly?
- Increasing Smart Contract Canister Memory Proposal is live for review
The goal of this proposal is to increase the amount of memory that canisters can access, [eventually] bounded only by the actual capacity of the subnet. Since the Memory64 proposal is not yet standardized and its implementation in Wasmtime is not yet production-ready, this proposal enables the increase by introducing a new stable memory API.
What are some alternatives?
stable-diffusion-webui-directml - Stable Diffusion web UI
interface-types
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
wasmtime - A fast and secure runtime for WebAssembly
SHA256-WebGPU - Implementation of sha256 in WGSL
botnet - Multiplayer programming game using Rust and WebAssembly
wgpu-py - Next generation GPU API for Python
temporal-polyfill - A lightweight polyfill for Temporal, successor to the JavaScript Date object
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
proposal-temporal - Provides standard objects and functions for working with dates and times.
js-promise-integration - JavaScript Promise Integration
component-sandbox-demo