| | web-stable-diffusion | gpuweb |
|---|---|---|
| Mentions | 21 | 57 |
| Stars | 3,455 | 4,600 |
| Growth | 1.6% | 1.2% |
| Activity | 4.4 | 9.1 |
| Latest commit | about 2 months ago | about 4 hours ago |
| Language | Jupyter Notebook | Bikeshed |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
web-stable-diffusion
-
GPU-Accelerated LLM on a $100 Orange Pi
Yup, here's their web stable diffusion repo: https://github.com/mlc-ai/web-stable-diffusion
The input is a model (weights + runtime lib) compiled via the mlc-llm project: https://mlc.ai/mlc-llm/docs/compilation/compile_models.html
-
StableDiffusion can now run directly in the browser on WebGPU
The MLC team got that working back in March: https://github.com/mlc-ai/web-stable-diffusion
Even more impressively, they followed up with support for several Large Language Models: https://webllm.mlc.ai/
- Web StableDiffusion
-
[Stable Diffusion] Web Stable Diffusion: running Stable Diffusion directly in the browser without a GPU server
https://github.com/mlc-ai/web-stable-diffusion
-
Now that they've started banning Stable Diffusion on Google Colab, what's the cheapest and best way to deploy Stable Diffusion?
You can run it directly in the browser with WebGPU, https://mlc.ai/web-stable-diffusion/
-
I've got Stable Diffusion integrated into my site now, fully client side with no setup or servers.
Using the amazing work of https://mlc.ai/web-stable-diffusion/ I've got the code moved into a Web Worker and running fully locally, client side. It does require 2 GB of model files to be downloaded (automatically), and it takes a few minutes for the first load, but it works, and once it's going it only takes 20s to make a 512x512 image.
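A minimal sketch of that Web Worker setup, for the curious. The worker file name and the generate() entry point below are hypothetical stand-ins, not the actual web-stable-diffusion API:

```ts
// main.ts -- spawn the worker so generation never blocks the UI thread.
const worker = new Worker(new URL("./sd-worker.ts", import.meta.url), {
  type: "module",
});
worker.onmessage = (e: MessageEvent<ImageBitmap>) => {
  // Paint the finished 512x512 image on a canvas.
  const canvas = document.querySelector("canvas") as HTMLCanvasElement;
  canvas.getContext("bitmaprenderer")!.transferFromImageBitmap(e.data);
};
worker.postMessage({ prompt: "a watercolor fox in a forest" });

// sd-worker.ts -- worker side.
declare function generate(prompt: string): Promise<ImageBitmap>; // hypothetical pipeline entry point
self.onmessage = async (e: MessageEvent<{ prompt: string }>) => {
  // The first call triggers the ~2 GB weight download; later calls hit the cache.
  const image = await generate(e.data.prompt);
  (self as unknown as Worker).postMessage(image, [image]); // transfer, don't copy
};
```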
-
Chrome Ships WebGPU
The Apache TVM machine learning compiler has a WASM and WebGPU backend, and can import from most DNN frameworks. Here's a project running Stable Diffusion with WebGPU and TVM [1].
Questions exist around the pre- and post-processing code in folks' Python stacks, e.g. NumPy and OpenCV. There are some NumPy-to-JS transpilers out there, but they aren't feature-complete or fully integrated.
[1] https://github.com/mlc-ai/web-stable-diffusion
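That post-processing gap is usually small enough to hand-write. A sketch of the typical last step, assuming the decoder output is a Float32Array in CHW layout with values in [-1, 1]; the layout and value range here are assumptions, not this project's documented contract:

```ts
// Convert a CHW float image in [-1, 1] to canvas-ready RGBA ImageData,
// i.e. the kind of glue NumPy would normally handle in a Python stack.
function chwFloatToImageData(
  data: Float32Array,
  width: number,
  height: number,
): ImageData {
  const out = new ImageData(width, height);
  const plane = width * height;
  for (let i = 0; i < plane; i++) {
    for (let c = 0; c < 3; c++) {
      // Map [-1, 1] -> [0, 255] and interleave CHW -> RGBA.
      const v = (data[c * plane + i] + 1) * 127.5;
      out.data[i * 4 + c] = Math.max(0, Math.min(255, Math.round(v)));
    }
    out.data[i * 4 + 3] = 255; // opaque alpha
  }
  return out;
}
```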
- Bringing stable diffusion models to web browsers
- mlc-ai/web-stable-diffusion: Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
- Web Stable Diffusion: Running Diffusion Models with WebGPU
gpuweb
-
Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU
Works for me.
WebGPU support is behind a couple of flags on Linux: https://github.com/gpuweb/gpuweb/wiki/Implementation-Status
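A minimal detection sketch (typed per the published @webgpu/types definitions) that separates "API not exposed" from "no usable adapter"; with the flags off you typically hit one of these two cases:

```ts
// Returns a GPUDevice if WebGPU is actually usable, else null.
async function tryWebGPU(): Promise<GPUDevice | null> {
  if (!("gpu" in navigator)) return null; // API not exposed (e.g. flags off)
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return null; // exposed, but no usable adapter/driver
  return adapter.requestDevice();
}
```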
- WGSL Is Terrible
-
WebGPU now available for testing in Safari Technology Preview
People keep spreading this incredibly misleading statement, and yours is even more misleading (suggesting Apple opposed a 'GPU WASM').
By all accounts, Apple's /only/ stance was that if WebGPU used SPIR-V it would be a non-starter for them, due to ongoing legal issues between Apple and Khronos.
Apple actually proposed WebHLSL in collaboration with Microsoft, to have HLSL be the standard.
A Mozilla employee's stance[0] was that SPIR-V was too low-level and did not fit WebGPU's goals of portability and security; they also expressed concern that Khronos might add functionality to SPIR-V that WebGPU could not support, like ray-tracing instructions: 'So we'd always be on the verge of forking SPIR-V in some way.'
It was also noted by many people that even if a bytecode format were used, it would still have to be translated to the target (HLSL/DXIL, MSL, etc.) in almost the same way a text format would be.
Nobody proposed a 'GPU WASM equivalent' or an alternative bytecode format.
The hard truth is that shader compilation is a fucking nightmare; people do not realize how bad it is across the different native APIs. SPIR-V is good, but it doesn't solve that, and it presents other challenges if you are a web browser API. Vulkan and SPIR-V are not the golden goose many make them out to be.
[0] https://github.com/gpuweb/gpuweb/issues/847#issuecomment-642...
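To make the translation point concrete: WGSL ships as plain source text, the browser compiles it, and the backend emits whatever the native API wants (DXIL, MSL, SPIR-V). A sketch, assuming a `device` already obtained via requestAdapter()/requestDevice():

```ts
declare const device: GPUDevice; // assumed to come from adapter.requestDevice()

// WGSL travels as source text; the browser parses and validates it here,
// then translates it to the native backend's format behind the scenes.
const module = device.createShaderModule({
  code: /* wgsl */ `
    @vertex
    fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
      var tri = array<vec2f, 3>(vec2f(-1, -1), vec2f(3, -1), vec2f(-1, 3));
      return vec4f(tri[i], 0, 1);
    }
  `,
});
// Diagnostics come back asynchronously, after the text has been compiled:
const info = await module.getCompilationInfo();
for (const msg of info.messages) console.log(msg.lineNum, msg.message);
```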
-
Show HN: WebGPU Particles Simulation
Yes, it is still a bit new. WebGPU is not finished and is still being worked on: https://webgpu.io/
-
Capturing the WebGPU Ecosystem
WebGPU currently doesn't support the "bindless" resource access model (see: https://github.com/gpuweb/gpuweb/issues/380).
The "max number of sampled textures per shader stage" is a runtime device limit, and the minimum value for it seems to be 16, so texture atlases are still a thing in WebGPU.
-
Why aren't we using highly efficient int8 calculations in quants? (maybe eli14?)
There's even a proposal under discussion to add the dp4a instruction to WebGPU (https://github.com/gpuweb/gpuweb/issues/2677).
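For a feel of what that buys you: in the WGSL that grew out of that discussion, each u32 packs four 8-bit values, and dot4I8Packed multiply-accumulates them in one step, mapping to dp4a where the hardware has it. A sketch; the exact feature gating (packed_4x8_integer_dot_product) is my reading of the current spec and worth double-checking:

```ts
// WGSL compute shader computing int8 dot products over packed u32 words.
const int8DotShader = /* wgsl */ `
  requires packed_4x8_integer_dot_product;

  @group(0) @binding(0) var<storage, read> a: array<u32>;
  @group(0) @binding(1) var<storage, read> b: array<u32>;
  @group(0) @binding(2) var<storage, read_write> acc: array<i32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3u) {
    // Four int8 multiplies plus adds per call, no manual unpacking.
    acc[id.x] = dot4I8Packed(a[id.x], b[id.x]);
  }
`;
```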
- WebGPU – All of the cores, none of the canvas
- How to get Chromium working with the Vulkan driver on a RPi4?
- Anyone has Chromium WebGPU working?
- [Rust_Gamedev] Is WGSL a good choice?
What are some alternatives?
stable-diffusion-webui-directml - Stable Diffusion web UI
wgsl.vim - WGSL syntax highlight for vim
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
pyodide - Pyodide is a Python distribution for the browser and Node.js based on WebAssembly
SHA256-WebGPU - Implementation of sha256 in WGSL
noclip.website - A digital museum of video game levels
wgpu-py - Next generation GPU API for Python
BestBuy-GPU-Bot - An add-to-cart and auto-checkout bot for BestBuy. It repeatedly searches for an item by keyword on the item page, and once the item is in stock it adds it to the cart and checks out quickly. It runs in Firefox, so it works on all operating systems, and it can handle multiple items simultaneously.
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
wgpu-rs - Rust bindings to wgpu native library
js-promise-integration - JavaScript Promise Integration
WASI - WebAssembly System Interface