Chrome Ships WebGPU

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • burn

    Discontinued: Burn is a new comprehensive dynamic deep learning framework built using Rust, with extreme flexibility, compute efficiency, and portability as its primary goals. [Moved to: https://github.com/Tracel-AI/burn] (by burn-rs)

  • I'm presently working on enhancing Burn's (https://burn-rs.github.io/) capabilities by implementing ONNX model import (https://github.com/burn-rs/burn/issues/204). This will enable users to generate model source code at build time and load weights at runtime.

    In my opinion, ONNX is more complex than necessary. Therefore, I opted to convert it to an intermediate representation (IR) first, which is then used to generate source code. A key advantage of this approach is the ease of merging nodes into corresponding operations, since ONNX and Burn don't share the same set of operators.
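
    A hypothetical sketch of that pipeline in TypeScript (none of these types or names come from Burn; they only illustrate why lowering to an IR makes node fusion straightforward when the source and target operator sets differ):

    ```typescript
    // Hypothetical ONNX -> IR -> generated-source sketch; not Burn's actual API.
    type OnnxNode = { op: string; inputs: string[]; outputs: string[] };
    type IrNode = { op: string; inputs: string[]; outputs: string[] };

    // Lower ONNX nodes to an IR, fusing MatMul followed by Add into a single
    // Linear op, since the target framework has a Linear operator rather than
    // ONNX's two-node pattern.
    function lower(nodes: OnnxNode[]): IrNode[] {
      const ir: IrNode[] = [];
      for (let i = 0; i < nodes.length; i++) {
        const n = nodes[i];
        const next = nodes[i + 1];
        if (n.op === "MatMul" && next?.op === "Add" && next.inputs.includes(n.outputs[0])) {
          const bias = next.inputs.filter((x) => x !== n.outputs[0]);
          ir.push({ op: "Linear", inputs: [...n.inputs, ...bias], outputs: next.outputs });
          i++; // skip the Add we just fused
        } else {
          ir.push(n);
        }
      }
      return ir;
    }

    // Emit source text from the IR; in the approach described above this runs at build time.
    function emit(ir: IrNode[]): string {
      return ir
        .map((n) => `let ${n.outputs[0]} = ${n.op.toLowerCase()}(${n.inputs.join(", ")});`)
        .join("\n");
    }

    console.log(emit(lower([
      { op: "MatMul", inputs: ["x", "w"], outputs: ["t0"] },
      { op: "Add", inputs: ["t0", "b"], outputs: ["y"] },
    ])));
    // => let y = linear(x, w, b);
    ```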

  • wonnx

    A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web

  • This makes running larger machine learning models in the browser feasible - see e.g. https://github.com/webonnx/wonnx (I believe Microsoft's ONNXRuntime.js will also soon gain a WebGPU back-end).
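
    A minimal sketch of the browser-side feature check such a runtime needs before choosing a GPU backend (this uses only the standard WebGPU JavaScript API, not wonnx's own interface; types come from @webgpu/types):

    ```typescript
    // Standard WebGPU feature detection; fall back to a WASM/CPU path when unavailable.
    async function pickBackend(): Promise<"webgpu" | "wasm"> {
      if (!("gpu" in navigator)) return "wasm";           // browser has not shipped WebGPU
      const adapter = await navigator.gpu.requestAdapter();
      if (!adapter) return "wasm";                        // no usable GPU adapter
      const device = await adapter.requestDevice();
      // Large models need large storage buffers; check the limit before committing.
      console.log("maxBufferSize:", device.limits.maxBufferSize);
      return "webgpu";
    }
    ```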

  • tfjs

    A WebGL accelerated JavaScript library for training and deploying ML models.

  • People have been doing this with WebGL for a long time; see e.g. https://github.com/tensorflow/tfjs and https://cloudblogs.microsoft.com/opensource/2021/09/02/onnx-...
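
    For reference, the WebGL route looks roughly like this with the public tf.js API (a separate WebGPU backend package also exists but is not shown; exact package versions are an assumption):

    ```typescript
    import * as tf from "@tensorflow/tfjs";

    async function main() {
      await tf.setBackend("webgl"); // GPU acceleration via WebGL shader programs
      await tf.ready();
      const a = tf.randomNormal([1024, 1024]);
      const b = tf.randomNormal([1024, 1024]);
      const c = tf.matMul(a, b);    // executed as a fragment shader, not on the CPU
      console.log(await c.data());  // read the result back from the GPU
    }

    main();
    ```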

  • SHA256-WebGPU

    Implementation of sha256 in WGSL

  • godot-proposals

    Godot Improvement Proposals (GIPs)

  • Nice! I have linked your site to the Godot WebGPU support proposal issue: https://github.com/godotengine/godot-proposals/issues/6646

  • webgpu-blas

    Fast matrix-matrix multiplication on web browser using WebGPU

  • Looks like no -- there appears to be no tensor core or similar support, and this SGEMM (fp32 matrix multiply) benchmark gets awful results (my laptop gets 330 GFLOPS on this when it should be capable of 13,000 GFLOPS).

    https://github.com/milhidaka/webgpu-blas
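
    For context, the GFLOPS figures quoted above follow from the usual SGEMM operation count of roughly 2·M·N·K (one multiply and one add per accumulated term); the timing below is an illustrative assumption chosen to match the ~330 GFLOPS number:

    ```typescript
    // Achieved GFLOPS for an M x K by K x N single-precision matrix multiply.
    function sgemmGflops(m: number, n: number, k: number, seconds: number): number {
      return (2 * m * n * k) / seconds / 1e9;
    }

    // e.g. a 4096^3 multiply finishing in ~0.42 s is ~330 GFLOPS,
    // far below the ~13,000 GFLOPS peak the commenter cites for native code.
    console.log(sgemmGflops(4096, 4096, 4096, 0.42).toFixed(0), "GFLOPS");
    ```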

  • wgpu-mm

  • This is very exciting! (I had suspected it would slip to 114)

    WebGPU implementations are still pretty immature, but certainly enough to get started with. I've been implementing a Rust + WebGPU ML runtime for the past few months and have enjoyed writing WGSL.

    I recently got a 250M parameter LLM running in the browser without much optimisation and it performs pretty well! (https://twitter.com/fleetwood___/status/1638469392794091520)

    That said, matmuls are still pretty handicapped in the browser, especially given the bounds checking it enforces. From my benchmarking I've struggled to hit 50% of theoretical FLOPS, which drops to 30% once bounds checking kicks in. (Benchmarks here: https://github.com/FL33TW00D/wgpu-mm)

    I look forward to accessing shader cores as they mentioned in the post.
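
    A rough sketch of how such browser-side matmul benchmarks can be timed from JavaScript (standard WebGPU API only; the pipeline, bind group, and WGSL kernel are assumed to exist elsewhere, and wall-clock timing like this only gives an approximate figure):

    ```typescript
    // Time one compute dispatch and convert to achieved GFLOPS.
    // `flops` is the operation count of the kernel, e.g. 2*M*N*K for a matmul.
    async function timeDispatch(
      device: GPUDevice,
      pipeline: GPUComputePipeline,
      bindGroup: GPUBindGroup,
      workgroupsX: number,
      workgroupsY: number,
      flops: number,
    ): Promise<number> {
      const t0 = performance.now();
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginComputePass();
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.dispatchWorkgroups(workgroupsX, workgroupsY);
      pass.end();
      device.queue.submit([encoder.finish()]);
      await device.queue.onSubmittedWorkDone(); // wait for the GPU queue to drain
      const seconds = (performance.now() - t0) / 1e3;
      return flops / seconds / 1e9;             // achieved GFLOPS
    }
    ```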

  • mach

    zig game engine & graphics toolkit

  • This is very welcome and a long time coming!

    If you're eager to learn WebGPU, consider checking out Mach[0], which makes it very easy to develop with it natively in Zig today. We aim to be a competitor-in-spirit to Unity/Unreal/Godot, but extremely modular. As part of that we have Mach core, which just provides Window+Input+WebGPU, plus ~19 standalone WebGPU examples[1].

    Currently we only support native desktop platforms, but we're working towards browser support. WebGPU is very nice because it lets us target desktop+wasm+mobile for truly cross-platform games and native applications.

    [0] https://github.com/hexops/mach

    [1] https://github.com/hexops/mach-examples/tree/main/core

  • laskin.live

    An online calculator, but you can only use it on your remote friend’s GPU

  • For anyone figuring out how to run WebGPU on a remote computer (over WebRTC), see this: https://github.com/periferia-labs/laskin.live

    Not sure if it works anymore (I made it 3 years ago), but it will be interesting to see if there will be similar products for LLMs and so on.

  • web-stable-diffusion

    Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.

  • The Apache TVM machine learning compiler has a WASM and WebGPU backend, and can import from most DNN frameworks. Here's a project running Stable Diffusion with WebGPU and TVM [1].

    Questions remain around the pre- and post-processing code in folks' Python stacks, e.g. NumPy and OpenCV. There are some NumPy-to-JS transpilers out there, but they aren't feature-complete or fully integrated.

    [1] https://github.com/mlc-ai/web-stable-diffusion
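
    To make that pre/post-processing point concrete, here is a small sketch of the kind of work that lives in NumPy/OpenCV on the Python side and has to be rewritten for the browser; the mean/std constants are the common ImageNet values, used purely as an illustrative assumption:

    ```typescript
    // Convert a canvas ImageData (RGBA, 0..255) into a normalized NCHW
    // Float32Array, as a vision or diffusion model would expect.
    function toTensor(img: ImageData): Float32Array {
      const { width: w, height: h, data } = img;
      const mean = [0.485, 0.456, 0.406];
      const std = [0.229, 0.224, 0.225];
      const out = new Float32Array(3 * w * h);
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          const p = (y * w + x) * 4;
          for (let c = 0; c < 3; c++) {
            out[c * w * h + y * w + x] = (data[p + c] / 255 - mean[c]) / std[c];
          }
        }
      }
      return out;
    }
    ```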

  • wgpu-py

    Next generation GPU API for Python

  • FYI, you can already use WebGPU directly in Python; see https://github.com/pygfx/wgpu-py for WebGPU wrappers and https://github.com/pygfx/pygfx for a higher-level graphics library.

  • pygfx

    A python render engine running on wgpu.


  • gpuweb

    Where the GPU for the Web work happens!

  • This is a huge milestone. It's also part of a much larger journey. In my work on developing Vello, an advanced 2D renderer, I have come to believe WebGPU is a game changer. We're going to have reasonably modern infrastructure that runs everywhere: web, Windows, macOS, Linux, ChromeOS, iOS, and Android. You're going to see textbooks(*), tutorials, benchmark suites, and tons of sample code and projects to learn from.

    WebGPU 1.0 is a lowest-common-denominator product. As 'FL33TW00D points out, matrix multiplication performance is much lower than you'd hope from native. However, it *is* possible to run machine learning workloads, and getting that performance back is merely an engineering challenge. A few extensions are needed, in particular cooperative matrix multiply (also known as tensor cores, WMMA, or simd_matrix). That in turn depends on subgroups, which have some complex portability concerns[1].

    Bindless is another thing everybody wants. The wgpu team is working on a native extension[2], which will inform web standardization as well. I am confident this will happen.

    The future looks bright. If you are learning GPU programming, I now highly recommend WebGPU, as it lets you learn modern techniques (including compute), and those skills will transfer to native APIs including Vulkan, Metal, and D3D12.

    Disclosure: I work at Google and have been involved in WebGPU development, but on a different team and as one who has been quite critical of aspects of WebGPU.

    (*): If you're writing a serious, high quality textbook on compute with WebGPU, then I will collaborate on a chapter on prefix sums / scan.

    [1]: https://github.com/gpuweb/gpuweb/issues/3950

    [2]: https://docs.rs/wgpu/latest/wgpu/struct.Features.html#associ...
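
    Since the footnote mentions prefix sums, here is a minimal single-workgroup inclusive scan as a taste of WebGPU compute from JavaScript. It is a sketch, not anything from Vello or a textbook, and a full-size scan would still need a second pass to combine per-workgroup totals:

    ```typescript
    // WGSL: Hillis-Steele inclusive scan over one 256-element workgroup.
    const scanWgsl = /* wgsl */ `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      var<workgroup> scratch: array<f32, 256>;

      @compute @workgroup_size(256)
      fn main(@builtin(local_invocation_id) lid: vec3<u32>,
              @builtin(global_invocation_id) gid: vec3<u32>) {
        let i = lid.x;
        scratch[i] = data[gid.x];
        workgroupBarrier();
        for (var offset = 1u; offset < 256u; offset = offset * 2u) {
          var v = scratch[i];
          if (i >= offset) { v = v + scratch[i - offset]; }
          workgroupBarrier();   // all reads finish before any thread writes
          scratch[i] = v;
          workgroupBarrier();   // all writes finish before the next round reads
        }
        data[gid.x] = scratch[i];
      }
    `;

    // Build the compute pipeline with the standard WebGPU JS API.
    function buildScanPipeline(device: GPUDevice): GPUComputePipeline {
      const module = device.createShaderModule({ code: scanWgsl });
      return device.createComputePipeline({
        layout: "auto",
        compute: { module, entryPoint: "main" },
      });
    }
    ```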

  • stablehlo

    Backward compatible ML compute opset inspired by HLO/MHLO

  • Also see the recently introduced StableHLO and its serialization format: https://github.com/openxla/stablehlo/blob/main/docs/bytecode...

