webgpu-blas
stablehlo
| | webgpu-blas | stablehlo |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 97 | 333 |
| Growth | - | 10.8% |
| Activity | 4.8 | 9.8 |
| Last commit | 3 months ago | 2 days ago |
| Language | TypeScript | MLIR |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
webgpu-blas
Chrome Ships WebGPU
Looks like no -- there appears to be no tensor-core or similar support, and this SGEMM (fp32 matrix multiply) benchmark gets awful results (my laptop gets 330 GFLOPS on this when it should be capable of 13,000 GFLOPS).
https://github.com/milhidaka/webgpu-blas
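For context, throughput figures like those come from dividing an SGEMM's floating-point operation count by the measured wall time. A minimal sketch of the arithmetic (the matrix size and timing below are illustrative assumptions, not measurements from the benchmark):

```typescript
// A square SGEMM C = A*B with dimensions M, N, K performs 2*M*N*K
// floating-point operations (one multiply and one add per inner-product term).
function sgemmGflops(m: number, n: number, k: number, seconds: number): number {
  const flops = 2 * m * n * k;
  return flops / seconds / 1e9;
}

// Illustrative: a 4096^3 SGEMM finishing in ~0.416 s works out to ~330 GFLOPS,
// roughly the figure quoted in the comment above.
const g = sgemmGflops(4096, 4096, 4096, 0.416);
```

Note that on the GPU side the timing would typically need timestamp queries or repeated dispatches to amortize submission overhead; naive `Date.now()` around a single dispatch mostly measures the queue, not the shader.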
Modern JavaScript: Everything you missed over the last 10 years (ECMAScript 2020)
I think you will be interested to read this article about the future of data programming in JavaScript (http://benschmidt.org/post/2020-01-15/2020-01-15-webgpu/).
I do think that this kind of thing will be able to be built on top of WebGPU (I saw this experimental POC that did so recently: https://github.com/milhidaka/webgpu-blas). The only issue is that since JavaScript doesn't support operator overloading, the code might be a little less readable.
- JavaScript for Data Science
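The operator-overloading point can be illustrated with a hypothetical `Tensor` class (an assumption for illustration, not webgpu-blas's actual API): without overloading, elementwise math must be spelled as method calls.

```typescript
// Hypothetical Tensor class to show the readability cost of
// JavaScript's lack of operator overloading.
class Tensor {
  constructor(public data: number[]) {}
  add(other: Tensor): Tensor {
    return new Tensor(this.data.map((v, i) => v + other.data[i]));
  }
  mul(other: Tensor): Tensor {
    return new Tensor(this.data.map((v, i) => v * other.data[i]));
  }
}

const a = new Tensor([1, 2]);
const b = new Tensor([3, 4]);
const c = new Tensor([5, 6]);

// In a language with operator overloading this would read `a + b * c`;
// in JavaScript/TypeScript it becomes a method chain:
const result = a.add(b.mul(c)); // [1 + 3*5, 2 + 4*6] = [16, 26]
```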
stablehlo
Nvidia H200 Tensor Core GPU
I am going to paste a cousin comment:
StableHLO[1] is an interesting project that might help AMD here:
> Our goal is to simplify and accelerate ML development by creating more interoperability between various ML frameworks (such as TensorFlow, JAX and PyTorch) and ML compilers (such as XLA and IREE).
From there, their goal would most likely be to work with the XLA/OpenXLA teams on XLA[3] and IREE[2] to make ROCm a better backend.
[1] https://github.com/openxla/stablehlo
[2] https://github.com/openxla/iree
[3] https://www.tensorflow.org/xla
Chrome Ships WebGPU
Also see the recently introduced StableHLO and its serialization format: https://github.com/openxla/stablehlo/blob/main/docs/bytecode...
OpenXLA Is Available Now
If you mean StableHLO, then it has an MLIR dialect: https://github.com/openxla/stablehlo/blob/main/stablehlo/dia....
In the StableHLO spec, we are talking about this in more abstract terms - "StableHLO opset" - to be able to unambiguously reason about the semantics of StableHLO programs. However, in practice the StableHLO dialect is the primary implementation of the opset at the moment.
I wrote "primary implementation" because e.g. there is also ongoing work on adding StableHLO support to the TFLite flatbuffer schema: https://github.com/tensorflow/tensorflow/blob/master/tensorf.... Having an abstract notion of the StableHLO opset enables us to have a source of truth that all the implementations correspond to.
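For a feel of what the dialect implementation looks like, here is a minimal StableHLO program in the dialect's textual form (a sketch; the authoritative syntax is defined in the spec and dialect sources linked above):

```mlir
// Elementwise addition of two rank-1 f32 tensors in the StableHLO dialect.
func.func @add(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
  %0 = stablehlo.add %arg0, %arg1 : tensor<4xf32>
  func.return %0 : tensor<4xf32>
}
```

The same program, once serialized to the bytecode format or re-expressed in another implementation such as the TFLite flatbuffer schema, should carry the same opset-level semantics.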
What are some alternatives?
numjs - Like NumPy, in JavaScript
wonnx - A WebGPU-accelerated ONNX inference runtime written 100% in Rust, ready for native and the web
Material UI - Ready-to-use foundational React components, free forever. It includes Material UI, which implements Google's Material Design.
SHA256-WebGPU - Implementation of sha256 in WGSL
wgpu-mm
iree - A retargetable MLIR-based machine learning compiler and runtime toolkit.
next-auth - Authentication for the Web.
SHARK - High Performance Machine Learning Distribution
icpts - TypeScript implementation of iterative closest point (ICP) for point cloud registration
glare-core - C++ code used in various Glare Tech Ltd products