tinygrad vs wonnx

| | tinygrad | wonnx |
|---|---|---|
| Mentions | 17 | 18 |
| Stars | 24,018 | 1,493 |
| Growth (stars, month over month) | 3.3% | 4.6% |
| Activity | 10.0 | 6.3 |
| Last commit | 6 days ago | about 1 month ago |
| Language | Python | Rust |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tinygrad
-
AMD Unveils Ryzen 8000G Series Processors: Zen 4 APUs for Desktop with Ryzen AI
Not sure if I completely understand what "Ryzen AI" does, but Tinygrad, for example, has some limited support for RDNA3[0]. It isn't quite there yet in terms of performance though, as you can read in the comments of that file.
There's also a small tutorial by AMD on how to use the WMMA intrinsic[1] with AMD's hipcc[2] compiler. Documentation is kinda sparse, but the instruction set is not huge. The RDNA3 ISA guide[3] might also be helpful (and only a fraction of its pages are relevant).
0. https://github.com/tinygrad/tinygrad/blob/master/extra/gemm/...
1. https://gpuopen.com/learn/wmma_on_rdna3/
2. https://github.com/ROCm/HIPCC
3. https://www.amd.com/content/dam/amd/en/documents/radeon-tech...
- Tinygrad 0.8.0 Release
-
Beyond Backpropagation - Higher Order, Forward and Reverse-mode Automatic Differentiation for Tensorken
This post describes how I added automatic differentiation to Tensorken. Tensorken is my attempt to build a fully featured yet easy-to-understand and hackable implementation of a deep learning library in Rust. It takes inspiration from the likes of PyTorch, Tinygrad, and JAX.
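To make the idea concrete, here is a minimal dual-number sketch of forward-mode automatic differentiation in Python. It is a generic illustration of the technique the post covers, not Tensorken's actual (Rust) implementation:

```python
class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.grad + other.grad)

    def __mul__(self, other):
        other = self._wrap(other)
        # Product rule: (u * v)' = u' * v + u * v'
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)

def derivative(f, x):
    """Seed dx/dx = 1, run f on dual numbers, read off df/dx."""
    return f(Dual(x, 1.0)).grad

# f(x) = x^2 + 3x, so f'(2) = 2*2 + 3 = 7
print(derivative(lambda x: x * x + x * 3.0, 2.0))  # 7.0
```

Nesting `Dual` inside `Dual` is the standard trick behind the higher-order derivatives in the post's title; reverse mode instead records the computation and replays it backwards.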
-
[D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
What do you think about tinygrad? I think it's a good example of a growing, well-written, (partially) well-documented library with many close-to-reference implementations.
-
AMD MI300 Performance - Faster Than H100, but How Much?
The idea of model architecture making fast hardware design easier is what makes https://github.com/tinygrad/tinygrad so interesting.
-
7 Open-Source DevTools That Save Time You Didn't Know Exist
Website: https://tinygrad.org/
- Tinygrad
-
How to train an Iris dataset classifier with Tinygrad
Before we begin, make sure you have TinyGrad and the required dependencies installed. You can find the installation instructions here.
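For the overall shape, here is a hedged sketch of such a classifier. The API names (`Tensor`, `nn.Linear`, `nn.optim.Adam`, `sparse_categorical_crossentropy`) match tinygrad around 0.8.x and may have drifted since; loading Iris through scikit-learn is my assumption, not necessarily how the tutorial does it:

```python
import numpy as np
from sklearn.datasets import load_iris
from tinygrad import Tensor, nn

X, y = load_iris(return_X_y=True)
X_t, y_t = Tensor(X.astype(np.float32)), Tensor(y)

class IrisNet:
    def __init__(self):
        self.l1 = nn.Linear(4, 16)   # 4 input features
        self.l2 = nn.Linear(16, 3)   # 3 classes
    def __call__(self, x: Tensor) -> Tensor:
        return self.l2(self.l1(x).relu())

model = IrisNet()
opt = nn.optim.Adam(nn.state.get_parameters(model), lr=1e-3)

with Tensor.train():  # enable training mode for the loop
    for _ in range(500):
        opt.zero_grad()
        loss = model(X_t).sparse_categorical_crossentropy(y_t)
        loss.backward()
        opt.step()

preds = model(X_t).argmax(axis=1).numpy()
print(f"train accuracy: {(preds == y).mean():.2f}")
```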
-
Decomposing Language Models into Understandable Components
Try to get something like tinygrad[1] running locally; that way you can tweak things a bit, run it again, and see how it performs (a minimal example follows the links below). While doing this you'll pick up most of the concepts and get a feeling for how things work. Also take a look at projects like llama.cpp[2]; you don't have to fully understand what's going on there, though.
You may need some intermediate knowledge of linear algebra and of what nowadays gets called "data science", which is pretty much knowing how to wrangle data and visualize it.
Try creating a small model on your own. It doesn't have to be super fancy; just make sure it does something you want it to do. From there you'll probably be able to continue on your own.
1: https://github.com/tinygrad/tinygrad
2: https://github.com/ggerganov/llama.cpp
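The barrier to that kind of tweaking is low; something like the following is enough to run tinygrad end to end and start tracing how an op executes (this uses only the stable core of the Tensor API):

```python
# Tensor ops build a lazy graph; .numpy() forces it to actually execute.
from tinygrad import Tensor
print(((Tensor([1.0, 2.0, 3.0]) * 2) + 1).numpy())  # -> [3. 5. 7.]
```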
- Tinygrad 0.7.0
wonnx
-
Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
The two I know of are IREE and Kompute[1]. I'm not sure how much momentum the latter has; I don't see it referenced much. There's also a growing body of work that uses Vulkan indirectly through WebGPU. This currently lags in performance due to the lack of subgroups and cooperative matrix multiply, but I see that gap closing. There I think wonnx[2] has the most momentum, but I am aware of other efforts.
[1]: https://kompute.cc/
[2]: https://github.com/webonnx/wonnx
-
VkFFT: Vulkan/CUDA/Hip/OpenCL/Level Zero/Metal Fast Fourier Transform Library
To a first approximation, Kompute[1] is that. It doesn't seem to be catching on; I'm seeing more buzz around WebGPU solutions, including wonnx[2] and more hand-rolled approaches, and around IREE[3], the latter of which has a Vulkan back-end.
[1]: https://kompute.cc/
[2]: https://github.com/webonnx/wonnx
[3]: https://github.com/openxla/iree
-
Onnx Runtime: "Cross-Platform Accelerated Machine Learning"
There's also a third-party WebGPU implementation: https://github.com/webonnx/wonnx
-
Are there any ML crates that would compile to WASM?
By "experimental" I meant e.g. using wgpu to run compute shaders, as wonnx does, which works fine but only on a very restricted set of devices and browsers.
- WebGPU ONNX inference runtime written in Rust
-
PyTorch Primitives in WebGPU for the Browser
https://news.ycombinator.com/item?id=35696031 ... TIL about wonnx: https://github.com/webonnx/wonnx#in-the-browser-using-webgpu...
microsoft/onnxruntime: https://github.com/microsoft/onnxruntime
Apache/arrow has language-portable Tensors for cpp: https://arrow.apache.org/docs/cpp/api/tensor.html and rust: https://docs.rs/arrow/latest/arrow/tensor/struct.Tensor.html and Python: https://arrow.apache.org/docs/python/api/tables.html#tensors https://arrow.apache.org/docs/python/generated/pyarrow.Tenso...
Fwiw it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
Are there recommendable ways to cast e.g. Arrow Tensors to PyTorch/TensorFlow? (One route via NumPy is sketched below.)
FWIU, Rust has a better compilation path to WASM, and that's probably faster than already-compiled-to-JS/ES TensorFlow + WebGPU.
What's a fair benchmark?
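On the conversion question above: one well-trodden route goes through NumPy, since both Arrow and PyTorch can wrap a NumPy buffer without copying. A sketch, for contiguous CPU data:

```python
import numpy as np
import pyarrow as pa
import torch

arr = np.arange(12, dtype=np.float32).reshape(3, 4)

arrow_t = pa.Tensor.from_numpy(arr)   # NumPy -> Arrow Tensor (zero-copy)
back = arrow_t.to_numpy()             # Arrow Tensor -> NumPy (zero-copy)
torch_t = torch.from_numpy(back)      # shares the NumPy buffer, no copy

print(torch_t.shape, torch_t.dtype)   # torch.Size([3, 4]) torch.float32
```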
-
rustformers/llm: Run inference for Large Language Models on CPU, with Rust
wonnx has done some fantastic work in this regard, so that's where we plan to start once we get there. In terms of general discussion of alternate backends, see this issue.
-
I want to talk about WebGPU
> GPU in other ways, such as training ML models and then using them via an inference engine all powered by your local GPU?
Have a look at wonnx: https://github.com/webonnx/wonnx
A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
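Since wonnx consumes standard ONNX files, the training side of that workflow can be any framework with an ONNX exporter. A sketch of the split using PyTorch's `torch.onnx.export`; the toy model and file name are made up for illustration:

```python
import torch
import torch.nn as nn

# Illustrative toy classifier; train it however you like first.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()  # export in inference mode

torch.onnx.export(
    model,
    torch.randn(1, 4),        # dummy input fixing the input shape
    "classifier.onnx",
    input_names=["input"],
    output_names=["output"],
)
# classifier.onnx can then be loaded on the wonnx side
# (per its README: wonnx::Session::from_path in Rust, or the nnx CLI).
```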
-
Chrome Ships WebGPU
Looking forward to your WebGPU ML runtime! Also, why not contribute back to WONNX? (https://github.com/webonnx/wonnx)
-
OpenXLA Is Available Now
You can indeed perform inference using WebGPU (see e.g. [1] for GPU-accelerated inference of ONNX models on WebGPU; I am one of the authors).
The point made above is that WebGPU can only be used for GPUs and not really for other types of 'neural accelerators' (like e.g. the ANE on Apple devices).
[1] https://github.com/webonnx/wonnx
What are some alternatives?
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
stablehlo - Backward compatible ML compute opset inspired by HLO/MHLO
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
onnx - Open standard for machine learning interoperability
llama.cpp - LLM inference in C/C++
tract - Tiny, no-nonsense, self-contained, Tensorflow and ONNX inference
llama - Inference code for Llama models
iree - A retargetable MLIR-based machine learning compiler and runtime toolkit.
openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.
burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
tensorflow_macos - TensorFlow for macOS 11.0+ accelerated using Apple's ML Compute framework.
blaze - A Rustified OpenCL Experience