awesome-tensor-compilers vs tvm

| | awesome-tensor-compilers | tvm |
|---|---|---|
| Mentions | 9 | 15 |
| Stars | 2,171 | 11,186 |
| Growth | - | 1.3% |
| Activity | 4.4 | 9.9 |
| Last commit | 4 months ago | 4 days ago |
| Language | Python | - |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-tensor-compilers
-
MatX: Faster Chips for LLMs
> So long as Pytorch only practically works with Nvidia GPUs, everything else is little more than a rounding error.
This is changing.
https://github.com/merrymercy/awesome-tensor-compilers
There are more and better projects that can compile an existing PyTorch codebase into a more optimized format for a range of devices. Triton (which is part of PyTorch), TVM, and the MLIR-based efforts (like Torch-MLIR or IREE) are the big ones, but there are smaller fish like GGML and tinygrad, and more narrowly focused projects like Meta's AITemplate (which works on AMD datacenter GPUs).
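As a minimal sketch of what compiling an existing PyTorch model through the Triton-backed path looks like, using torch.compile's default Inductor backend (the toy model and shapes here are illustrative, not from the original comment):

```python
import torch

# Illustrative model; any existing nn.Module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# torch.compile's default Inductor backend emits Triton kernels when
# running on a CUDA device; on CPU it falls back to C++/OpenMP codegen.
compiled = torch.compile(model)

x = torch.randn(32, 128)
out = compiled(x)  # first call triggers compilation; later calls are fast
```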
Hardware is in a strange place now... It feels like everyone but Cerebras and AMD/Intel was squeezed out, but with all the money pouring in, I think this is temporary.
-
Run Llama2-70B in Web Browser with WebGPU Acceleration
I think this is true of AI compilation in general. Torch MLIR, AITemplate and really everything here fly under the radar.
https://github.com/merrymercy/awesome-tensor-compilers#open-...
-
Ask HN: How to get good as a self taught ML engineer?
> I really want to do some great work and help people.
Have you looked into ML compilation?
https://github.com/merrymercy/awesome-tensor-compilers
IMO there is low-hanging fruit in the space between high-performance ML compilers/runtimes and the actual projects people use. If you practice porting the projects you use to these frameworks, that would give you a massive performance edge.
-
Ask HN: What new programming language(s) are you most excited about?
While not all of these are "languages" per se, I am excited about the various ML compilation efforts:
https://github.com/merrymercy/awesome-tensor-compilers
Modern ML training/inference is inefficient and lacks any portability. These frameworks are how that changes.
-
Research Papers on ML in Compilers
You might be interested in this: https://github.com/merrymercy/awesome-tensor-compilers
-
The Distributed Tensor Algebra Compiler (2022)
* collection of papers in https://github.com/merrymercy/awesome-tensor-compilers
I also have an interest in the community more broadly associated with pandas/dataframe-like languages (e.g. modin/dask/ray/polars/ibis), with substrait/calcite/arrow as their choice of IR.
- A list of compiler projects and papers for tensor computation and deep learning
- A List of Tensor Compilers
-
C-for-Metal: High Performance SIMD Programming on Intel GPUs
Compiling from a high-level language to the GPU is a huge problem, and we greatly appreciate efforts to solve it.
If I understand correctly, this (CM) allows for C-style fine-grained control over a GPU device as though it were a CPU.
However, it does not appear to address data transit, which is critical for performance. Compilation and operator fusion to minimize transit are possibly more important. See Graphcore Poplar, TensorFlow XLA, ArrayFire, PyTorch Glow, etc.
Further, this obviously only applies to Intel GPUs, so investing time in this low-level control is possibly a hardware dead end.
The dream world for programmers is one where data transit and hardware architecture are taken into account without living inside a proprietary DSL. Conversely, it is obviously against hardware manufacturers' interests to create this.
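To make the fusion/transit point above concrete, a minimal sketch (illustrative shapes; torch.compile stands in here for any of the fusing compilers named above):

```python
import torch

def unfused(x):
    # Eagerly, these are three separate kernels: each one reads its
    # input from memory and writes an intermediate back, so the data
    # transits memory three times.
    a = x * 2.0
    b = torch.sin(a)
    return b + 1.0

# A fusing compiler can emit a single kernel for the whole elementwise
# chain, so the tensor crosses memory once instead of three times.
fused = torch.compile(unfused)

x = torch.randn(1_000_000)
assert torch.allclose(unfused(x), fused(x), atol=1e-6)
```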
Is MLIR / LLVM going to solve this? This list has been interesting to consider:
https://github.com/merrymercy/awesome-tensor-compilers
tvm
-
Making AMD GPUs competitive for LLM inference
Yes, this is coming! Folks at OctoML and in the TVM community, myself included, are actively working on multi-GPU support in the compiler and runtime. Here are some of the merged and active PRs on the multi-GPU (multi-device) roadmap:
Support in TVM’s graph IR (Relax) - https://github.com/apache/tvm/pull/15447
-
VSL: Vlang's Scientific Library
Would it make sense to add backend support for OpenXLA, Apache TVM, Jittor, or something similar to get GPU, TPU, and other accelerator support for free?
- Apache TVM
-
MLC LLM - "MLC LLM is a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases."
I have tried the iPhone app. It's fast. They're using Apache TVM, which should allow better use of native accelerators on different devices, like using Metal on Apple and Vulkan or CUDA or whatever, instead of just running the thing on the CPU like llama.cpp.
-
ONNX Runtime merges WebGPU back end
I was going to answer the same. I find the approach of machine learning compilers that compile models directly to host and device code better than having to bring along a huge runtime. There are exciting projects in this area like TVM Unity [1], IREE [2], or torch.export [3]; a minimal torch.export sketch follows the links below.
[1] https://github.com/apache/tvm/tree/unity
[2] https://github.com/openxla/iree
[3] https://pytorch.org/get-started/pytorch-2.0/#inference-and-e...
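For a sense of the torch.export path mentioned in [3], a minimal sketch with a toy module (illustrative only):

```python
import torch

# Toy module standing in for a real model.
class Net(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

# torch.export captures the whole forward graph ahead of time as an
# ExportedProgram, which AOT compilers can lower to host/device code
# without shipping the eager PyTorch runtime.
ep = torch.export.export(Net(), (torch.randn(4, 8),))
print(ep.graph_module.code)  # the captured graph as Python source
```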
-
Esp32 tensorflow lite
Apache TVM home page: https://tvm.apache.org/
-
Decompiling x86 Deep Neural Network Executables
It's pretty clear it's referring to the output of Apache TVM and Meta's Glow.
-
Run Stable Diffusion on Your M1 Mac’s GPU
As mentioned in sibling comments, Torch is indeed the glue in this implementation. Other glues are TVM [0] and ONNX [1].
These just cover the neural net though, and there is lots of surrounding code and pre-/post-processing that isn't covered by these systems.
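As a hedged illustration of that boundary, exporting just the network to ONNX might look like this (the model and tensor names are placeholders); everything around it has to ship as ordinary code:

```python
import torch

# Placeholder network; a real pipeline would export its trained model.
model = torch.nn.Linear(10, 2).eval()
dummy_input = torch.randn(1, 10)

# Only the forward pass lands in net.onnx; tokenization and other
# pre-/post-processing are not captured and must ship separately.
torch.onnx.export(
    model, dummy_input, "net.onnx",
    input_names=["x"], output_names=["y"],
)
```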
For models on Replicate, we use Docker, packaged with Cog for this stuff.[2] Unfortunately Docker doesn't run natively on Mac, so if we want to use the Mac's GPU, we can't use Docker.
I wish there was a good container system for Mac. Even better if it were something that spanned both Mac and Linux. (Not as far-fetched as it seems... I used to work at Docker and spent a bit of time looking into this...)
[0] https://tvm.apache.org/
-
How to get started with machine learning.
Or use TVM: the idea is to compile your model into code that you can load at runtime. Like onnxruntime, it only does DNN inference, so you still need your own domain-specific code.
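A hedged sketch of that compile-then-load workflow using TVM's Relay ONNX frontend (the file name, input name, and shape are assumptions):

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Assumed model file and input signature; substitute your own.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)}
)

# Compile for the local CPU; swap the target for "cuda", "metal",
# "vulkan", etc. to hit other devices.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
lib.export_library("model.so")  # self-contained artifact to ship

# Later, at runtime: load the compiled module and run inference.
loaded = tvm.runtime.load_module("model.so")
module = graph_executor.GraphModule(loaded["default"](tvm.cpu(0)))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```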
-
An open-source library for optimizing deep learning inference. (1) You select the target optimization, (2) nebullvm searches for the best optimization techniques for your model-hardware configuration, and then (3) serves an optimized model that runs much faster at inference.
Open-source projects leveraged by nebullvm include OpenVINO, TensorRT, Intel Neural Compressor, SparseML and DeepSparse, Apache TVM, ONNX Runtime, TFlite and XLA. A huge thank you to the open-source community for developing and maintaining these amazing projects.
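Purely to illustrate that three-step flow, a hypothetical sketch; the optimize_model entry point and its arguments are assumptions based on nebullvm's README of the era, not a verified API:

```python
import torch
# NOTE: the entry point and argument names below are assumptions
# modeled on nebullvm's documentation, not a verified API.
from nebullvm import optimize_model

model = torch.nn.Linear(128, 10)
sample_inputs = [((torch.randn(1, 128),), None)]  # input format assumed

# Step (2) above: search the leveraged backends (TensorRT, OpenVINO,
# ONNX Runtime, TVM, ...) and return the fastest variant found.
optimized_model = optimize_model(model, input_data=sample_inputs)
```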
What are some alternatives?
Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
alpa - Training and serving large-scale neural networks with auto parallelization.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
Fable - The project has moved to a separate organization. This project provides a redirect for the old Fable web site.
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Distributed-Systems-Guide - Distributed Systems Guide
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
nebuly - The user analytics platform for LLMs
awesome-machine-learning-in-compilers - Must read research papers and links to tools and datasets that are related to using machine learning for compilers and systems optimisation