iree vs onnx
| | iree | onnx |
|---|---|---|
| Mentions | 10 | 38 |
| Stars | 2,379 | 16,803 |
| Growth | 4.4% | 2.0% |
| Activity | 10.0 | 9.5 |
| Latest commit | 3 days ago | 9 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
iree
-
Calyx, a Compiler Infrastructure for Accelerator Generators
How is this different from the MLIR infrastructure of LLVM and XLA implemented in https://iree.dev/?
-
Running pre-trained ML models in Godot
So I have been developing this GDExtension called iree.gd. Its mission is to embed IREE, another cool project that compiles and runs ML models, into Godot. It took me quite a while, but it has finally reached alpha. I hope you can check out the sample.
-
Nvidia H200 Tensor Core GPU
I am going to paste a cousin comment:
StableHLO[1] is an interesting project that might help AMD here:
> Our goal is to simplify and accelerate ML development by creating more interoperability between various ML frameworks (such as TensorFlow, JAX and PyTorch) and ML compilers (such as XLA and IREE).
From there, their goal would most likely be to work with the XLA/OpenXLA teams on XLA[3] and IREE[2] to make ROCm a better backend.
[1] https://github.com/openxla/stablehlo
[2] https://github.com/openxla/iree
[3] https://www.tensorflow.org/xla
-
Nvidia reveals new A.I. chip, says costs of running LLMs will drop significantly
I want to point out that the Google project https://github.com/openxla/iree exists: IREE can take TensorFlow, PyTorch, and MLIR workloads and run them on CPU, Vulkan compute, CUDA, ROCm, Metal, and other backends.
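A minimal sketch of that flow from Python, assuming the iree-compiler package's compile_str helper (exact API names and backend identifiers vary between IREE releases, so treat this as illustrative):

```python
# Hedged sketch: compiling a tiny MLIR function with the iree-compiler
# Python package. API details differ across IREE releases.
import iree.compiler as ireec

MLIR_SOURCE = """
func.func @abs(%x: f32) -> f32 {
  %y = math.absf %x : f32
  return %y : f32
}
"""

# compile_str lowers the program to IREE's deployable VM FlatBuffer format.
# Swapping target_backends (e.g. ["vulkan-spirv"], ["cuda"], ["rocm"])
# retargets the same program, subject to how the compiler was built.
vmfb = ireec.compile_str(MLIR_SOURCE, target_backends=["llvm-cpu"])

with open("abs.vmfb", "wb") as f:
    f.write(vmfb)
```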
https://github.com/RechieKho/IREE.gd -- RechieKho and I collaborate on making this work for Godot Engine, but IREE.gd is at a proof-of-concept stage.
-
VkFFT: Vulkan/CUDA/Hip/OpenCL/Level Zero/Metal Fast Fourier Transform Library
To a first approximation, Kompute[1] is that. It doesn't seem to be catching on; I'm seeing more buzz around WebGPU solutions, including wonnx[2] and more hand-rolled approaches, and IREE[3], the last of which has a Vulkan back-end.
[1]: https://kompute.cc/
[2]: https://github.com/webonnx/wonnx
[3]: https://github.com/openxla/iree
-
Requiem for Piet-GPU-Hal
In the ML section you mentioned Kompute and MediaPipe. Have you seen IREE? It has a Vulkan-like compute-only HAL. https://github.com/iree-org/iree
-
PyTorch on Apple M1 Faster Than TensorFlow-Metal
Exactly the kind of things we've been talking about! A fun and challenging tradeoff space and it's always great to connect with others!
Ahh linebender - I hadn't connected the name with your github account - piet-gpu is great, as is your blog! Also, for anyone skimming the comments this talk is fantastic and I share it with anyone new to the GPGPU space: https://www.youtube.com/watch?v=DZRn_jNZjbw
We waffled a bit with the API granularity in the beginning and it's taken building out most of the rest of the project in order to nail it down (the big refactor still pending). The biggest issue is that in simple models we'll end up emitting a single command buffer, but anything with control flow (that we can't predicate), data dependencies (sparsity, thresholding, etc), or CPU work in the middle (IO, custom user code, etc) can break that up. We also hit cases where we need to flush work - such as if we run out of usable memory and need to defragment or resize our pools.

We want to be able to (but aren't yet) reuse command buffers (CUDA graphs, etc), and that requires being able to both cache them and recreate them on demand (if we resize a pool we have to invalidate all cached command buffers using those resources, as update-after-bind is not universally available, and if shapes change there are big ripples). Since most models beyond simple vision ones are ~thousands of dispatches, it also lets us better integrate into multithreaded applications like you mention, as apps can record commands for themselves in parallel without synchronization.

It still would be nice to have certain operations inlined, though, and for that we want to allow custom hooks that we call into to add commands to the command buffers, turning things inside-out to make small amounts of work like image transformations in-between model layers possible (I'm really hoping we can avoid modeling the entire graphics pipeline in the compiler and this would be a way around that :). We haven't yet started on scheduling across queues but that's also very interesting, especially in multi-GPU cases (with x4/x8 GPUs being common in datacenters, or NUMA CPU clusters that can be scheduled similarly).
We're fully open source (https://github.com/google/iree) but have been operating quietly while we get the groundwork in place - it's taken some time but now we're finally starting to stumble into success on certain problem categories (like transformers as in the post). Right now it's mostly just organized as a systems/compiler nerd honeypot for people looking for an ML/number crunching framework that (purposefully) doesn't look like any of the existing ones :)
Would love to chat more - even if just to commiserate over GPU APIs and such - everyone is welcome on the discord where a bunch of us nerds have gathered or we could grab virtual coffee (realized just now that this hn acct is ancient - I'm [email protected] :)
-
WONNX: Deep Learning on WebGPU using the ONNX format.
If you're interested in really pushing yourself, perhaps you can look at https://github.com/google/iree?
-
GPU computing on Apple Silicon
This doesn't answer your question, but it would be cool if we had something based on MLIR for GPU compute. From what I've read, it closes the gap between NVIDIA and other GPU vendors a lot more than pure compute shaders do; e.g. ONNX-MLIR, PlaidML, and IREE.
onnx
- Onyx, a new programming language powered by WebAssembly
-
From Lab to Live: Implementing Open-Source AI Models for Real-Time Unsupervised Anomaly Detection in Images
Once your model has been trained and validated using Anomalib, the next step is to prepare it for real-time implementation. This is where ONNX (Open Neural Network Exchange) or OpenVINO (Open Visual Inference and Neural Network Optimization) comes into play.
-
Object detection with ONNX, Pipeless and a YOLO model
ONNX is an open format from the Linux Foundation for representing machine learning models. It is being widely adopted by the machine learning community and is compatible with most machine learning frameworks, such as PyTorch and TensorFlow. Converting a model between any of those formats and ONNX is really simple and can in most cases be done with a single command.
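As a hedged illustration of that single-command conversion from PyTorch, here is a sketch using torch.onnx.export; resnet18 is just a stand-in example model:

```python
import torch
import torchvision

# Illustrative sketch: exporting a (randomly initialized) torchvision model
# to ONNX. The conversion itself is the single torch.onnx.export call.
model = torchvision.models.resnet18(weights=None)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example input pins down graph shapes
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=17)
```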
-
38TB of data accidentally exposed by Microsoft AI researchers
ONNX[0], model-as-protobufs, continuing to gain adoption will hopefully solve this issue.
[0] https://github.com/onnx/onnx
-
Reddit’s LLM text model for Ads Safety
Running inference for large models on CPU is not a new problem and fortunately there has been great development in many different optimization frameworks for speeding up matrix and tensor computations on CPU. We explored multiple optimization frameworks and methods to improve latency, namely TorchScript, BetterTransformer and ONNX.
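For context, plain CPU inference through ONNX Runtime looks roughly like the sketch below; the model file name is a placeholder, not Reddit's actual model:

```python
import numpy as np
import onnxruntime as ort

# Run an exported model on CPU via ONNX Runtime's CPU execution provider.
session = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# run(None, ...) returns all model outputs as NumPy arrays.
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```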
-
Operationalize TensorFlow Models With ML.NET
ONNX is a format for representing machine learning models in a portable way. Additionally, ONNX models can be easily optimized and thus become smaller and faster.
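One hedged sketch of such an optimization, using ONNX Runtime's dynamic quantization helper (file names are placeholders, and the size/speed gains depend on the model):

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamic quantization rewrites eligible float32 weights as int8 at rest,
# which usually shrinks the file and can speed up CPU inference.
quantize_dynamic(
    model_input="resnet18.onnx",        # placeholder input path
    model_output="resnet18.int8.onnx",  # placeholder output path
    weight_type=QuantType.QInt8,
)
```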
-
Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
I would say onnx.ai [0] provides more information about ONNX for those who aren’t working with ML/DL.
[0] https://onnx.ai
-
Does ONNX Runtime not support Double/float64?
It's not clear why you think this sub is appropriate for some third-party system with a Python interface. Why don't you try their discussion group: https://github.com/onnx/onnx/discussions
-
Async behaviour in python web frameworks
This kind of indirection through standardisation is pretty common to make compatibility between different kinds of software components easier. Some other good examples are the LSP project from Microsoft and ONNX to represent machine learning models. The first provides a standard so that IDEs don't have to reinvent the wheel for every programming language. The latter decouples training frameworks from inference frameworks. Going back to WSGI, you can find a pretty extensive rationale for the WSGI standard here if interested.
- Pickle safety in Python
What are some alternatives?
onnx-mlir - Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
torch-mlir - The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem.
stable-diffusion-webui - Stable Diffusion web UI
cutlass - CUDA Templates for Linear Algebra Subroutines
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
plaidml - PlaidML is a framework for making deep learning work everywhere.
stable-diffusion - A latent text-to-image diffusion model
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]