LOGICGUIDE vs ml-compiler-opt

| | LOGICGUIDE | ml-compiler-opt |
|---|---|---|
| Mentions | 2 | 7 |
| Stars | 15 | 647 |
| Growth | - | 2.0% |
| Activity | 6.1 | 8.7 |
| Last Commit | over 1 year ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
LOGICGUIDE
- Large Language Models for Compiler Optimization
- [D] Potential scammer on github stealing work of other ML researchers?
I was looking for an implementation of this paper, https://arxiv.org/pdf/2306.04031.pdf, so I searched for "logicguide github" and found this repo: https://github.com/kyegomez/LOGICGUIDE
ml-compiler-opt
- Large Language Models for Compiler Optimization
I did a bit of work on this last summer on (much) smaller models [1], and it was briefly discussed towards the end of last year's MLGO panel [2]. For heuristic replacement specifically, you might be able to glean some things (or just use interpretable models like decision trees), but a neural network works fundamentally differently from the existing heuristics, so you probably wouldn't see most of the performance gains. For just tuning heuristics, the usual practice is to make most of the parameters configurable and then use something like Bayesian optimization to find an optimal set; this is sometimes done as a baseline in ML-in-compiler research (a minimal sketch of such a tuning loop follows the link below).
1. https://github.com/google/ml-compiler-opt/pull/109
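To make that baseline concrete, here is a minimal sketch of such a tuning loop, assuming clang is installed on PATH. Plain random sampling stands in for a real Bayesian optimizer, and the source file name and threshold range are illustrative assumptions, not anything taken from the linked work.

```python
import os
import random
import subprocess
import tempfile

# Hypothetical example: tune LLVM's -inline-threshold for binary size.
# Assumes clang is on PATH and "input.c" exists; both are placeholders.
SOURCE = "input.c"

def binary_size(threshold: int) -> int:
    """Compile SOURCE at -Oz with the given inline threshold; return binary size."""
    with tempfile.NamedTemporaryFile(suffix=".out", delete=False) as f:
        out = f.name
    try:
        subprocess.run(
            ["clang", "-Oz", "-mllvm", f"-inline-threshold={threshold}",
             SOURCE, "-o", out],
            check=True,
        )
        return os.path.getsize(out)
    finally:
        os.unlink(out)

# Random search here; a library such as scikit-optimize would
# replace this loop with Bayesian optimization in practice.
best = None
for _ in range(20):
    t = random.randint(0, 500)
    size = binary_size(t)
    if best is None or size < best[1]:
        best = (t, size)

print(f"best inline-threshold={best[0]} -> {best[1]} bytes")
```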
- How to make smaller C and C++ binaries
If you're using Clang/LLVM, you can also enable ML inlining [1] (assuming you build from source), which can save up to around 7% if all goes well; a rough sketch of what that looks like follows the links below.
There is also talk of work on just brute-forcing the inlining-for-size problem for embedded releases of smallish applications. It's definitely feasible if the problem is important enough to you to throw some compute at it [2].
1. https://github.com/google/ml-compiler-opt
2. https://doi.org/10.1145/3503222.3507744
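As a rough illustration of what "enable ML inlining" means in practice, the sketch below compiles the same file with and without the release-mode ML inline advisor and compares sizes. The flag `-mllvm -enable-ml-inliner=release` is the upstream LLVM switch; the file name is a placeholder, and the comparison assumes a clang built from source with the inliner model embedded (see the build note further down).

```python
import os
import subprocess

# Illustrative file name; any C/C++ translation unit works.
SOURCE = "app.c"

def compile_size(extra_flags):
    """Compile SOURCE at -Oz plus extra_flags; return the output size in bytes."""
    out = f"app_{'ml' if extra_flags else 'base'}.out"
    subprocess.run(["clang", "-Oz", *extra_flags, SOURCE, "-o", out], check=True)
    return os.path.getsize(out)

base = compile_size([])
# Requires a clang built with the MLGO inliner model compiled in;
# otherwise this flag will be rejected or have no effect.
ml = compile_size(["-mllvm", "-enable-ml-inliner=release"])

print(f"baseline -Oz:    {base} bytes")
print(f"with ML inliner: {ml} bytes ({100 * (base - ml) / base:.1f}% smaller)")
```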
- A code optimization Ai?
LLVM's inlining-for-size and register-allocation-for-performance optimizations are both implemented using machine learning models trained by Google.
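The inlining-for-size flag appears in the sketch above; for the register-allocation side, upstream LLVM exposes the ML eviction advisor through a similar `-mllvm` switch. The invocation below is a plausible sketch assuming a toolchain built with the regalloc model embedded; the file names are placeholders.

```python
import subprocess

# Performance-oriented build using the ML register-allocation
# (eviction) advisor; assumes clang was built with the model embedded.
subprocess.run(
    ["clang", "-O2", "hot_loop.c", "-o", "hot_loop",
     "-mllvm", "-regalloc-enable-advisor=release"],
    check=True,
)
```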
- Google AI Proposes ‘MLGO’: A Machine Learning Guided Compiler Optimization Python Framework
Continue reading | Check out the paper, GitHub repo, demo, and reference article.
- Google ML Compiler Inlining Achieves 3-7% Reduction in Size
Looks like they do have a pretrained model:
https://github.com/google/ml-compiler-opt/releases/download/...
By default, the build process auto-downloads the model. It's about 800 KB, which seems very reasonable for something that can reduce generated code size by gigabytes on a large codebase.
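For context, that auto-download is wired into LLVM's CMake configuration: setting `LLVM_INLINER_MODEL_PATH=download` asks CMake to fetch the most recent compatible release model at configure time, with `TENSORFLOW_AOT_PATH` pointing at the TensorFlow package that AOT-compiles it into clang. The sketch below reproduces that configure step; the directory layout and generator choice are illustrative assumptions.

```python
import subprocess
import tensorflow  # provides the AOT compiler that bakes the model into clang

# Configure an LLVM build that embeds the released inlining model.
# LLVM_INLINER_MODEL_PATH="download" fetches the most recent
# compatible pretrained model; paths here are placeholders.
subprocess.run(
    ["cmake", "-G", "Ninja", "-DCMAKE_BUILD_TYPE=Release",
     f"-DTENSORFLOW_AOT_PATH={tensorflow.__path__[0]}",
     "-DLLVM_INLINER_MODEL_PATH=download",
     "-DLLVM_ENABLE_PROJECTS=clang",
     "../llvm-project/llvm"],
    check=True,
)
```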
What are some alternatives?
Perceiver - Implementation of Perceiver, General Perception with Iterative Attention
capstone - Capstone disassembly/disassembler framework for ARM, ARM64 (ARMv8), Alpha, BPF, Ethereum VM, HPPA, LoongArch, M68K, M680X, Mips, MOS65XX, PPC, RISC-V(rv32G/rv64G), SH, Sparc, SystemZ, TMS320C64X, TriCore, Webassembly, XCore and X86.
certified-reasoning - Certified Reasoning with Language Models
bloaty - Bloaty: a size profiler for binaries
ToolEmu - [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use
connectedhomeip - Matter (formerly Project CHIP) creates more connections between more objects, simplifying development for manufacturers and increasing compatibility for consumers, guided by the Connectivity Standards Alliance.
zeta - Build high-performance AI models with modular building blocks
Dalle3 - An API for DALLE-3
llvm-project - The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
Sophia - Effortless plugin and play Optimizer to cut model training costs by 50%. New optimizer that is 2x faster than Adam on LLMs.
tab-transformer-pytorch - Implementation of TabTransformer, attention network for tabular data, in Pytorch