ml-compiler-opt Alternatives
Similar projects and alternatives to ml-compiler-opt
-
llvm-project
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
-
connectedhomeip
Matter (formerly Project CHIP) creates more connections between more objects, simplifying development for manufacturers and increasing compatibility for consumers, guided by the Connectivity Standards Alliance.
-
capstone
Capstone disassembly/disassembler framework for ARM, ARM64 (ARMv8), Alpha, BPF, Ethereum VM, HPPA, LoongArch, M68K, M680X, Mips, MOS65XX, PPC, RISC-V(rv32G/rv64G), SH, Sparc, SystemZ, TMS320C64X, TriCore, Webassembly, XCore and X86.
-
LOGICGUIDE
Plug-and-play implementation of "Certified Reasoning with Language Models", which elevates model reasoning by 40%
-
ml-compiler-opt discussion
ml-compiler-opt reviews and mentions
-
Large Language Models for Compiler Optimization
I did a bit of work on this last summer on (much) smaller models [1], and it was briefly discussed towards the end of last year's MLGO panel [2]. For heuristic replacement specifically, you might be able to glean some things (or just use interpretable models like decision trees), but a neural network works fundamentally differently from the existing heuristics, so you probably wouldn't see most of the performance gains. For just tuning heuristics, the usual practice is to make most of the parameters configurable and then use something like Bayesian optimization to search for a good set; this is sometimes done as a baseline in ML-in-compilers research. A minimal sketch of that baseline follows below.
1. https://github.com/google/ml-compiler-opt/pull/109
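A minimal sketch of that Bayesian-optimization baseline, assuming scikit-optimize is installed and using LLVM's real -inline-threshold knob as the single tunable parameter; the source file name, search range, and call budget are all illustrative, not anything from the project:

    # Sketch: tune one compiler heuristic parameter with Bayesian
    # optimization, using object size as the objective. The flag
    # `-mllvm -inline-threshold=N` is a real LLVM knob; file names,
    # search range, and call budget are illustrative.
    import os
    import subprocess

    from skopt import gp_minimize  # pip install scikit-optimize
    from skopt.space import Integer

    def object_size(params):
        """Compile one file at a given inline threshold; return object size."""
        (threshold,) = params
        subprocess.run(
            ["clang", "-O2", "-c", "app.c", "-o", "app.o",
             "-mllvm", f"-inline-threshold={threshold}"],
            check=True,
        )
        return os.path.getsize("app.o")  # smaller is better

    result = gp_minimize(object_size, [Integer(0, 1000)],
                         n_calls=30, random_state=0)
    print(f"best inline-threshold: {result.x[0]} ({result.fun} bytes)")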
-
How to make smaller C and C++ binaries
If you're using Clang/LLVM, you can also enable ML inlining [1] (assuming you build from source), which can save up to around 7% in code size if all goes well.
There has also been talk of simply brute-forcing the inlining-for-size problem for embedded releases of smallish applications. That's definitely feasible if the problem is important enough to you to throw some compute at it [2]. A quick way to measure the effect on your own code is sketched after the footnotes.
1. https://github.com/google/ml-compiler-opt
2. https://doi.org/10.1145/3503222.3507744
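To put a number on "if all goes well" for your own codebase, one rough approach is to compile the same translation unit with and without the advisor and compare object sizes. A sketch, assuming a from-source clang with the release model compiled in; the `-enable-ml-inliner=release` opt-in flag is documented in the MLGO materials, and the file names are placeholders:

    # Sketch: compare object size with and without the ML inlining advisor.
    # Assumes a from-source clang with the release model embedded;
    # file names are placeholders.
    import os
    import subprocess

    def compiled_size(extra_flags):
        subprocess.run(
            ["clang", "-Oz", "-c", "main.c", "-o", "main.o", *extra_flags],
            check=True,
        )
        return os.path.getsize("main.o")

    baseline = compiled_size([])
    with_ml = compiled_size(["-mllvm", "-enable-ml-inliner=release"])
    print(f"baseline: {baseline} B, ML inliner: {with_ml} B "
          f"({(baseline - with_ml) / baseline:.1%} smaller)")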
-
A code optimization Ai?
LLVM's inlining-for-size and register-allocation-for-performance optimizations are both implemented using machine learning models trained by Google; both are opt-in flags in a suitably built clang, as sketched below.
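A sketch of switching on the register-allocation (eviction) advisor for a performance-oriented compile; the flag name follows LLVM's MLGO documentation, while the file names are placeholders and a clang built with the release model is assumed:

    # Sketch: enable the ML register-allocation eviction advisor.
    # Assumes a clang built with the release model; file names are
    # placeholders.
    import subprocess

    subprocess.run(
        ["clang", "-O2", "-mllvm", "-regalloc-enable-advisor=release",
         "-c", "hot_loop.c", "-o", "hot_loop.o"],
        check=True,
    )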
-
Google AI Proposes ‘MLGO’: A Machine Learning Guided Compiler Optimization Python Framework
Check out the paper, GitHub repo, demo, and reference article.
-
Google ML Compiler Inlining Achieves 3-7% Reduction in Size
Looks like they do have a pretrained model:
https://github.com/google/ml-compiler-opt/releases/download/...
The build auto-downloads it by default. At about 800 kilobytes, the model seems very reasonable for something that can shave gigabytes off the generated code of a large codebase. The cmake wiring behind the download is sketched below.
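For reference, a sketch of configuring an LLVM build so the released model is fetched and AOT-compiled into clang, following the options shown in the MLGO demo docs; the source and build paths are placeholders, and TensorFlow is assumed to be installed to supply the AOT tooling:

    # Sketch: configure an LLVM build that downloads the released inlining
    # model and AOT-compiles it into clang. Options follow the MLGO demo
    # docs; the source and build paths are placeholders.
    import subprocess
    import tensorflow as tf  # pip install tensorflow; provides AOT tooling

    subprocess.run(
        ["cmake", "-G", "Ninja",
         "-DCMAKE_BUILD_TYPE=Release",
         "-DLLVM_ENABLE_PROJECTS=clang",
         f"-DTENSORFLOW_AOT_PATH={tf.__path__[0]}",
         "-DLLVM_INLINER_MODEL_PATH=download",  # fetch the released model
         "/path/to/llvm-project/llvm"],
        cwd="/path/to/build",
        check=True,
    )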
-
Stats
google/ml-compiler-opt is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of ml-compiler-opt is Python.