capstone VS ml-compiler-opt

Compare capstone vs ml-compiler-opt and see what their differences are.

capstone

Capstone disassembly/disassembler framework for ARM, ARM64 (ARMv8), BPF, Ethereum VM, M68K, M680X, Mips, MOS65XX, PPC, RISC-V(rv32G/rv64G), SH, Sparc, SystemZ, TMS320C64X, TriCore, Webassembly, XCore and X86. (by capstone-engine)
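
The engine itself is a C library with bindings for many languages. A minimal sketch using the Python binding (an illustrative assumption: the binding installed from PyPI as "capstone") disassembles a few x86-64 bytes:

    # Sketch using Capstone's Python binding; the core engine is a C library.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    CODE = b"\x55\x48\x89\xe5\xc3"        # push rbp; mov rbp, rsp; ret
    md = Cs(CS_ARCH_X86, CS_MODE_64)      # x86 in 64-bit mode
    for insn in md.disasm(CODE, 0x1000):  # 0x1000 is the load address
        print(f"0x{insn.address:x}: {insn.mnemonic} {insn.op_str}")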

ml-compiler-opt

Infrastructure for Machine Learning Guided Optimization (MLGO) in LLVM. (by google)
               capstone                                     ml-compiler-opt
Mentions       7                                            7
Stars          7,025                                        583
Growth         1.7%                                         2.7%
Activity       9.0                                          7.9
Last commit    6 days ago                                   14 days ago
Language       C                                            Python
License        GNU General Public License v3.0 or later     Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

capstone

Posts with mentions or reviews of capstone. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-31.

ml-compiler-opt

Posts with mentions or reviews of ml-compiler-opt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-17.
  • Large Language Models for Compiler Optimization
    3 projects | news.ycombinator.com | 17 Sep 2023
    I did a bit of work on this last summer on (much) smaller models [1] and it was briefly discussed towards the end of last year's MLGO panel [2]. For heuristic replacements specifically, you might be able to glean some things (or just use interpretable models like decision trees), but something like a neural network works fundamentally differently from the existing heuristics, so you probably wouldn't see most of the performance gains. For just tuning heuristics, the usual practice is to make most of the parameters configurable and then use something like Bayesian optimization to try to find an optimal set (see the sketch after this post); this is sometimes done as a baseline in ML-in-compiler research.

    1. https://github.com/google/ml-compiler-opt/pull/109
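
A minimal sketch of the heuristic-tuning baseline described in the post above. The use of scikit-optimize, the inline-threshold knob, and the file names are illustrative assumptions, not something ml-compiler-opt itself does:

    # Sketch: search for an inline threshold that minimizes object-file size.
    # Assumes clang is on PATH and scikit-optimize is installed.
    import os
    import subprocess

    from skopt import gp_minimize      # Bayesian optimization over a search space
    from skopt.space import Integer

    def object_size(params):
        """Compile a fixed test file with the given inline threshold and
        return the size in bytes of the resulting object file."""
        threshold = params[0]
        subprocess.run(
            ["clang", "-Oz", "-c", "test.c", "-o", "test.o",
             "-mllvm", f"-inline-threshold={threshold}"],
            check=True,
        )
        return os.path.getsize("test.o")

    # 30 compile-and-measure trials over thresholds in [0, 500].
    result = gp_minimize(object_size, [Integer(0, 500)], n_calls=30, random_state=0)
    print("best threshold:", result.x[0], "object size:", result.fun)
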

  • How to make smaller C and C++ binaries
    4 projects | news.ycombinator.com | 7 May 2023
    If you're using Clang/LLVM you can also enable ML inlining [1] (assuming you build from source), which can save up to around 7% in code size if all goes well (see the sketch after this post).

    There is also talk of work on just brute-forcing the inlining-for-size problem for embedded releases of smallish applications. It's definitely feasible if the problem is important enough to you to throw some compute at it [2].

    1. https://github.com/google/ml-compiler-opt

    2. https://doi.org/10.1145/3503222.3507744
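
A minimal sketch of what enabling the ML inliner from a build script might look like. Assumptions: a clang built from source with the MLGO release model compiled in, and the -mllvm -enable-ml-inliner=release flag used in the project's inlining demo:

    # Sketch: compile one file for size with the ML inlining policy enabled.
    # Assumes a from-source clang with the MLGO release model built in.
    import subprocess

    subprocess.run(
        ["clang", "-Oz",
         "-mllvm", "-enable-ml-inliner=release",  # use the embedded ML policy
         "-c", "main.c", "-o", "main.o"],
        check=True,
    )
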

  • A code optimization Ai?
    1 project | /r/AskProgramming | 9 Jan 2023
    LLVM's inlining-for-size and register-allocation-for-performance optimizations are both implemented using machine learning models trained by Google.
  • Google AI Proposes ‘MLGO’: A Machine Learning Guided Compiler Optimization Python Framework
    1 project | /r/artificial | 10 Jul 2022
    Check out the paper, GitHub repo, demo, and reference article.
  • Google ML Compiler Inlining Achieves 3-7% Reduction in Size
    4 projects | news.ycombinator.com | 6 Jul 2022
    Looks like they do have a pretrained model:

    https://github.com/google/ml-compiler-opt/releases/download/...

    The code will, by default, auto-download it during the build process. It's about 800 KB, which seems very reasonable for something that will reduce the generated code size by gigabytes for a large codebase.

What are some alternatives?

When comparing capstone and ml-compiler-opt, you can also consider the following projects:

aya - Aya is an eBPF library for the Rust programming language, built with a focus on developer experience and operability.

LOGICGUIDE - Plug in and Play implementation of "Certified Reasoning with Language Models" that elevates model reasoning by 40%

Unicorn Engine - Unicorn CPU emulator framework (ARM, AArch64, M68K, Mips, Sparc, PowerPC, RiscV, S390x, TriCore, X86)

llvm-project - The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.

convis

bloaty - Bloaty: a size profiler for binaries

certified-reasoning - Certified Reasoning with Language Models

liquidator - open source version of a liquidation bot running against solend

connectedhomeip - Matter (formerly Project CHIP) creates more connections between more objects, simplifying development for manufacturers and increasing compatibility for consumers, guided by the Connectivity Standards Alliance.

Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.

qemu