Why doesn't Rust care more about compiler performance?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  1. Cargo

    The Rust package manager

    That work is being tracked in https://github.com/rust-lang/cargo/issues/5931

    Someone has taken up the work on this, though there are some foundational steps to get through first.

    1. We need to delineate intermediate and final build artifacts so people have a clearer understanding of what in `target/` has stability guarantees (implemented, awaiting stabilization).

    2. We then need to re-organize the target directory from being organized by file type to being organized by crate instance.

    3. We need to re-do the file locking for `target/` so that when we share things, one Cargo process won't lock out your entire system.

    4. We can then start exploring moving intermediate artifacts into a central location.

    There are some caveats to this initial implementation:

    - To avoid cache poisoning, this will only cover items with immutable source and an idempotent build, leaving out your local source and anything that depends on build scripts or proc-macros. There is work underway to reduce the reliance on build scripts and proc-macros. We may also need a "trust me, this is idempotent" flag for some remaining cases.

    - A new instance of a crate will be created in the cache if any of its dependencies changes version, reducing reuse. This gets worse because foundational crates release frequently, and because when adding or updating a specific dependency, Cargo prefers to keep every other existing version in place, which makes the resulting dependency tree hard to predict. Support for remote caches, especially if you can use your project's CI as a cache source, would help a lot with this (see the sketch below for what people do today).
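
    Until that shared cache exists, a common stop-gap (a sketch of today's workarounds, not the planned design) is to point projects at one shared artifact directory and/or put a caching wrapper in front of rustc. Both keys below are existing, stable Cargo options; the path is illustrative, and `sccache` is an external tool that also supports remote cache backends such as a CI cache:

    ```toml
    # .cargo/config.toml (e.g. in $CARGO_HOME) -- a workaround, not the planned
    # shared build cache described above.

    [build]
    # Reuse one target directory across projects. Cargo's coarse locking (see
    # step 3 above) means concurrent builds will serialize on it.
    target-dir = "/home/user/.cache/shared-cargo-target"  # illustrative path

    # Cache individual rustc invocations; assumes sccache is installed.
    rustc-wrapper = "sccache"
    ```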

  2. Buildkite

    The Buildkite Agent is an open-source toolkit written in Go for securely running build jobs on any device or network (by buildkite)

    > That doesn't sound likely. I would expect seconds unless something very odd is happening.

    And yet here we are.

    There are plenty of stories like this floating around about degenerate cases in small projects. Here's one example [0] with numbers and how they solved it. There are enough of these that getting bogged down in "well, technically it's not Rust's fault, it's LLVM's single-threadedness causing the slowdown here" misses the point: Rust (very fairly) has a reputation for being dog-slow to compile, even compared to large C++ projects.

    > For the rest of the workspace, 60k of rust builds in 60 seconds

    That's... not fast.

    https://github.com/buildkite/agent is 40k lines of Go according to cloc, and running `go build`, including pulling dependencies, takes 40 seconds. Without pulling dependencies it's 2 seconds. _That's_ fast.

    [0] https://www.feldera.com/blog/cutting-down-rust-compile-times...
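
    Not necessarily what the linked post did, but the generic first knobs people reach for on a slow debug build are producing less debuginfo and handing linking to a faster linker. Both are plain Cargo/rustc configuration as sketched below; the target triple assumes Linux, and lld has to be installed separately:

    ```toml
    # Cargo.toml -- shrink what the dev profile has to emit.
    [profile.dev]
    debug = "line-tables-only"  # keeps usable backtraces, writes far less debuginfo

    # .cargo/config.toml -- link with lld instead of the default system linker.
    [target.x86_64-unknown-linux-gnu]
    rustflags = ["-C", "link-arg=-fuse-ld=lld"]
    ```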

  3. symbolicator

    Native Symbolication as a Service

    I work on large C++ code bases day in, day out - think 30-minute compiles on an i9 with 128 GB RAM and NVMe drives.

    Rust's compile times are still ungodly slow. I contributed to a “small to medium” open source project [0] a while back, fixing a few issues that we came across when using it. Even though the project is roughly three orders of magnitude smaller than my day-to-day project, a clean build of its few thousand lines of Rust took close to 10 minutes. Incremental changes to the project were still closer to a minute at the time. I’ve never worked on a 5M+ LOC project in Rust, but I can only imagine how long it would take.

    On the flip side, I also submitted some patches to a golang program of a similar size [1], and it was faster to clone, install dependencies, and clean-build that project than a single-file change to the Rust project was.

    [0] https://github.com/getsentry/symbolicator

    [1] https://github.com/buildkite/agent

  4. rustc_codegen_cranelift

    Cranelift based backend for rustc

    > I wonder how much value there is in skipping LLVM in favor of having an optimizing JIT linked in instead. For release builds it would get you a reasonable proxy if it optimized decently, while still retaining better debuggability.

    Rust is in the process of building out the Cranelift backend. Cranelift was originally built to be a JIT compiler. The hope is that this can become the backend for debug builds.

    https://github.com/rust-lang/rustc_codegen_cranelift
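
    For reference, a minimal sketch of what opting in currently looks like on nightly, as I understand the docs (it assumes the `rustc-codegen-cranelift-preview` rustup component is installed, and the details may change while the feature is unstable):

    ```toml
    # Cargo.toml -- use Cranelift for debug builds of this package.
    # Requires a nightly toolchain; "codegen-backend" is an unstable cargo feature.
    cargo-features = ["codegen-backend"]

    [package]
    name = "example"   # placeholder package
    version = "0.1.0"
    edition = "2021"

    [profile.dev]
    codegen-backend = "cranelift"   # release builds keep using LLVM
    ```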

  5. yaksplained

    All the SerenityOS Yaks, explained

    Coincidentally, I discovered this glorious page literally five minutes ago:

    https://github.com/SerenityOS/yaksplained?tab=readme-ov-file...

  6. tokio

    A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...

    docs.rs is just barely viable because it only has to build crates once (for one set of features, one target platform etc.).

    What you propose would have to build each crate for at least the 8 Tier 1 targets, if not also the 91 Tier 2 targets. That would be either 8 or 99 binaries already.

    Then consider that it's difficult to anticipate which feature combinations a user will need. For example, the tokio crate has 14 features [1]. Any combination of 14 different features gives 2^14 = 16384 possible configurations that would all need to be built. Now to be fair, these feature choices are not completely independent, e.g. the "full" feature selects a bunch of other features. Taking these options out, I'm guessing that we will end up with (ballpark) 5000 reasonable configurations. Multiply that by the number of build targets, and we will need to build either 40000 (Tier 1 only) or 495000 binaries for just this one crate.

    On top of that, the interface of a dependency crate can change between versions, so the tokio crate would either have to pin exact dependency versions (which would be DLL hell, and is why exact version locking is not commonly used for Rust libraries), or we would need to build the tokio crate separately for every dependency version change that is ABI-incompatible somewhere. But even without that, storing tens of thousands of compiled variants per crate is very clearly untenable.

    Rust has very clearly chosen the path of "pay only for what you use", which is why all these library features exist in the first place. But because they do, offering prebuilt artifacts is not viable at scale.
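
    To make the combinatorics concrete: each downstream crate asks for its own feature subset in its manifest, and tokio is compiled with exactly that subset, so a prebuilt artifact would be needed per subset (times targets, times ABI-relevant dependency versions). The dependency line below is illustrative; the feature names are real tokio features [1]:

    ```toml
    # One application's Cargo.toml -- only these tokio features get compiled in.
    [dependencies]
    tokio = { version = "1", features = ["rt-multi-thread", "macros", "net"] }

    # A different project asking for, say, features = ["full"] (or resolving a
    # shared dependency to a different version) needs a different compiled copy
    # of tokio -- which is why prebuilt binaries multiply so quickly.
    ```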

    [1] https://github.com/tokio-rs/tokio/blob/master/tokio/Cargo.to...

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • Tilde, My LLVM Alternative

    6 projects | news.ycombinator.com | 21 Jan 2025
  • Introducing our Next-Generation JavaScript SDK

    4 projects | dev.to | 25 Nov 2024
  • What part of Rust compilation is the bottleneck?

    1 project | news.ycombinator.com | 16 Mar 2024
  • Rust is now officially supported on some Infineon microcontrollers! (more to come later this year)

    1 project | /r/rust | 8 Mar 2023
  • Replacing LLVM for Rust: Cranelift based back end for rustc

    1 project | news.ycombinator.com | 22 Sep 2022