The system linker strips debug info from the executable. On macOS, the debugger finds debug info because it stays in the object files: the executable only carries references to those object files, and the debugger follows the references to read the debug info. I have no experience with Rust, but in my opinion this should be the default behavior for debug builds; there's no need to generate a dSYM file. A dSYM is for shipping debug info with a final release build to customers, not for regular development workflows.
It's possible to get the linker to keep the debug info by tweaking the section attributes. By default, all debug-info-related sections carry the `S_ATTR_DEBUG` flag; if that is replaced with the `S_REGULAR` flag, the linker will keep those sections. The DMD D compiler does this for the `__debug_line` section [1][2][3], which allows uncaught exceptions to print a stack trace with filenames and line numbers. Of course, DMD uses a custom backend, so this change was easy; Rust, which relies on LLVM, would probably need a fork.
[1] https://github.com/dlang/dmd/pull/8168
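As a concrete sketch of what "replacing the flag" means at the bit level: the two constants below come from Apple's `<mach-o/loader.h>`, and `S_ATTR_DEBUG` is a single attribute bit in a section header's 32-bit `flags` word, so clearing it (leaving `S_REGULAR`, which is zero) is what makes the linker treat the section as ordinary content. The helper function names are mine for illustration, not anything from DMD or the linker:

```rust
// Constants from <mach-o/loader.h> (Mach-O section header `flags` word).
const S_REGULAR: u32 = 0x0000_0000; // ordinary section type
const S_ATTR_DEBUG: u32 = 0x0200_0000; // "this section contains debug info"

// Does the linker consider this section a debug section?
fn is_debug_section(flags: u32) -> bool {
    flags & S_ATTR_DEBUG != 0
}

// "Replace S_ATTR_DEBUG with S_REGULAR": clear the attribute bit so the
// linker keeps the section in the output instead of stripping it.
fn keep_in_executable(flags: u32) -> u32 {
    flags & !S_ATTR_DEBUG
}

fn main() {
    // e.g. the flags of a __DWARF,__debug_line section as emitted by default
    let dwarf_line_flags = S_ATTR_DEBUG;
    assert!(is_debug_section(dwarf_line_flags));
    assert_eq!(keep_in_executable(dwarf_line_flags), S_REGULAR);
    println!("debug attribute cleared; linker would keep this section");
}
```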
You can also turn off debuginfo completely. Personally, as someone who does printf debugging, I mainly need it to debug segfaults, which are really rare in Rust. Sometimes the call stack of a panic is useful as well, but if I need debuginfo I can just re-enable it.
https://github.com/est31/cargo-udeps/commit/e550d93c7a6d756e...
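For reference, turning debuginfo off like this is a one-line profile setting in `Cargo.toml` (`debug` is a documented Cargo profile key):

```toml
# Cargo.toml
[profile.dev]
debug = false   # skip generating debuginfo entirely; set back to true to re-enable
```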
I think it's because split-debuginfo was only recently stabilized (Nov 2020).
https://github.com/rust-lang/rust/pull/79570
Maybe the default will be switched in the future.
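In current toolchains this is exposed as a profile key in `Cargo.toml`; the values below are the documented options for `split-debuginfo` (a sketch, assuming a reasonably recent Cargo and macOS semantics):

```toml
# Cargo.toml
[profile.dev]
# "packed"   — collect debuginfo into a single file (a dSYM on macOS, the slow step)
# "unpacked" — leave debuginfo in the object files, which the debugger can follow
split-debuginfo = "unpacked"
```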
There is work on integrating Cranelift into rustc (https://github.com/bjorn3/rustc_codegen_cranelift) so that rustc can use Cranelift as an alternative codegen backend; Cranelift generates machine code directly and also offers a JIT mode.
Compile times in rustc have been steadily improving, as shown at https://arewefastyet.rs.
Not every release makes every workload faster, but over a long time horizon the effect is clear: Rust 1.34 was released in April 2019, and since then many crates have become 33-50% faster to compile, depending on the hardware and the compiler mode (clean/incremental, check/debug/release).
Interestingly, the speedup mentioned in OP won't show up in these charts, because it's a macOS-specific change and these benchmarks are recorded on Linux.
What is expected to be a gamechanger is the release of Cranelift in 2021 or 2022. It's an alternative codegen backend for debug builds that promises much faster compile times.
> Where does the Rust compiler spend most of its time? Is it at the checking stage?
rustc has a self-profiler that can be used to answer this question [0], as well as a mode that times each compiler pass [1].
There's no single reason the Rust compiler is slow; it depends quite heavily on the code being compiled. For some codebases, LLVM codegen takes up most of the time; in others (e.g., extremely generic-heavy codebases), it'll be the checking-related passes.
[0]: https://github.com/rust-lang/measureme/blob/master/summarize...
[1]: https://wiki.alopex.li/WhereRustcSpendsItsTime