-
> the default automated Rust formatting tool is very eager to add lots of lines by basically keeping only one word per line.
This is not my experience.
Lifetime and '&mut self' noise (and four-space indentation) did cause rustfmt to sometimes split function signatures across multiple lines, but overall, I think rustfmt did a good job.
C++: https://github.com/quick-lint/cpp-vs-rust/blob/f8d31341f5cac...
lexer::parsed_identifier lexer::parse_identifier(const char8* input,
-
> Rust borrow checking is quick, as shown by cargo check.
Looks like the monomorphization pass is not run during cargo check, according to https://github.com/rust-lang/rust/issues/49292
The reporter of the bug says:
> All of those happen somewhere inside librustc_mir, most of them being monomorphization. This corresponds to the translation item collection pass, which takes a nontrivial amount of time.
Which essentially means that cargo check is not doing all the checks that cargo build will do, so the comparison seems to be a bit off, at least for the time being. Consequently, this inconsistency can easily lead to the hypothesis that the LLVM backend is the bottleneck.
I guess the only reliable way to know where the biggest build-time bottlenecks are would be something similar to clang's -ftime-trace, but I couldn't find anything similar for Rust.
From what I understand, monomorphization and Rust macros are essentially C++ templates in a nutshell, and probably less than that, yet C++ compiles much faster. Given that clang and rustc share the same LLVM backend, this looks to me like an indication that the bottleneck is in the frontend rather than the backend. It could also be that the Rust frontend pipeline is not yet well optimized, so it puts more pressure on the LLVM backend than clang does, but it seems we can't really know that for sure.
-
A surprising source of slow compile times can be declarative macros in Rust [0].
I believe the core of the problem is that the compiler has to reparse the input tokens to pattern-match each macro invocation.
One egregious pattern is tt-munchers [1], where your macro is implemented recursively, requiring the compiler to reparse the remaining input on each call [2].
In one of my projects, someone decided to wrap a lot of core functions in simple macros (i.e. not tt-munchers) to simplify the signatures. Unlike most macros, which are used occasionally and have small inputs, these saw a lot of input. When I refactored the code, I suspect dropping the macros is the reason CI times were cut in half and a clean `cargo check` went from 3s to 0.5s.
[0]: https://nnethercote.github.io/2022/04/12/how-to-speed-up-the...
[1]: https://veykril.github.io/tlborm/decl-macros/patterns/tt-mun...
[2]: https://github.com/dtolnay/quote/blob/31c3be473d0457e29c4f47...
-
Well, given the languages that now tout interop with C++, and the emergence of a mainstream memory-safe language like Rust, Carbon seems like a retrograde step to me.
Val is probably the next language that fleeing C++ developers should get behind.
There's even science behind it: https://www.jot.fm/issues/issue_2022_02/article2.pdf
-
-
Would love to hear your opinion about V too, especially since one of its main selling points is fast build times:
https://vlang.io/#:~:text=Small%20and%20easy%20to%20build%20...
-