After reading all the documentation and blog posts about making our CI faster (adding cargo-chef, Docker layer caching, and GitLab runner caches on S3 buckets to save the `target` and `cargo` directories), I still get random rebuilds when running `cargo clippy --color always -- -D warnings`, which takes the build from 1 min to 5 min when triggered. I've also tried setting `CARGO_LOG: cargo::core::compiler::fingerprint=trace` to understand why cargo wants to update the crates.io index, without any luck.
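For reference, the fingerprint trace mentioned above can be enabled per job; a minimal GitLab CI sketch (job name and script are illustrative, the variable is the real cargo log filter):

```yaml
# Hypothetical .gitlab-ci.yml fragment: turn on cargo's fingerprint
# tracing so the job log explains why each crate is considered dirty.
clippy:
  variables:
    CARGO_LOG: "cargo::core::compiler::fingerprint=trace"
  script:
    # Look for "dirty:" lines in the output -- they name the input
    # (file mtime, env var, rustc version, ...) that invalidated the cache.
    - cargo clippy --color always -- -D warnings
```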
As for avoiding cargo rebuilding artifacts: make sure to use the same Docker image, the same target directory, and the same workspace directory on every build. If you're using kaniko, note that it also does not preserve file timestamps (#1894), causing rebuilds.
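A sketch of what "same image, same paths, every build" can look like in GitLab CI; the image digest and cache paths are placeholders, but keeping `CARGO_HOME` and the workspace at fixed paths under `$CI_PROJECT_DIR` is the point:

```yaml
# Hypothetical .gitlab-ci.yml fragment: pin the toolchain image by digest
# and keep cargo's home and the target dir at stable, cacheable paths so
# cargo's fingerprints survive from one run to the next.
build:
  image: rust@sha256:<digest>              # placeholder: pin, don't use :latest
  variables:
    CARGO_HOME: $CI_PROJECT_DIR/.cargo     # fixed path inside the workspace
  cache:
    paths:
      - .cargo/
      - target/
  script:
    - cargo build --locked
```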
IIRC the single most effective change for my CI setup was using mold as the linker. There are good resources on the web on how to set that up; this is what I did for my Docker builds.
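For anyone looking for a starting point, a sketch of the usual `.cargo/config.toml` setup (assumes an x86_64 Linux target and that clang and mold are installed in the build image; adjust the triple for yours):

```toml
# .cargo/config.toml -- route linking through clang, which hands off
# to mold via -fuse-ld instead of the default linker.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

Alternatively, `mold -run cargo build` wraps a whole build without touching the config file.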
I was able to use sccache with DigitalOcean; the S3 error I hit is this one: https://github.com/mozilla/sccache/issues/633. It seems they have a PR coming that switches to the official AWS S3 Rust SDK, which should fix it. But as you're saying, sccache only helps with compiles.
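For context, pointing sccache at an S3-compatible store is just environment configuration; a sketch, with the bucket name and endpoint as placeholders:

```
# Route rustc through sccache and point it at an S3-compatible bucket.
# SCCACHE_ENDPOINT is what lets it target non-AWS providers
# such as DigitalOcean Spaces.
export RUSTC_WRAPPER=sccache
export SCCACHE_BUCKET=my-ci-cache                     # placeholder
export SCCACHE_ENDPOINT=fra1.digitaloceanspaces.com   # placeholder
cargo build
sccache --show-stats   # inspect cache hits/misses after the build
```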
Something I'd like to point out: I was using Scaleway, but their S3 was too slow. I switched to DigitalOcean after benchmarking their S3, because it's cheap and very fast. That improved my build speed, but I was still using the GitLab registry, and starting a new CI job would fetch that massive builder image (3 GB compressed). I then switched to the DigitalOcean registry to be "closer" to my runners, which are DigitalOcean droplets, and it massively improved the time to start a new job when the Docker image wasn't locally cached yet. I did a traceroute from my droplet to GitLab; it was only a few hops, but the switch still helped a lot. So I guess you should make sure your Docker registry, your S3 bucket, and your runners all run on the same network.