| | test-s3-direct-upload | sccache |
|---|---|---|
| Mentions | 2 | 71 |
| Stars | 7 | 5,385 |
| Growth | - | 2.0% |
| Activity | 0.0 | 9.4 |
| Latest commit | about 2 years ago | 6 days ago |
| Language | Ruby | Rust |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
test-s3-direct-upload
-
Faster CI builds?
Something I'd like to point out: I was using Scaleway, but their S3 was too slow. I switched to DigitalOcean after benchmarking their S3, because it's cheap and super fast. That improved my build speed, but I was still using the GitLab registry, and starting a new CI job would fetch that massive builder image (3 GB compressed). I then switched to the DigitalOcean registry to be "closer" to my runners on DigitalOcean droplets, and it massively improved the time to start a new job when the Docker image wasn't locally cached yet. I did a traceroute from my droplet to GitLab; it was only a few hops away, but moving the registry still improved things a lot. So make sure your Docker registry, your S3 bucket, and your runners are all running on the same network.
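The kind of benchmark described above can be reproduced with a rough timing sketch, assuming the AWS CLI is configured with credentials for the provider under test; the bucket name and endpoint URL below are placeholders:

```shell
# Create a 100 MB file of random data to upload
dd if=/dev/urandom of=/tmp/blob bs=1M count=100

# Time an upload to an S3-compatible endpoint
# (--endpoint-url points the AWS CLI at a non-AWS provider)
time aws s3 cp /tmp/blob s3://my-test-bucket/blob \
    --endpoint-url https://s3.example-provider.com

# Check network distance between a runner and the registry/bucket
traceroute registry.example.com
```

Running the same upload against each candidate provider from the machine that will actually host the runners gives a fairer comparison than published bandwidth numbers.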
- S3 uploads speed benchmark: AWS GCP and Scaleway
sccache
-
Speeding up C++ build times
Use icecream or sccache. sccache supports distributed builds.
https://github.com/mozilla/sccache/blob/main/docs/Distribute...
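For distributed mode, each client points at a shared scheduler via sccache's client config file. A rough TOML sketch, with placeholder addresses and token (consult the linked docs for the exact fields and the scheduler/server setup):

```toml
# ~/.config/sccache/config (client side)
[dist]
# Address of the shared scheduler that farms out compile jobs
scheduler_url = "http://10.0.0.5:10600"

[dist.auth]
# Shared secret; must match the token configured on the scheduler
type = "token"
token = "my-placeholder-token"
```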
-
Mozilla sccache: cache with cloud storage
Worth noting that the first commit in the sccache git repository was in 2014 (https://github.com/mozilla/sccache/commit/115016e0a83b290dc2...). So I suppose that what "happened" happened way back.
- Welcome to Apache OpenDAL
-
Target folder is very huge and running out of storage on Mac
If you have lots of shared dependencies, maybe try sccache?
-
S3 Express Is All You Need
I'm going to set up sccache [0] to use it tomorrow. We use MSVC, so EFS is off the cards.
[0] https://github.com/mozilla/sccache/blob/main/docs/S3.md
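Per the linked docs, sccache's S3 backend is configured through environment variables; a minimal sketch, assuming a bucket named `my-build-cache` and a placeholder endpoint (needed only for non-AWS S3-compatible stores):

```shell
# Where cached objects live
export SCCACHE_BUCKET=my-build-cache
export SCCACHE_REGION=us-east-1
# Optional: point at an S3-compatible service instead of AWS
export SCCACHE_ENDPOINT=s3.example-provider.com
# Credentials come from the usual AWS variables
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# Rust builds: route rustc through the cache
export RUSTC_WRAPPER=sccache
```

For MSVC builds driven by CMake, setting `CMAKE_C_COMPILER_LAUNCHER`/`CMAKE_CXX_COMPILER_LAUNCHER` to `sccache` is one way to wrap `cl.exe` invocations.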
- sccache
-
Serde has started shipping precompiled binaries with no way to opt out
I think the primary benefit of pre-built procmacros will be for build servers which don't use a persistent cache (like sccache), since they have to compile all dependencies every time. But IMO improved support for persistent caches would be a better investment compared to adding support for pre-built procmacros.
-
Cache dependencies across crates
Check out https://github.com/mozilla/sccache
-
Distcc: A fast, free distributed C/C++ compiler
https://github.com/mozilla/sccache is another option which addresses the use cases of both icecream and ccache (and also supports Rust, and cloud storage of artifacts, if those are useful for you)
-
How to fix Rust Coding LARGE files????
That being said, a compilation cache - e.g. the de-facto standard for Rust, sccache (https://github.com/mozilla/sccache) - will help by storing some of the build artifacts centrally, still keyed per crate version + build profile (RUSTFLAGS) combination.
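The basic sccache workflow for a Rust project is short; a sketch assuming a local disk cache (the default when no cloud backend is configured):

```shell
# Install sccache (prebuilt binaries are also available from the releases page)
cargo install sccache

# Route all rustc invocations through the cache
export RUSTC_WRAPPER=sccache

# Build as usual; compiled artifacts are cached per crate
# version + build profile (RUSTFLAGS) combination
cargo build --release

# Inspect cache hit/miss counts afterwards
sccache --show-stats
```

Subsequent clean builds of the same dependency versions should then show cache hits instead of recompiling every crate.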
What are some alternatives?
ccache - ccache – a fast compiler cache
cargo-chef - A cargo-subcommand to speed up Rust Docker builds using Docker layer caching.
rust-cache - A GitHub Action that implements smart caching for rust/cargo projects
cache - Cache dependencies and build outputs in GitHub Actions
icecream - Distributed compiler with a central scheduler to share build load
mold - Mold: A Modern Linker 🦠
fluvio - Lean and mean distributed stream processing system written in rust and web assembly.
gdnative - Rust bindings for Godot 3
zapcc - zapcc is a caching C++ compiler based on clang, designed to perform faster compilations
criterion.rs - Statistics-driven benchmarking library for Rust
proc-macro-workshop - Learn to write Rust procedural macros [Rust Latam conference, Montevideo Uruguay, March 2019]
rustc_codegen_cranelift - Cranelift based backend for rustc