llama vs sccache

| | llama | sccache |
|---|---|---|
| Mentions | 10 | 71 |
| Stars | 578 | 5,365 |
| Growth | - | 1.6% |
| Activity | 4.0 | 9.4 |
| Last commit | about 2 months ago | 3 days ago |
| Language | Go | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama
- Llama – A CLI for outsourcing computation to AWS Lambda
- llama
- Distcc: A fast, free distributed C/C++ compiler
- Distributed Cloud Builds for Everyone
I was surprised there wasn't a more obvious link, but an open-source CLI implementation of the ideas in the post is here: https://github.com/nelhage/llama
- Llama: CLI for outsourcing computation to Amazon Lambda
- Outrun: Execute local command using processing power of another Linux machine
See also llama:
https://github.com/nelhage/llama
> Llama is a tool for running UNIX commands inside of Amazon Lambda. Its goal is to make it easy to outsource compute-heavy tasks to Lambda, with its enormous available parallelism, from your shell.
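As a rough illustration of the quote above and of the "Distributed Cloud Builds" use case, here is a sketch assuming llamacc, the gcc-compatible compiler wrapper the llama project ships; the job count is an arbitrary example, not a recommendation.

```sh
# Fan individual compile jobs out to AWS Lambda (distcc-style) by swapping
# the compiler for llama's wrapper and raising make's parallelism.
make -j64 CC=llamacc
```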
sccache
- Speeding up C++ build times
Use icecream or sccache. sccache supports distributed builds.
https://github.com/mozilla/sccache/blob/main/docs/Distribute...
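For reference, the non-distributed usage is just prefixing the compiler; a minimal sketch below (the wrapper pattern and the CMake launcher variables are standard, the project setup is assumed), with distributed mode layered on top via the scheduler/server configuration described in the linked docs.

```sh
# Wrap the C/C++ compilers with sccache so repeated compilations of the same
# translation units are served from the cache.
make CC="sccache gcc" CXX="sccache g++"

# Equivalent for CMake-based projects.
cmake -DCMAKE_C_COMPILER_LAUNCHER=sccache \
      -DCMAKE_CXX_COMPILER_LAUNCHER=sccache ..
```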
- Mozilla sccache: cache with cloud storage
Worth noting that the first commit in the sccache git repository was in 2014 (https://github.com/mozilla/sccache/commit/115016e0a83b290dc2...). So I suppose that what "happened" happened way back.
- Welcome to Apache OpenDAL
- Target file are very huge and running out of storage on mac.
If you have lots of shared dependencies, maybe try sccache?
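A hedged sketch of that suggestion: route rustc through sccache so crates shared between projects are compiled once and reused from a single cache, rather than duplicated in every target/ directory (the environment variables are sccache's documented ones; the path and size are placeholders).

```sh
# Make cargo invoke rustc through sccache.
export RUSTC_WRAPPER=sccache

# Optional: control where the cache lives and how large it may grow
# (placeholder path and size).
export SCCACHE_DIR="$HOME/.cache/sccache"
export SCCACHE_CACHE_SIZE="20G"

cargo build
```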
- S3 Express Is All You Need
I'm going to set up sccache [0] to use it tomorrow. We use MSVC, so EFS is off the cards.
[0] https://github.com/mozilla/sccache/blob/main/docs/S3.md
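The S3 backend from [0] is configured through environment variables; a sketch with placeholder values (bucket and region are made up, and anything specific to S3 Express is left to the linked doc):

```sh
# Share cached compilation objects between machines via an S3 bucket.
export SCCACHE_BUCKET=my-build-cache
export SCCACHE_REGION=us-east-1
export RUSTC_WRAPPER=sccache   # or wrap the C/C++ compiler as shown earlier

cargo build
```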
- sccache
- Serde has started shipping precompiled binaries with no way to opt out
I think the primary benefit of pre-built procmacros will be for build servers which don't use a persistent cache (like sccache), since they have to compile all dependencies every time. But IMO improved support for persistent caches would be a better investment compared to adding support for pre-built procmacros.
- Cache dependencies across crates
Check out https://github.com/mozilla/sccache
- Distcc: A fast, free distributed C/C++ compiler
https://github.com/mozilla/sccache is another option which addresses the use cases of both icecream and ccache (and also supports Rust, and cloud storage of artifacts, if those are useful for you)
- How to fix Rust Coding LARGE files????
That being said, a compilation cache such as sccache (https://github.com/mozilla/sccache), the de facto standard for Rust, will help by compiling and storing some of the build artifacts in a central cache, though still one entry per crate version + build profile (RUSTFLAGS) combination.
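A quick way to see that per-crate caching in action, assuming RUSTC_WRAPPER=sccache is already set as in the sketches above (--zero-stats and --show-stats are sccache's own flags; the rest is illustrative):

```sh
# Build cold, wipe the local target/, rebuild: the second build should show
# mostly cache hits for unchanged crate version + profile combinations.
sccache --zero-stats
cargo build
cargo clean
cargo build
sccache --show-stats
```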
What are some alternatives?
bazel-buildfarm - Bazel remote caching and execution service
ccache - ccache – a fast compiler cache
icecream - Distributed compiler with a central scheduler to share build load
cargo-chef - A cargo-subcommand to speed up Rust Docker builds using Docker layer caching.
OpenAFS - Fork of OpenAFS from git.openafs.org for visualization
rust-cache - A GitHub Action that implements smart caching for rust/cargo projects
remote-apis - An API for caching and execution of actions on a remote system.
cache - Cache dependencies and build outputs in GitHub Actions
cargo-mutants - :zombie: Inject bugs and see if your tests catch them!
recc
mold - Mold: A Modern Linker 🦠