| | napkin-math | sccache |
|---|---|---|
| Mentions | 13 | 71 |
| Stars | 3,093 | 5,425 |
| Growth | - | 2.7% |
| Activity | 6.3 | 9.4 |
| Last commit | 11 days ago | 6 days ago |
| Language | Rust | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
napkin-math
- capacity planning in system design interviews
- Napkin Math
-
S3 Express Is All You Need
Most production storage systems/databases built on top of S3 spend a significant amount of effort building an SSD/memory caching tier to make them performant enough for production (e.g. on top of RocksDB). But it's not easy to keep it in sync with blob...
Even with the cache, the cold-query latency floor is set by ~50 ms roundtrips to S3 [0]. To build a performant system, you have to tightly control roundtrips. S3 Express changes that equation dramatically: it approaches HDD random-read speeds (single-digit ms), so we can build production systems that don't need an SSD cache, just a zero-copy, deserialized in-memory cache.
Many systems will probably continue to have an SSD cache (~100 µs random reads), but now MVPs can be built without it, and cold query latency goes down dramatically. That's a big deal.
We're currently building a vector database on top of object storage, so this is extremely timely for us... I hope GCS ships this ASAP. [1]
[0]: https://github.com/sirupsen/napkin-math
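The roundtrip arithmetic in that comment can be sketched in a few lines (a rough model: the ~50 ms and single-digit-ms figures come from the comment itself, and the four-roundtrip query is a hypothetical example, not a measured workload):

```python
# Napkin math: cold-query latency is dominated by dependent storage roundtrips.
S3_RTT_MS = 50         # classic S3 roundtrip, per the comment above
S3_EXPRESS_RTT_MS = 5  # single-digit ms, comparable to HDD random reads

def cold_query_latency_ms(roundtrips, rtt_ms):
    """When each read depends on the previous one, latency is ~roundtrips * rtt."""
    return roundtrips * rtt_ms

# Hypothetical query needing 4 dependent reads (index root -> branch -> leaf -> data):
print(cold_query_latency_ms(4, S3_RTT_MS))          # 200 ms on classic S3
print(cold_query_latency_ms(4, S3_EXPRESS_RTT_MS))  # 20 ms on S3 Express
```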
-
Random Read or Sequential Read
Trying to estimate performance using some napkin math based on this: https://github.com/sirupsen/napkin-math
-
A CVE has been issued for hyper. Denial of Service possible
So, napkin-maths time: a typical cross-world, bog-standard single TCP channel runs at ~25 MiB/s. A single HEADERS+RST pair is likely < 128 bytes (40 for the HEADERS plus whatever payload, and 32 for the RST). So 8 pairs per KiB, 8K pairs per MiB, 200K pairs per 25 MiB...
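That byte arithmetic checks out; as a quick sketch using the comment's own ~25 MiB/s and <128-byte figures:

```python
# Napkin math from the comment: how many HEADERS+RST_STREAM pairs fit
# through one TCP channel per second at the stated bandwidth.
PAIR_BYTES = 128          # < 40 HEADERS + payload + 32 RST, rounded up to 128
BANDWIDTH = 25 * 1024**2  # ~25 MiB/s for a single cross-world TCP channel

pairs_per_second = BANDWIDTH // PAIR_BYTES
print(pairs_per_second)   # 204800, i.e. ~200K pairs per 25 MiB
```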
- Index Merges vs Composite Indexes in Postgres and MySQL
-
I/O is no longer the bottleneck
Yes, sequential I/O bandwidth is closing the gap to memory. [1] The I/O pattern to watch out for, and the biggest reason why e.g. databases do careful caching to memory, is that _random_ I/O is still dreadfully slow. I/O bandwidth is brilliant, but latency is still disappointing compared to memory.
[1]: https://github.com/sirupsen/napkin-math
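The random-vs-sequential point can be put in napkin-math terms. A sketch with order-of-magnitude numbers only (these are typical ballpark figures, not measurements from the linked repo, and they vary by hardware):

```python
# Order-of-magnitude comparison: sequential bandwidth vs random-read latency.
seq_read_nvme_gbps = 4      # sequential NVMe read bandwidth, GB/s (ballpark)
seq_read_memory_gbps = 40   # sequential memory read bandwidth, GB/s (ballpark)
rand_read_ssd_ns = 100_000  # one random SSD read, ~100 microseconds
rand_read_memory_ns = 100   # one random memory read, ~100 nanoseconds

print(seq_read_memory_gbps // seq_read_nvme_gbps)  # bandwidth gap: ~10x
print(rand_read_ssd_ns // rand_read_memory_ns)     # latency gap: ~1000x
```

The asymmetry is the point: disk bandwidth trails memory by roughly one order of magnitude, but random-read latency trails by roughly three, which is why databases cache so carefully.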
- Monthly cost to host server for 1M DAUs?
- Napkin-math: Techniques and numbers for estimating a system's performance
-
System Design prep?
https://github.com/sirupsen/napkin-math (memorize these)
sccache
-
Speeding up C++ build times
Use icecream or sccache. sccache supports distributed builds.
https://github.com/mozilla/sccache/blob/main/docs/Distribute...
-
Mozilla sccache: cache with cloud storage
Worth noting that the first commit in sccache git repository was in 2014 (https://github.com/mozilla/sccache/commit/115016e0a83b290dc2...). So I suppose that what "happened" happened waay back.
- Welcome to Apache OpenDAL
-
Target folder is very huge and I'm running out of storage on Mac.
If you have lots of shared dependencies, maybe try sccache?
-
S3 Express Is All You Need
I'm going to set up sccache [0] to use it tomorrow. We use MSVC, so EFS is off the cards.
[0] https://github.com/mozilla/sccache/blob/main/docs/S3.md
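A minimal sketch of that setup, using the environment variables described in the linked S3.md (the bucket and region names below are placeholders):

```shell
# Point sccache at an S3 bucket and route rustc through it.
export SCCACHE_BUCKET=my-build-cache   # placeholder bucket name
export SCCACHE_REGION=us-east-1        # placeholder region
export RUSTC_WRAPPER=sccache           # cargo invokes rustc via sccache
sccache --show-stats                   # verify the cache backend is reachable
```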
- sccache
-
Serde has started shipping precompiled binaries with no way to opt out
I think the primary benefit of pre-built proc macros will be for build servers that don't use a persistent cache (like sccache), since they have to compile all dependencies every time. But IMO improved support for persistent caches would be a better investment than adding support for pre-built proc macros.
-
Cache dependencies across crates
Check out https://github.com/mozilla/sccache
-
Distcc: A fast, free distributed C/C++ compiler
https://github.com/mozilla/sccache is another option which addresses the use cases of both icecream and ccache (and also supports Rust, and cloud storage of artifacts, if those are useful for you)
-
How to fix Rust Coding LARGE files????
That being said, a compilation cache, e.g. the de-facto standard for Rust, sccache (https://github.com/mozilla/sccache), will help by compiling and storing some of the build artifacts centrally, though still keyed per crate version + build profile (RUSTFLAGS) combination.
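A minimal local setup along those lines (a sketch; the per-crate-version and per-RUSTFLAGS keying described above happens inside sccache itself):

```shell
# Install sccache and wrap rustc with it for subsequent cargo builds.
cargo install sccache
export RUSTC_WRAPPER=sccache
cargo build            # compilations now go through the local sccache cache
sccache --show-stats   # inspect cache hits/misses after a build
```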
What are some alternatives?
huniq - Filter out duplicates on the command line. Replacement for `sort | uniq` optimized for speed (10x faster) when sorting is not needed.
ccache - ccache – a fast compiler cache
advisory-database - Security vulnerability database inclusive of CVEs and GitHub originated security advisories from the world of open source software.
cargo-chef - A cargo-subcommand to speed up Rust Docker builds using Docker layer caching.
adix - An Adaptive Index Library for Nim
rust-cache - A GitHub Action that implements smart caching for rust/cargo projects
h2 - HTTP 2.0 client & server implementation for Rust.
cache - Cache dependencies and build outputs in GitHub Actions
RAMCloud - **No Longer Maintained** Official RAMCloud repo
icecream - Distributed compiler with a central scheduler to share build load
simdjson - Parsing gigabytes of JSON per second : used by Facebook/Meta Velox, the Node.js runtime, ClickHouse, WatermelonDB, Apache Doris, Milvus, StarRocks
mold - Mold: A Modern Linker 🦠