napkin-math vs adix

| | napkin-math | adix |
| --- | --- | --- |
| Mentions | 13 | 4 |
| Stars | 3,031 | 38 |
| Growth | - | - |
| Activity | 6.3 | 7.2 |
| Latest commit | 21 days ago | 9 days ago |
| Language | Rust | Nim |
| License | MIT License | ISC License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
napkin-math
- capacity planning in system design interviews
- Napkin Math
- S3 Express Is All You Need
Most production storage systems/databases built on top of S3 spend a significant amount of effort building an SSD/memory caching tier to make them performant enough for production (e.g. on top of RocksDB). But it's not easy to keep it in sync with blob...
Even with the cache, cold-query latency to S3 is bounded below by ~50 ms roundtrips [0]. To build a performant system, you have to tightly control roundtrips. S3 Express changes that equation dramatically: it approaches HDD random-read speeds (single-digit ms), so we can build production systems that don't need an SSD cache—just the zero-copy, deserialized in-memory cache.
Many systems will probably continue to have an SSD cache (~100 us random reads), but now MVPs can be built without it, and cold-query latency drops dramatically. That's a big deal.
We're currently building a vector database on top of object storage, so this is extremely timely for us... I hope GCS ships this ASAP. [1]
[0]: https://github.com/sirupsen/napkin-math
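The roundtrip argument above can be put in numbers. A minimal sketch, assuming a cold query that needs a few dependent object-store fetches (the count of four roundtrips is an illustrative assumption, not from the comment):

```python
# Napkin math: cold-query latency dominated by dependent roundtrips.
# Latency figures are the comment's: ~50 ms for standard S3,
# single-digit ms (say ~5 ms) for S3 Express.
roundtrips = 4                      # e.g. manifest -> index -> two data blocks (assumed)
s3_standard_ms = 50 * roundtrips    # cold query on standard S3
s3_express_ms = 5 * roundtrips      # same query on S3 Express

print(f"standard S3: {s3_standard_ms} ms, S3 Express: {s3_express_ms} ms")
```

The 10x gap is why tightly controlling the number of dependent roundtrips matters more than raw bandwidth for this workload.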
- Random Read or Sequential Read
Trying to estimate performance using some napkin math based on this: https://github.com/sirupsen/napkin-math
- A CVE has been issued for hyper. Denial of Service possible
So napkin maths time. Typical cross-world bog-standard network speeds for a single TCP channel of ~25MiBps. A single HEADERS+RST pair is likely < 128 bytes (40 for the HEADERS + whatever payload, and 32 for the RST). So 8 pairs per K, 8K pairs per MiB, 200K pairs per 25MiB...
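The comment's arithmetic checks out; a quick sketch using its own assumptions (~25 MiB/s per TCP channel, ≤128 bytes per HEADERS+RST pair):

```python
# Back-of-envelope: how many HEADERS+RST pairs fit through one TCP stream?
# Assumptions from the comment: ~25 MiB/s per channel, <=128 bytes per pair
# (~40-byte HEADERS frame plus payload, ~32-byte RST_STREAM, rounded up).
bandwidth_bytes_per_s = 25 * 1024 * 1024
pair_size_bytes = 128

pairs_per_s = bandwidth_bytes_per_s // pair_size_bytes
print(f"~{pairs_per_s:,} request-reset pairs per second per connection")
```

That is roughly 200K pairs per second per connection, matching the "200K pairs per 25 MiB" figure above.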
- Index Merges vs Composite Indexes in Postgres and MySQL
- I/O is no longer the bottleneck
Yes, sequential I/O bandwidth is closing the gap to memory. [1] The I/O pattern to watch out for, and the biggest reason why e.g. databases do careful caching to memory, is that _random_ I/O is still dreadfully slow. I/O bandwidth is brilliant, but latency is still disappointing compared to memory.
[1]: https://github.com/sirupsen/napkin-math
- Monthly cost to host server for 1M DAUs?
- Napkin-math: Techniques and numbers for estimating system's performance
- System Design prep?
https://github.com/sirupsen/napkin-math (memorize these)
adix
- I/O is no longer the bottleneck
Note: just concatenating the Bibles keeps your hash map artificially small... which matters because, as you correctly note, the big deal is whether you can fit the histogram in the L2 cache, as noted elsewhere. This really matters if you go parallel, where N CPUs' L2 caches can speed things up a lot -- until your histograms blow out CPU-private L2 cache sizes. https://github.com/c-blake/adix/blob/master/tests/wf.nim (or a port to your favorite lang) might make it easy to play with these ideas.
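The point about concatenation is easy to demonstrate: repeating a corpus grows the token count but not the number of distinct keys, and it is the distinct keys that determine whether the histogram fits in L2 (typically 256 KiB–2 MiB per core). A minimal sketch, not a port of `wf.nim`:

```python
from collections import Counter

# Repeat a tiny "bible" 100,000 times: the input grows huge,
# but the histogram's key set — hence its memory footprint — does not.
text = ("in the beginning was the word " * 100_000).split()
hist = Counter(text)

print(f"{len(text):,} tokens, but only {len(hist)} distinct words")
```

A benchmark built this way measures a cache-resident best case; real vocabularies with hundreds of thousands of distinct words behave very differently once the table spills out of L2.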
- A Cost Model for Nim
The expected worst-case probe depth is notably logarithmic - not unlike a B-Tree.
When these expectations are exceeded you can at least detect a DoS attack. If you wait until such are seen, you can activate a "more random" mitigation on the fly at about the same cost as "the next resize/re-org/whatnot".
All you need to do is instrument your search to track the depth. There is some example such strategy in Nim at https://github.com/c-blake/adix for simple Robin-Hood Linear Probed tables.
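The instrumentation idea above can be sketched in a few lines: a linear-probed table that records its deepest search and flags when that depth far exceeds the logarithmic bound expected for a well-behaved hash. This is an illustration of the strategy, not adix's actual implementation; the bound and class names are assumptions:

```python
import math

class ProbedTable:
    """Linear-probing table instrumented with probe-depth tracking."""

    def __init__(self, cap=64):
        self.slots = [None] * cap   # each slot holds (key, value) or None
        self.max_depth = 0          # deepest probe sequence seen so far

    def _expected_depth(self):
        # Rough heuristic: expect O(log n) max probe length at moderate load.
        return 2 * max(1, int(math.log2(len(self.slots))))

    def put(self, key, value):
        i = hash(key) % len(self.slots)
        depth = 0
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
            depth += 1
        self.slots[i] = (key, value)
        self.max_depth = max(self.max_depth, depth)

    def under_attack(self):
        # Depth blowing past expectations suggests collision flooding;
        # time to switch to a "more random" (keyed) hash on the next re-org.
        return self.max_depth > self._expected_depth()

# Benign keys spread out; probe depths stay near zero.
t = ProbedTable()
for k in range(20):
    t.put(k, k)
print("benign: depth", t.max_depth, "suspicious:", t.under_attack())

# Adversarial keys all landing in one slot drive the depth past the bound.
attacked = ProbedTable()
for k in range(20):
    attacked.put(k * 64, k)   # k*64 % 64 == 0: every key collides
print("flooded: depth", attacked.max_depth, "suspicious:", attacked.under_attack())
```

The detection costs one counter update per lookup, and the mitigation can be deferred to the next resize/re-org, as the comment suggests.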
- Performance comparison: counting words in Python, Go, C++, C, Awk, Forth, Rust
Knuth-McIlroy comes up a lot. Previous discussion [1]. For this example I can make a Nim program run almost exactly the same speed as `wc -w`, yet the optimized C program runs 1.2x faster not 3.34x slower - a whopping 4x discrepancy - much bigger than many of the ratios in the table. So, people should be very cautious about conclusions from any of this.
[1] https://news.ycombinator.com/item?id=24817594
[2] https://github.com/c-blake/adix/blob/master/tests/wf.nim
What are some alternatives?
huniq - Filter out duplicates on the command line. Replacement for `sort | uniq` optimized for speed (10x faster) when sorting is not needed.
countwords - Playing with counting word frequencies (and performance) in various languages.
advisory-database - Security vulnerability database inclusive of CVEs and GitHub originated security advisories from the world of open source software.
RAMCloud - **No Longer Maintained** Official RAMCloud repo
h2 - HTTP 2.0 client & server implementation for Rust.
wordcount - Counting words in different programming languages.
KindleClippingsTranslator - A vocabulary-word reader (from the Polish "Czytacz slowek").
simdjson - Parsing gigabytes of JSON per second : used by Facebook/Meta Velox, the Node.js runtime, ClickHouse, WatermelonDB, Apache Doris, Milvus, StarRocks
tiny_sqlite - A thin SQLite wrapper for Nim
Killed by Google - Part guillotine, part graveyard for Google's doomed apps, services, and hardware.
word_frequency_nim - The word frequency program, written in simple nim.