|  | napkin-math | simdjson |
|---|---|---|
| Mentions | 13 | 65 |
| Stars | 3,031 | 18,496 |
| Growth | - | 1.1% |
| Activity | 6.3 | 9.2 |
| Latest commit | 20 days ago | 3 days ago |
| Language | Rust | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
napkin-math
- capacity planning in system design interviews
- Napkin Math
- S3 Express Is All You Need
Most production storage systems/databases built on top of S3 spend a significant amount of effort building an SSD/memory caching tier to make them performant enough for production (e.g. on top of RocksDB). But it's not easy to keep it in sync with blob...
Even with the cache, cold query latency against S3 is bounded below by ~50 ms roundtrips [0]. To build a performant system, you have to tightly control roundtrips. S3 Express changes that equation dramatically: it approaches HDD random-read speeds (single-digit ms), so we can build production systems that don't need an SSD cache, just the zero-copy, deserialized in-memory cache.
Many systems will probably continue to have an SSD cache (~100 µs random reads), but now MVPs can be built without it, and cold query latency goes down dramatically. That's a big deal.
We're currently building a vector database on top of object storage, so this is extremely timely for us... I hope GCS ships this ASAP. [1]
[0]: https://github.com/sirupsen/napkin-math
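As a rough illustration of why roundtrip counts dominate here, a napkin-style sketch (the latency figures are the orders of magnitude quoted above, and the four-roundtrip query shape is an assumption made only for this example):

```cpp
// Napkin estimate of cold-query latency when a query needs a few dependent
// (serial) roundtrips to storage. The latencies below are rough, illustrative
// figures: ~50 ms classic S3, single-digit ms S3 Express, ~100 us local SSD.
#include <cstdio>

int main() {
    const double s3_standard_ms = 50.0;  // classic S3 roundtrip
    const double s3_express_ms  = 5.0;   // S3 Express, roughly HDD-class random read
    const double local_ssd_ms   = 0.1;   // local NVMe/SSD random read
    const int    roundtrips     = 4;     // assumed: manifest + index + two data blocks

    std::printf("classic S3: %7.1f ms\n", roundtrips * s3_standard_ms);
    std::printf("S3 Express: %7.1f ms\n", roundtrips * s3_express_ms);
    std::printf("local SSD:  %7.1f ms\n", roundtrips * local_ssd_ms);
    return 0;
}
```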
- Random Read or Sequential Read
Trying to estimate performance using some napkin math based on this: https://github.com/sirupsen/napkin-math
- A CVE has been issued for hyper. Denial of Service possible
So, napkin maths time. Typical cross-world, bog-standard network speed for a single TCP channel is ~25 MiB/s. A single HEADERS+RST pair is likely < 128 bytes (40 for the HEADERS plus whatever payload, and 32 for the RST). So that's 8 pairs per KiB, ~8K pairs per MiB, ~200K pairs per 25 MiB...
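The arithmetic in that comment works out as below (a minimal sketch; the ~25 MiB/s channel and the 128-byte upper bound per pair are the comment's own assumptions):

```cpp
// How many HEADERS+RST pairs fit through a single ~25 MiB/s TCP channel per
// second, assuming each pair costs at most 128 bytes on the wire.
#include <cstdio>

int main() {
    const double bandwidth_bytes_per_s = 25.0 * 1024 * 1024;  // ~25 MiB/s
    const double bytes_per_pair        = 128.0;               // HEADERS + RST upper bound
    const double pairs_per_s           = bandwidth_bytes_per_s / bytes_per_pair;

    std::printf("~%.0fK HEADERS+RST pairs per second per connection\n",
                pairs_per_s / 1000.0);  // prints ~205K, i.e. the ~200K figure above
    return 0;
}
```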
- Index Merges vs Composite Indexes in Postgres and MySQL
- I/O is no longer the bottleneck
Yes, sequential I/O bandwidth is closing the gap to memory. [1] The I/O pattern to watch out for, and the biggest reason why e.g. databases do careful caching to memory, is that _random_ I/O is still dreadfully slow. I/O bandwidth is brilliant, but latency is still disappointing compared to memory.
[1]: https://github.com/sirupsen/napkin-math
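To put rough numbers on that gap, a hedged sketch (the ~4 GB/s sequential figure and ~100 µs random-read latency are assumed orders of magnitude, not measurements of any particular device):

```cpp
// Compare sequential SSD bandwidth against the effective bandwidth of fully
// dependent (one-at-a-time) random reads, using assumed representative numbers.
#include <cstdio>

int main() {
    const double seq_bandwidth_bps = 4.0e9;       // assumed ~4 GB/s sequential read
    const double random_latency_s  = 100e-6;      // assumed ~100 us per random read
    const double block_bytes       = 8.0 * 1024;  // 8 KiB per random read

    const double random_bandwidth_bps = block_bytes / random_latency_s;

    std::printf("sequential:             %.2f GB/s\n", seq_bandwidth_bps / 1e9);
    std::printf("serial 8 KiB random IO: %.2f GB/s\n", random_bandwidth_bps / 1e9);
    return 0;
}
```

With these figures, pointer-chasing random reads deliver well under 0.1 GB/s, which is the gap that caching is meant to hide.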
- Monthly cost to host server for 1M DAUs?
- Napkin-math: Techniques and numbers for estimating system's performance
- System Design prep?
https://github.com/sirupsen/napkin-math (memorize these)
simdjson
- Tips on adding JSON output to your command line utility. (2021)
It's also supported by simdjson [0] (which has a lot of language bindings [1]):
> Multithreaded processing of gigantic Newline-Delimited JSON (ndjson) and related formats at 3.5 GB/s
[0] https://simdjson.org/
[1] https://github.com/simdjson/simdjson?tab=readme-ov-file#bind...
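For reference, the newline-delimited path that quote refers to is exposed through simdjson's parse_many / document_stream interface. A minimal sketch using the exception-throwing DOM API (the inline three-line payload is made up, and whether processing is multithreaded depends on how simdjson is built):

```cpp
#include <iostream>
#include "simdjson.h"

int main() {
    using namespace simdjson;

    // A tiny ndjson payload: one JSON document per line.
    auto ndjson = R"({"name": "a", "value": 1}
{"name": "b", "value": 2}
{"name": "c", "value": 3}
)"_padded;

    dom::parser parser;
    // parse_many walks a stream of whitespace/newline-separated documents.
    dom::document_stream docs = parser.parse_many(ndjson);
    for (dom::element doc : docs) {
        std::cout << doc["name"] << " -> " << doc["value"] << "\n";
    }
    return 0;
}
```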
- 1BRC Merykitty's Magic SWAR: 8 Lines of Code Explained in 3k Words
- Training great LLMs from ground zero in the wilderness as a startup
- simdjson: Parsing Gigabytes of JSON per Second
- Use any web browser as GUI, with Zig in the back end and HTML5 in the front end
String parsing is negligible compared to the speed of the DOM, which is glacially slow: https://news.ycombinator.com/item?id=38835920
Come on, people, make an effort to learn how insanely fast computers are, and how insanely inefficient our software is.
String parsing can be done at gigabytes per second: https://github.com/simdjson/simdjson. If you think that is the slowest operation in the browser, please find some resources that explain what is actually happening in the browser.
- Cray-1 performance vs. modern CPUs
Thanks for all the detailed information! That answers a bunch of my questions and the implementation of strlen is nice.
The instruction I was thinking of is pshufb. An example ‘weird’ use can be found for detecting white space in simdjson: https://github.com/simdjson/simdjson/blob/24b44309fb52c3e2c5...
This works as follows:
1. Observe that each ASCII whitespace character ends in a different low nibble.
2. Build a 16-byte vector in which byte i holds the whitespace character whose low nibble is i, or, where no whitespace character ends in i, some filler byte whose low nibble differs from i (e.g. the first element is space = 0x20; the next could be 0xff, but not 0xf1, since that ends in the same nibble as its index).
3. For each block in which you want to find whitespace, compute pcmpeqb(pshufb(whitespace, input), input). The rules of pshufb mean that (a) non-ASCII characters (i.e. bit 7 set) map to 0, so they compare false, and (b) every other character is replaced by the element of the whitespace table selected by its last nibble, so it compares equal only if it is that whitespace character.
I’m not sure how easy it would be to do such tricks with vgather.vv. In particular, the length of the input doesn’t matter (could be longer) but the length of white space must be 16 bytes. I’m not sure how the whole vlen stuff interacts with tricks like this where you (a) require certain fixed lengths and (b) may have different lengths for tables and input vectors. (and indeed there might just be better ways, eg you could imagine an operation with a 256-bit register where you permute some vector of bytes by sign-extending the nth bit of the 256-bit register into the result where the input byte is n).
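For concreteness, here is a sketch of that nibble-table trick with SSE intrinsics. The table layout and the 0x80 filler byte are choices made for this sketch rather than simdjson's exact constants; build with SSSE3 enabled (e.g. -mssse3):

```cpp
// Find ASCII whitespace (space, \t, \n, \r) in one 16-byte block using
// pcmpeqb(pshufb(table, input), input), as described above.
#include <immintrin.h>
#include <cstdint>
#include <cstdio>

static uint16_t whitespace_mask(const char *block) {
    // table[i] = the whitespace character whose low nibble is i, or 0x80
    // (high bit set, so it can never compare equal to an ASCII byte) otherwise.
    const __m128i table = _mm_setr_epi8(
        ' ',        (char)0x80, (char)0x80, (char)0x80,   // nibble 0 -> ' '
        (char)0x80, (char)0x80, (char)0x80, (char)0x80,
        (char)0x80, '\t',       '\n',       (char)0x80,   // nibble 9 -> '\t', A -> '\n'
        (char)0x80, '\r',       (char)0x80, (char)0x80);  // nibble D -> '\r'
    const __m128i input = _mm_loadu_si128((const __m128i *)block);
    // pshufb: bytes with the high bit set map to 0; others select table[b & 0x0F].
    const __m128i candidate = _mm_shuffle_epi8(table, input);
    // Equal only when the input byte is exactly the whitespace char for its nibble.
    const __m128i eq = _mm_cmpeq_epi8(candidate, input);
    return (uint16_t)_mm_movemask_epi8(eq);  // bit i set => block[i] is whitespace
}

int main() {
    const char block[16] = "a b\tc\nd\re fghij";  // 15 chars + trailing NUL
    const uint16_t mask = whitespace_mask(block);
    for (int i = 0; i < 16; i++)
        if (mask & (1u << i))
            std::printf("whitespace at offset %d\n", i);
    return 0;
}
```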
- Codebases to read
Additionally, if you like low level stuff, check out libfmt (https://github.com/fmtlib/fmt) - not a big project, not difficult to understand. Or something like simdjson (https://github.com/simdjson/simdjson).
- Simdjson: Parsing Gigabytes of JSON per Second
- Building a high performance JSON parser
Everything you said is totally reasonable. I'm a big fan of napkin math and theoretical upper bounds on performance.
simdjson (https://github.com/simdjson/simdjson) claims to fully parse JSON on the order of 3 GB/sec. Which is faster than OP's Go whitespace parsing! These tests are running on different hardware so it's not apples-to-apples.
The phrase "cannot go faster than this" is just begging for a "well ackshully". Which I hate to do. But the fact that there is an existence proof of Problem A running faster in C++ SIMD than OP's Problem B in scalar Go is quite interesting and worth calling out imho. But I admit it doesn't change the rest of the post.
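For the napkin version of that comparison, a small sketch (the 3 GB/s rate is the simdjson claim quoted above; the ~25 GB/s single-core memory-read figure is an assumption used only as a loose upper bound):

```cpp
// How long should 1 GiB of JSON take at a simdjson-like 3 GB/s, versus the
// time to merely stream the same bytes from memory at an assumed ~25 GB/s?
#include <cstdio>

int main() {
    const double input_bytes = 1024.0 * 1024 * 1024;  // 1 GiB of JSON
    const double parse_rate  = 3.0e9;                  // ~3 GB/s, quoted above
    const double memory_rate = 25.0e9;                 // assumed streaming read rate

    std::printf("parse at 3 GB/s:     %.2f s\n", input_bytes / parse_rate);
    std::printf("just read the bytes: %.2f s\n", input_bytes / memory_rate);
    return 0;
}
```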
- New package: lspce - a simple LSP Client for Emacs
I have the same question as /u/JDRiverRun: how do you deal with JSON, i.e. do you parse JSON on the Rust side or on the Emacs side? I see that you are requiring json.el in your lspce.el, but I haven't looked through the entire file carefully. If you parse on the Rust side, do you use simdjson (there are at least two Rust bindings to it)? If yes, what are your impressions and experiences compared to a more "standard" JSON library?
What are some alternatives?
huniq - Filter out duplicates on the command line. Replacement for `sort | uniq` optimized for speed (10x faster) when sorting is not needed.
RapidJSON - A fast JSON parser/generator for C++ with both SAX/DOM style API
advisory-database - Security vulnerability database inclusive of CVEs and GitHub originated security advisories from the world of open source software.
jsoniter - jsoniter (json-iterator) is fast and flexible JSON parser available in Java and Go
adix - An Adaptive Index Library for Nim
json - JSON for Modern C++
h2 - HTTP 2.0 client & server implementation for Rust.
json-schema-validator - JSON schema validator for JSON for Modern C++
RAMCloud - **No Longer Maintained** Official RAMCloud repo
JsonCpp - A C++ library for interacting with JSON.
Killed by Google - Part guillotine, part graveyard for Google's doomed apps, services, and hardware.
json - A C++11 library for parsing and serializing JSON to and from a DOM container in memory.