t-digest
Folly
| | t-digest | Folly |
|---|---|---|
| Mentions | 9 | 88 |
| Stars | 1,914 | 26,926 |
| Growth | - | 1.0% |
| Activity | 3.3 | 9.8 |
| Latest commit | 3 months ago | 7 days ago |
| Language | Java | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
t-digest
-
Ask HN: What are some 'cool' but obscure data structures you know about?
I am enamored by data structures in the sketch/summary/probabilistic family: t-digest[1], q-digest[2], count-min sketch[3], matrix-sketch[4], graph-sketch[5][6], Misra-Gries sketch[7], top-k/spacesaving sketch[8], &c.
What I like about them is that they give me a set of engineering tradeoffs that I typically don't have access to: accuracy-speed[9] or accuracy-space. There have been too many times that I've had to say, "I wish I could do this, but it would take too much time/space to compute." Most of these problems still work even if the accuracy is not 100%. Furthermore, many (if not all) of these can tune accuracy via parameter adjustment. They also tend to have favorable algebraic properties, i.e. they form monoids or semigroups under merge operations. In short, they have properties that let me solve problems I couldn't solve before.
I hope they are as useful or intriguing to you as they are to me.
1. https://github.com/tdunning/t-digest
2. https://pdsa.readthedocs.io/en/latest/rank/qdigest.html
3. https://florian.github.io/count-min-sketch/
4. https://www.cs.yale.edu/homes/el327/papers/simpleMatrixSketc...
5. https://www.juanlopes.net/poly18/poly18-juan-lopes.pdf
6. https://courses.engr.illinois.edu/cs498abd/fa2020/slides/20-...
7. https://people.csail.mit.edu/rrw/6.045-2017/encalgs-mg.pdf
8. https://www.sciencedirect.com/science/article/abs/pii/S00200...
9. It may better be described as error-speed and error-space, but I've avoided the term error because the term for programming audiences typically evokes the idea of logic errors and what I mean is statistical error.
On sketches: there is a genre of structure for estimating histogram-like statistics (median, 99th percentile, etc.) in fixed space, which I really like. For example:
t-digest https://github.com/tdunning/t-digest
-
Monarch: Google’s Planet-Scale In-Memory Time Series Database
Ah, I misunderstood what you meant. If you are reporting static buckets, I get how that is better than what folks typically do, but how do you know the buckets a priori? Others back their histograms with things like https://github.com/tdunning/t-digest. It is pretty powerful, as the buckets are dynamic based on the data and histograms can be added together.
-
How percentile approximation works (and why it's more useful than averages)
There are some newer data structures that take this to the next level, such as t-digest[1], which remains extremely accurate even when determining percentiles at the very tail end (like 99.999%).
[1]: https://arxiv.org/pdf/1902.04023.pdf / https://github.com/tdunning/t-digest
-
Show HN: Fast Rolling Quantiles for Python
This is pretty cool. The title would be a bit more descriptive if it were “Fast Rolling Quantile Filters for Python”, since the high-pass/low-pass filter functionality seems to be the focus.
The README mentions it uses binary heaps - if you’re willing to accept some (bounded) approximation, then it should be possible to reduce memory usage and somewhat reduce runtime by using a sketching data structure like Dunning’s t-digest: https://github.com/tdunning/t-digest/blob/main/docs/t-digest....
There is an open source Python implementation, although I haven’t used it and can’t vouch for its quality: https://github.com/CamDavidsonPilon/tdigest
Folly
-
A lock-free ring-buffer with contiguous reservations (2019)
My interpretation is that with release semantics for the store, the 2nd read (load) in Thread 1 is actually allowed to be reordered before the release store to the hazard pointer. But they are not very explicit about it.
> So if thread 2 removing the pointer happens first, thread 1 will see a different value on its second read and not attempt to dereference it.
Thread 1 will see thread 2's remove even with release semantics for that store -- the store has a data dependency on the first load; they cannot be reordered.
> If thread 1 writes to its hazard pointer first, the garbage collector is guaranteed to see that value and not delete the node.
Yeah, this must be it. Thread 1 fails to notice the GC happened while it was writing its HP because its second load actually happened before the HP store.
Folly's hazard pointer implementation uses a release store to update the hazard pointer (here: reset_protection()), but uses some sort of SeqCst barrier between the store and the 2nd load (with acquire semantics): https://github.com/facebook/folly/blob/main/folly/synchroniz...
To set a HP on Linux, Folly just does a relaxed load of the src pointer, release store of the HP, compiler-only barrier, and acquire load. (This prevents the compiler from reordering the 2nd load before the store, right? But to my understanding does not prevent a hypothetical CPU reordering of the 2nd load before the store, which seems potentially problematic!)
Then on the GC/reclaim side of things, after protected object pointers are stored, it does a more expensive barrier[0] before acquire-loading the HPs.
I'll admit, I am not confident I understand why this works. I mean, even on x86, loads can be reordered before earlier program-order stores. So it seems like the 2nd check on the protection side could be ineffective. (The non-Linux portable version just uses an atomic_thread_fence SeqCst on both sides, which seems more obviously correct.) And if they don't need the 2nd load on Linux, I'm unclear on why they do it.
[0]: https://github.com/facebook/folly/blob/main/folly/synchroniz...
(This uses either mprotect to force a TLB flush in process-relevant CPUs, or the newer Linux membarrier syscall if available.)
-
Appending to an std::string character-by-character: how does the capacity grow?
folly provides functions to resize std::string & std::vector without initialization [0].
[0] https://github.com/facebook/folly/blob/3c8829785e3ce86cb821c...
-
A Compressed Indexable Bitset
> How is that relevant?
Roaring bitmaps and similar data structures get their speed from decoding together consecutive groups of elements, so if you do sequential decoding or decode a large fraction of the list you get excellent performance.
EF instead excels at random skipping, so if you visit a small fraction of the list you generally get better performance. This is why it works so well for inverted indexes, as generally the queries are very selective (otherwise why do you need an index?) and if you have good intersection algorithms you can skip a large fraction of documents.
I didn't follow the rest of your comment; select is what EF is good at, and every other data structure needs a lot more scanning once you land on the right chunk. With BMI2 you can also use the PDEP instruction to accelerate the final select on a 64-bit block: https://github.com/facebook/folly/blob/main/folly/experiment...
The EF core algorithm implemented in folly [3] may be a bit faster, and implementing partitioning on top of that is relatively easy.
It would definitely compress much better than roaring bitmaps. In terms of performance, it depends on the access patterns. If very sparse (large jumps) PEF would likely be faster, if dense (visit a large fraction of the bitmap) it'd be slower.
It is possible to squeeze a bit more compression out of PEF by introducing a chunk type for Elias-Fano of the chunk complement (for very dense chunks), but you lose the operation of skipping to a given position, which is however not needed in inverted indexes (you only need to skip past a given id, and that can be supported efficiently). That is not mentioned in the paper because at the time I thought the skip-to-position operation was a non-negotiable.
[1] https://github.com/ot/ds2i/
[2] https://github.com/pisa-engine/pisa
[3] https://github.com/facebook/folly/blob/main/folly/experiment...
-
How a Single Line of Code Made a 24-Core Server Slower Than a Laptop
Can't speak for abseil and tbb, but in folly there are a few solutions for the common problem of sharing state between a writer that updates it very infrequently and concurrent readers that read it very frequently (typical use case is configs).
The most performant solutions are RCU (https://github.com/facebook/folly/blob/main/folly/synchroniz...) and hazard pointers (https://github.com/facebook/folly/blob/main/folly/synchroniz...), but they're not quite as easy to use as a shared_ptr [1].
Then there is simil-shared_ptr implemented with thread-local counters (https://github.com/facebook/folly/blob/main/folly/experiment...).
If you absolutely need a std::shared_ptr (which can be the case if you're working with pre-existing interfaces), there is CoreCachedSharedPtr (https://github.com/facebook/folly/blob/main/folly/concurrenc...), which uses an aliasing trick to transparently maintain per-core reference counts and scales linearly. But it works only when acquiring the shared_ptr; any subsequent copies would still cause contention if passed around between threads.
[1] Google has a proposal to make a smart pointer based on RCU/hazptr, but I'm not a fan of it because generally RCU/hazptr guards need to be released in the same thread that acquired them, and hiding them in a freely movable object looks like a recipe for disaster to me, especially if paired with coroutines https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p05...
-
Ask HN: What are some of the most elegant codebases in your favorite language?
Not sure if it's still the case but about 6 years ago Facebook's folly C++ library was something I'd point to for my junior engineers to get a sense of "good" C++ https://github.com/facebook/folly
-
DynaMix 2.0.0 Released
Folly.Poly: https://github.com/facebook/folly/blob/master/folly/docs/Poly.md
-
Deduplicating a Slice in Go
Most modern hash map designs don't do this weird shuffle with buckets and linked lists because pointer chasing is super expensive.
https://doc.rust-lang.org/std/collections/struct.HashMap.htm...
Because it's documenting the actual API in the standard library, the Rust documentation even spells out that the result has "at least the specified capacity".
https://github.com/facebook/folly/blob/main/folly/container/...
F14 is a flat, open-addressing map, but I couldn't immediately find actual API documentation; however, it should have the same property: if you ask for an F14 map with a specific capacity, or you reserve enough capacity, that's an "at least" promise, not an approximate one.
-
rust-like traits on plain C++ with short macro (type erasure actually)
Or dyno or Poly or Not-Actually-Boost.TE or ...
What are some alternatives?
abseil-cpp - Abseil Common Libraries (C++)
Boost - Super-project for modularized Boost
Seastar - High performance server-side application framework
parallel-hashmap - A family of header-only, very fast and memory-friendly hashmap and btree containers.
EASTL - Obsolete repo, please go to: https://github.com/electronicarts/EASTL
Qt - Qt Base (Core, Gui, Widgets, Network, ...)
OpenFrameworks - openFrameworks is a community-developed cross platform toolkit for creative coding in C++.
cppcoro - A library of C++ coroutine abstractions for the coroutines TS
Cinder - Cinder is a community-developed, free and open source library for professional-quality creative coding in C++.
Loki - Loki is a C++ library of designs, containing flexible implementations of common design patterns and idioms.
STXXL - STXXL: Standard Template Library for Extra Large Data Sets
react-native-debugger - The standalone app based on official debugger of React Native, and includes React Inspector / Redux DevTools