Folly
Seastar
| | Folly | Seastar |
|---|---|---|
| Mentions | 88 | 25 |
| Stars | 26,926 | 7,954 |
| Growth | 1.0% | 1.4% |
| Activity | 9.8 | 9.7 |
| Latest commit | 7 days ago | 7 days ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Folly
-
A lock-free ring-buffer with contiguous reservations (2019)
My interpretation is that with release semantics for the store, the 2nd read (load) in Thread 1 is actually allowed to be reordered before the release store to the hazard pointer. But they are not very explicit about it.
> So if thread 2 removing the pointer happens first, thread 1 will see a different value on its second read and not attempt to dereference it.
Thread 1 will see thread 2's remove even with release semantics for that store -- the store has a data dependency on the first load; they cannot be reordered.
> If thread 1 writes to its hazard pointer first, the garbage collector is guaranteed to see that value and not delete the node.
Yeah, this must be it. Thread 1 fails to notice the GC happened while it was writing its HP because its second load actually happened before the HP store.
Folly's hazard pointer implementation uses a release store to update the hazard pointer (here: reset_protection()), but uses some sort of SeqCst barrier between the store and the 2nd load (with acquire semantics): https://github.com/facebook/folly/blob/main/folly/synchroniz...
To set a HP on Linux, Folly just does a relaxed load of the src pointer, a release store of the HP, a compiler-only barrier, and an acquire load. (This prevents the compiler from reordering the 2nd load before the store, right? But to my understanding it does not prevent a hypothetical CPU reordering of the 2nd load before the store, which seems potentially problematic!)
Then on the GC/reclaim side of things, after protected object pointers are stored, it does a more expensive barrier[0] before acquire-loading the HPs.
I'll admit, I am not confident I understand why this works. I mean, even on x86, loads can be reordered before earlier program-order stores. So it seems like the 2nd check on the protection side could be ineffective. (The non-Linux portable version just uses an atomic_thread_fence SeqCst on both sides, which seems more obviously correct.) And if they don't need the 2nd load on Linux, I'm unclear on why they do it.
[0]: https://github.com/facebook/folly/blob/main/folly/synchroniz...
(This uses either mprotect to force a TLB flush in process-relevant CPUs, or the newer Linux membarrier syscall if available.)
-
Appending to an std::string character-by-character: how does the capacity grow?
folly provides functions to resize std::string & std::vector without initialization [0].
[0] https://github.com/facebook/folly/blob/3c8829785e3ce86cb821c...
-
A Compressed Indexable Bitset
> How is that relevant?
Roaring bitmaps and similar data structures get their speed from decoding consecutive groups of elements together, so if you decode sequentially or decode a large fraction of the list you get excellent performance.
EF instead excels at random skipping, so if you visit a small fraction of the list you generally get better performance. This is why it works so well for inverted indexes, as generally the queries are very selective (otherwise why do you need an index?) and if you have good intersection algorithms you can skip a large fraction of documents.
I didn't follow the rest of your comment; select is exactly what EF is good at, and every other data structure needs a lot more scanning once you land on the right chunk. With BMI2 you can also use the PDEP instruction to accelerate the final select on a 64-bit block: https://github.com/facebook/folly/blob/main/folly/experiment...
The EF core algorithm implemented in folly [3] may be a bit faster, and implementing partitioning on top of that is relatively easy.
It would definitely compress much better than roaring bitmaps. In terms of performance, it depends on the access patterns. If very sparse (large jumps) PEF would likely be faster, if dense (visit a large fraction of the bitmap) it'd be slower.
It is possible to squeeze a bit more compression out of PEF by introducing a chunk type for Elias-Fano of the chunk complement (for very dense chunks), but you lose the ability to skip to a given position. That operation is, however, not needed in inverted indexes: you only need to skip past a given id, and that can be supported efficiently. It is not mentioned in the paper because at the time I thought the skip-to-position operation was non-negotiable.
[1] https://github.com/ot/ds2i/
[2] https://github.com/pisa-engine/pisa
[3] https://github.com/facebook/folly/blob/main/folly/experiment...
-
How a Single Line of Code Made a 24-Core Server Slower Than a Laptop
Can't speak for abseil and tbb, but in folly there are a few solutions for the common problem of sharing state between a writer that updates it very infrequently and concurrent readers that read it very frequently (typical use case is configs).
The most performant solutions are RCU (https://github.com/facebook/folly/blob/main/folly/synchroniz...) and hazard pointers (https://github.com/facebook/folly/blob/main/folly/synchroniz...), but they're not quite as easy to use as a shared_ptr [1].
Then there is a shared_ptr-like structure implemented with thread-local counters (https://github.com/facebook/folly/blob/main/folly/experiment...).
If you absolutely need a std::shared_ptr (which can be the case if you're working with pre-existing interfaces) there is CoreCachedSharedPtr (https://github.com/facebook/folly/blob/main/folly/concurrenc...), which uses an aliasing trick to transparently maintain per-core reference counts and scales linearly. However, it only helps when acquiring the shared_ptr; any subsequent copies would still cause contention if passed around between threads.
[1] Google has a proposal to make a smart pointer based on RCU/hazptr, but I'm not a fan of it because generally RCU/hazptr guards need to be released in the same thread that acquired them, and hiding them in a freely movable object looks like a recipe for disaster to me, especially if paired with coroutines https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p05...
-
Ask HN: What are some of the most elegant codebases in your favorite language?
Not sure if it's still the case, but about 6 years ago Facebook's Folly C++ library was something I'd point my junior engineers to for a sense of "good" C++: https://github.com/facebook/folly
-
DynaMix 2.0.0 Released
Folly.Poly: https://github.com/facebook/folly/blob/master/folly/docs/Poly.md
-
Deduplicating a Slice in Go
Most modern hash map designs don't do this weird shuffle with buckets and linked lists because pointer chasing is super expensive.
https://doc.rust-lang.org/std/collections/struct.HashMap.htm...
Because it's documenting the actual API in the standard, this even spells out that the result has "at least the specified capacity".
https://github.com/facebook/folly/blob/main/folly/container/...
F14 is a flat, open-addressing map, but I couldn't immediately find actual API documentation. However, it should have the same property: if you ask for an F14 with a specific capacity, or you reserve enough capacity, that's an "at least" promise, not an approximate one.
-
rust-like traits on plain C++ with short macro (type erasure actually)
Or dyno or Poly or Not-Actually-Boost.TE or ...
Seastar
-
I want to share my latest hobby project, dbeel: A distributed thread-per-core nosql db written in rust
I used glommio as the async executor (instead of something like tokio), and it is wonderful. For people wondering whether it's "good enough" or whether to use C++ and Seastar (as I thought about a lot before starting this project): take the leap of faith. It's fast, both at run time and in how quickly you can write code.
-
How much reason is there to be multi-threaded in the k8s environment
b) It's proven now (e.g. Seastar, Glommio) that the fastest way to run a multi-threaded application is to have one instance with one thread pinned per CPU core, with fibers/lightweight threads on top handling all of the asynchronous code. Your approach of lots of instances is the slowest, as there will be a ton of unnecessary thread context switching.
-
Are You Sure You Want to Use MMAP in Your Database Management System?
The most common example is DPDK [1]. It's a framework for building bespoke networking stacks that are usable from userspace, without involving the kernel.
You'll find DPDK mentioned a lot in the networking/HPC/data center literature. An example of a backend framework that uses DPDK is the Seastar framework [2]. Also, I recently stumbled upon a paper on efficient RPC networks in data centers [3].
If you want to learn more, the p99 conference by ScyllaDB has tons of speakers talking about some interesting challenges.
-
Why does Actix-web's handler not require Send?
I assume Tokio itself; see e.g. monoio or glommio, but also Seastar for C++.
-
What are some C++ projects with high quality code that I can read through?
Seastar, which is a thread-per-core runtime written by the Scylla devs that's used as the underlying runtime in both Redpanda and Scylla. https://github.com/scylladb/seastar
-
Modern JVM Multithreading • Paweł Jurczenko • Devoxx Poland 2021
I’ve seen frameworks for c++ (https://seastar.io/) and rust (https://github.com/actix/actix) which support what you’re describing out of the box.
-
Who is using C++ for web development?
If you're interested in scaling and asynchronous programming in c++ I highly recommend you investigate the SeaStar application framework. You wouldn't build a web service with SeaStar, rather you would build the infrastructure that you would use to build the web service on top of. https://github.com/scylladb/seastar
-
Why we built our streaming data platform in C++
C++ also allows us to control as much as possible from the platform. Through the efficiency of our own code, combined with the amazing Seastar framework and other best-in-class libraries, Redpanda speaks directly to the hardware. It only depends on the Linux kernel to launch the process, after which Redpanda is very deterministic in terms of performance, runtime characteristics, memory utilization, and CPU speed. We own the entire end-to-end experience, which provides safety and allows Redpanda to build impactful products.
-
Do Not Let C++ Become A Victim Of Suggestive Terminology
-
How to make an HTTP client from scratch
The Seastar framework offers a great HTTP server implementation, which is used by ScyllaDB and Redpanda. However, Seastar doesn't have an HTTP client library that can be easily used with the Seastar framework. So we made one.
What are some alternatives?
abseil-cpp - Abseil Common Libraries (C++)
Boost - Super-project for modularized Boost
glommio - Glommio is a thread-per-core crate that makes writing highly parallel asynchronous applications in a thread-per-core architecture easier for rustaceans.
parallel-hashmap - A family of header-only, very fast and memory-friendly hashmap and btree containers.
EASTL - Obsolete repo, please go to: https://github.com/electronicarts/EASTL
Boost.Asio - Asio C++ Library
Qt - Qt Base (Core, Gui, Widgets, Network, ...)
OpenFrameworks - openFrameworks is a community-developed cross platform toolkit for creative coding in C++.
cppcoro - A library of C++ coroutine abstractions for the coroutines TS
ffead-cpp - Framework for Enterprise Application Development in c++, HTTP1/HTTP2/HTTP3 compliant, Supports multiple server backends