libunifex vs parallel-hashmap

| | libunifex | parallel-hashmap |
|---|---|---|
| Mentions | 22 | 31 |
| Stars | 1,366 | 2,326 |
| Growth | 2.5% | - |
| Activity | 7.6 | 7.8 |
| Last commit | 10 days ago | 29 days ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
libunifex
-
Comparing asio to unifex
I'm curious what led you to this conclusion. If you ran into scalability issues with its static_thread_pool, then that's a known issue. If it's something else, the authors (of which I'm one) would love to know.
-
How does one actually build a C++ project
Instead of calling add_executable, you call add_library. Here is an (only moderately complicated) production example of a library that can be built standalone (along with tests and example executables) or as a subproject, in which case it builds only the library.
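A minimal sketch of that layout (hypothetical project and target names; the standalone-vs-subproject check uses the common CMake idiom of comparing source directories):

```cmake
cmake_minimum_required(VERSION 3.14)
project(mylib LANGUAGES CXX)

# The library target itself -- always built, standalone or as a subproject.
add_library(mylib src/mylib.cpp)
target_include_directories(mylib PUBLIC include)
target_compile_features(mylib PUBLIC cxx_std_17)

# Only build tests and examples when this is the top-level project,
# i.e. not when pulled in via add_subdirectory() by a parent project.
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    enable_testing()
    add_subdirectory(tests)
    add_subdirectory(examples)
endif()
```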
-
How to write networking code now that will be easiest to adapt to the upcoming standard?
My original thought was to build my DDS implementation on top of libunifex in anticipation for standardization: https://github.com/facebookexperimental/libunifex
-
Executors/libunifex example project
I'm trying to understand how to work with the proposed executors in a project, but after watching Eric Niebler's cppcon talks (https://youtu.be/xLboNIf7BTg) and looking at the libunifex examples (https://github.com/facebookexperimental/libunifex/tree/main/examples) I still have a hard time wrapping my head around how to employ the sender/receiver pattern in a larger project.
-
Async/Await pattern in C++
You have coroutines in C++20, but there is also the executors proposal making its way toward C++23, which is available as a library under the name unifex and only requires C++14.
-
Using Asio for asynchronous gRPC clients and servers
Asio-grpc makes exactly that possible by providing an Asio execution_context compatible interface to the CompletionQueue. It supports all types of RPCs (including generic ones), completion tokens, cancellation, as well as libunifex sender/receiver (if you want to try out what might become std::execution). The latest release (v1.7.0) also introduced a GrpcStream class for writing Rust/Golang select-style code.
-
My thoughts and dreams about a standard user-space I/O scheduler
P2300: they are trying to standardize facebookexperimental/libunifex
-
"C++ makes it harder to shoot yourself, but when you do it blows your whole leg off"
All the network handling for Instagram and all other Meta apps on all platforms is handled by their own C++ library https://github.com/facebookexperimental/libunifex.
-
State of the art for CPOs (customization points) in C++?
This. I'd also like to mention libunifex. It's entirely based on tag_invoke and is a testament as to how much power it actually provides. On the other hand, it also proves how cumbersome it is to define CPOs with tag_invoke. But IMO it's a lot better than anything else anyone has ever created, and users usually don't need to define new CPOs, only library writers do, so there's that.
-
Why do we need networking, executors, linear algebra, etc in the Standard Library?
A work in progress implementation of the library: https://github.com/facebookexperimental/libunifex
parallel-hashmap
-
The One Billion Row Challenge in CUDA: from 17 minutes to 17 seconds
Standard library maps/unordered_maps are themselves notoriously slow anyway. A sparse_hash_map from abseil or parallel-hashmap[1] would be better.
[1] https://github.com/greg7mdp/parallel-hashmap
-
My own Concurrent Hash Map picks
Cool! Looking forward to you trying my phmap - and please let me know if you have any question.
-
Boost 1.81 will have boost::unordered_flat_map...
I do this as well in my phmap and gtl implementations. It makes the tables look worse in benchmarks like the above, but prevents really bad surprises occasionally.
-
Comprehensive C++ Hashmap Benchmarks 2022
Thanks a lot for the great benchmark, Martin. Glad you used different hash functions, because I do sacrifice some speed to make sure that the performance of my hash maps doesn't degrade drastically with poor hash functions. Happy to see that my phmap and gtl (the C++20 version) performed well.
-
Can C++ maps be as efficient as Python dictionaries ?
I use https://github.com/greg7mdp/parallel-hashmap when I need better performance of maps and sets.
-
How to build a Chess Engine, an interactive guide
Then they should really try https://github.com/greg7mdp/parallel-hashmap, the current state of the art.
-
boost::unordered map is a new king of data structures
Unordered hash map shootout

CMAP = https://github.com/tylov/STC
KMAP = https://github.com/attractivechaos/klib
PMAP = https://github.com/greg7mdp/parallel-hashmap
FMAP = https://github.com/skarupke/flat_hash_map
RMAP = https://github.com/martinus/robin-hood-hashing
HMAP = https://github.com/Tessil/hopscotch-map
TMAP = https://github.com/Tessil/robin-map
UMAP = std::unordered_map

Usage: shootout [n-million=40 key-bits=25]
Random keys are in range [0, 2^25). Seed = 1656617916:

T1: Insert/update random keys:
KMAP: time: 1.949, size: 15064129, buckets: 33554432, sum: 165525449561381
CMAP: time: 1.649, size: 15064129, buckets: 22145833, sum: 165525449561381
PMAP: time: 2.434, size: 15064129, buckets: 33554431, sum: 165525449561381
FMAP: time: 2.112, size: 15064129, buckets: 33554432, sum: 165525449561381
RMAP: time: 1.708, size: 15064129, buckets: 33554431, sum: 165525449561381
HMAP: time: 2.054, size: 15064129, buckets: 33554432, sum: 165525449561381
TMAP: time: 1.645, size: 15064129, buckets: 33554432, sum: 165525449561381
UMAP: time: 6.313, size: 15064129, buckets: 31160981, sum: 165525449561381

T2: Insert sequential keys, then remove them in same order:
KMAP: time: 1.173, size: 0, buckets: 33554432, erased 20000000
CMAP: time: 1.651, size: 0, buckets: 33218751, erased 20000000
PMAP: time: 3.840, size: 0, buckets: 33554431, erased 20000000
FMAP: time: 1.722, size: 0, buckets: 33554432, erased 20000000
RMAP: time: 2.359, size: 0, buckets: 33554431, erased 20000000
HMAP: time: 0.849, size: 0, buckets: 33554432, erased 20000000
TMAP: time: 0.660, size: 0, buckets: 33554432, erased 20000000
UMAP: time: 2.138, size: 0, buckets: 31160981, erased 20000000

T3: Remove random keys:
KMAP: time: 1.973, size: 0, buckets: 33554432, erased 23367671
CMAP: time: 2.020, size: 0, buckets: 33218751, erased 23367671
PMAP: time: 2.940, size: 0, buckets: 33554431, erased 23367671
FMAP: time: 1.147, size: 0, buckets: 33554432, erased 23367671
RMAP: time: 1.941, size: 0, buckets: 33554431, erased 23367671
HMAP: time: 1.135, size: 0, buckets: 33554432, erased 23367671
TMAP: time: 1.064, size: 0, buckets: 33554432, erased 23367671
UMAP: time: 5.632, size: 0, buckets: 31160981, erased 23367671

T4: Iterate random keys:
KMAP: time: 0.748, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
CMAP: time: 0.627, size: 23367671, buckets: 33218751, repeats: 8, sum: 4465059465719680
PMAP: time: 0.680, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
FMAP: time: 0.735, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
RMAP: time: 0.464, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
HMAP: time: 0.719, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
TMAP: time: 0.662, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
UMAP: time: 6.168, size: 23367671, buckets: 31160981, repeats: 8, sum: 4465059465719680

T5: Lookup random keys:
KMAP: time: 0.943, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
CMAP: time: 0.863, size: 23367671, buckets: 33218751, lookups: 34235332, found: 29040438
PMAP: time: 1.635, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
FMAP: time: 0.969, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
RMAP: time: 1.705, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
HMAP: time: 0.712, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
TMAP: time: 0.584, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
UMAP: time: 1.974, size: 23367671, buckets: 31160981, lookups: 34235332, found: 29040438
-
Is A* just always slow?
std::unordered_map is notorious for being slow. Use a better implementation (I like the flat maps from here, which are the same as abseil's). The question that also needs to be asked is whether you need to use a map at all.
-
New Boost.Unordered containers have BIG improvements!
A comparison against phmap would also be nice.
-
How to implement static typing in a C++ bytecode VM?
std::unordered_map is perfectly fine. You can do better with external libraries, such as parallel-hashmap, and these tend to be drop-in replacements.
What are some alternatives?
cppcoro - A library of C++ coroutine abstractions for the coroutines TS
Folly - An open-source C++ library developed and used at Facebook.
concurrencpp - Modern concurrency for C++. Tasks, executors, timers and C++20 coroutines to rule them all
robin-hood-hashing - Fast & memory efficient hashtable based on robin hood hashing for C++11/14/17/20
Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System
libcuckoo - A high-performance, concurrent hash table
Restbed - Corvusoft's Restbed framework brings asynchronous RESTful functionality to C++14 applications.
rust-phf - Compile time static maps for Rust
corrade - C++11 multiplatform utility library
flat_hash_map - A very fast hashtable
Boost.Beast - HTTP and WebSocket built on Boost.Asio in C++11
tracy - Frame profiler