parallel-hashmap VS gtl

Compare parallel-hashmap vs gtl and see how they differ.

              parallel-hashmap     gtl
Mentions      31                   5
Stars         2,307                89
Growth        -                    -
Activity      7.6                  7.1
Latest Commit 19 days ago          20 days ago
Language      C++                  C++
License       Apache License 2.0   Apache License 2.0
Mentions - the total number of mentions of a project that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

parallel-hashmap

Posts with mentions or reviews of parallel-hashmap. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-13.
  • The One Billion Row Challenge in CUDA: from 17 minutes to 17 seconds
    5 projects | news.ycombinator.com | 13 Apr 2024
    Standard library maps/unordered_maps are themselves notoriously slow anyway. A sparse_hash_map from abseil or parallel-hashmaps[1] would be better.

    [1] https://github.com/greg7mdp/parallel-hashmap

  • My own Concurrent Hash Map picks
    2 projects | /r/cpp | 27 Nov 2022
    Cool! Looking forward to you trying my phmap - and please let me know if you have any questions.
  • Boost 1.81 will have boost::unordered_flat_map...
    6 projects | /r/cpp | 31 Oct 2022
    I do this as well in my phmap and gtl implementations. It makes the tables look worse in benchmarks like the above, but prevents really bad surprises occasionally.
  • Comprehensive C++ Hashmap Benchmarks 2022
    3 projects | /r/cpp | 7 Sep 2022
    Thanks a lot for the great benchmark, Martin. Glad you used different hash functions, because I do sacrifice some speed to make sure that the performance of my hash maps doesn't degrade drastically with poor hash functions. Happy to see that my phmap and gtl (the C++20 version) performed well.
  • Can C++ maps be as efficient as Python dictionaries?
    1 project | /r/Cplusplus | 1 Aug 2022
    I use https://github.com/greg7mdp/parallel-hashmap when I need better performance of maps and sets.
  • How to build a Chess Engine, an interactive guide
    5 projects | news.ycombinator.com | 2 Jul 2022
    Then they should really try https://github.com/greg7mdp/parallel-hashmap, the current state of the art.
  • boost::unordered map is a new king of data structures
    10 projects | /r/cpp | 30 Jun 2022
    Unordered hash map shootout
    CMAP = https://github.com/tylov/STC
    KMAP = https://github.com/attractivechaos/klib
    PMAP = https://github.com/greg7mdp/parallel-hashmap
    FMAP = https://github.com/skarupke/flat_hash_map
    RMAP = https://github.com/martinus/robin-hood-hashing
    HMAP = https://github.com/Tessil/hopscotch-map
    TMAP = https://github.com/Tessil/robin-map
    UMAP = std::unordered_map

    Usage: shootout [n-million=40 key-bits=25]
    Random keys are in range [0, 2^25). Seed = 1656617916:

    T1: Insert/update random keys:
    KMAP: time: 1.949, size: 15064129, buckets: 33554432, sum: 165525449561381
    CMAP: time: 1.649, size: 15064129, buckets: 22145833, sum: 165525449561381
    PMAP: time: 2.434, size: 15064129, buckets: 33554431, sum: 165525449561381
    FMAP: time: 2.112, size: 15064129, buckets: 33554432, sum: 165525449561381
    RMAP: time: 1.708, size: 15064129, buckets: 33554431, sum: 165525449561381
    HMAP: time: 2.054, size: 15064129, buckets: 33554432, sum: 165525449561381
    TMAP: time: 1.645, size: 15064129, buckets: 33554432, sum: 165525449561381
    UMAP: time: 6.313, size: 15064129, buckets: 31160981, sum: 165525449561381

    T2: Insert sequential keys, then remove them in same order:
    KMAP: time: 1.173, size: 0, buckets: 33554432, erased 20000000
    CMAP: time: 1.651, size: 0, buckets: 33218751, erased 20000000
    PMAP: time: 3.840, size: 0, buckets: 33554431, erased 20000000
    FMAP: time: 1.722, size: 0, buckets: 33554432, erased 20000000
    RMAP: time: 2.359, size: 0, buckets: 33554431, erased 20000000
    HMAP: time: 0.849, size: 0, buckets: 33554432, erased 20000000
    TMAP: time: 0.660, size: 0, buckets: 33554432, erased 20000000
    UMAP: time: 2.138, size: 0, buckets: 31160981, erased 20000000

    T3: Remove random keys:
    KMAP: time: 1.973, size: 0, buckets: 33554432, erased 23367671
    CMAP: time: 2.020, size: 0, buckets: 33218751, erased 23367671
    PMAP: time: 2.940, size: 0, buckets: 33554431, erased 23367671
    FMAP: time: 1.147, size: 0, buckets: 33554432, erased 23367671
    RMAP: time: 1.941, size: 0, buckets: 33554431, erased 23367671
    HMAP: time: 1.135, size: 0, buckets: 33554432, erased 23367671
    TMAP: time: 1.064, size: 0, buckets: 33554432, erased 23367671
    UMAP: time: 5.632, size: 0, buckets: 31160981, erased 23367671

    T4: Iterate random keys:
    KMAP: time: 0.748, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    CMAP: time: 0.627, size: 23367671, buckets: 33218751, repeats: 8, sum: 4465059465719680
    PMAP: time: 0.680, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
    FMAP: time: 0.735, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    RMAP: time: 0.464, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
    HMAP: time: 0.719, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    TMAP: time: 0.662, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    UMAP: time: 6.168, size: 23367671, buckets: 31160981, repeats: 8, sum: 4465059465719680

    T5: Lookup random keys:
    KMAP: time: 0.943, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    CMAP: time: 0.863, size: 23367671, buckets: 33218751, lookups: 34235332, found: 29040438
    PMAP: time: 1.635, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
    FMAP: time: 0.969, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    RMAP: time: 1.705, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
    HMAP: time: 0.712, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    TMAP: time: 0.584, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    UMAP: time: 1.974, size: 23367671, buckets: 31160981, lookups: 34235332, found: 29040438
  • Is A* just always slow?
    3 projects | /r/gamedev | 26 Jun 2022
    std::unordered_map is notorious for being slow. Use a better implementation (I like the flat maps from here, which are the same as abseil’s). The question that needs to be asked too is whether you need to use a map at all.
  • New Boost.Unordered containers have BIG improvements!
    6 projects | /r/cpp | 13 Jun 2022
    A comparison against phmap would also be nice.
  • How to implement static typing in a C++ bytecode VM?
    2 projects | /r/ProgrammingLanguages | 8 Jun 2022
    std::unordered_map is perfectly fine. You can do better with external libraries, like parallel hashmap, but these tend to be drop-in replacements
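
Several of the posts above give the same advice: swap std::unordered_map for one of the flat maps in parallel-hashmap. A minimal sketch of what that drop-in replacement looks like (header path and type name taken from the greg7mdp/parallel-hashmap README; the library is header-only, so adding its include directory is the only build setup assumed here):

    // Minimal sketch: phmap::flat_hash_map used exactly like std::unordered_map.
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <parallel_hashmap/phmap.h>   // header-only; add the repo's include/ dir

    int main() {
        // Same interface as std::unordered_map: operator[], find(), erase(), ...
        phmap::flat_hash_map<std::string, std::uint64_t> counts;

        for (const auto* word : {"alpha", "beta", "alpha"})
            ++counts[word];

        if (auto it = counts.find(std::string{"alpha"}); it != counts.end())
            std::cout << "alpha seen " << it->second << " times\n";
    }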

gtl

Posts with mentions or reviews of gtl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-07.
  • Inside boost::concurrent_flat_map
    4 projects | /r/cpp | 7 Jul 2023
    gtl library author here. Very nice writeup! Reading it made me think, and I believe I know why gtl::parallel_flat_hash_map performs comparatively worse for high-skew scenarios (just pushed a fix in gtl).
  • Boost 1.81 will have boost::unordered_flat_map...
    6 projects | /r/cpp | 31 Oct 2022
    I do this as well in my phmap and gtl implementations. It makes the tables look worse in benchmarks like the above, but prevents really bad surprises occasionally.
  • Comprehensive C++ Hashmap Benchmarks 2022
    3 projects | /r/cpp | 7 Sep 2022
    Thanks a lot for the great benchmark, Martin. Glad you used different hash functions, because I do sacrifice some speed to make sure that the performance of my hash maps doesn't degrade drastically with poor hash functions. Happy to see that my phmap and gtl (the C++20 version) performed well.
  • It is now trivial to cache pure functions with a highly efficient, concurrent cache.
    1 project | /r/cpp | 3 Jul 2022
    This is very easy to do with the latest version of gtl. And it is extremely efficient, as the caching mechanism uses the parallel hashmap, which internally is divided into N submaps each with its own mutex, reducing mutex contention to a minimum.
  • Updating map_benchmarks: Send your hashmaps!
    13 projects | /r/cpp | 16 Jun 2022
    AFAIK sparsepp has been dropped entirely in favor of the containers in GTL: https://github.com/greg7mdp/gtl
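
The caching post above relies on the structure that makes gtl's parallel maps usable from multiple threads: the table is split into 2^N submaps, each guarded by its own mutex when a real mutex type is supplied. Below is a rough sketch of that setup, assuming gtl keeps phmap's header layout (gtl/phmap.hpp) and template-parameter order (hash, equality, allocator, submap-count exponent, mutex); check the gtl README for the exact signature.

    // Sketch: a gtl::parallel_flat_hash_map shared by several threads.
    // ASSUMPTION: template parameters follow phmap's convention,
    //   <K, V, Hash, Eq, Alloc, N (log2 of submap count), Mutex>.
    #include <functional>
    #include <iostream>
    #include <memory>
    #include <mutex>
    #include <thread>
    #include <utility>
    #include <vector>
    #include <gtl/phmap.hpp>

    using Map = gtl::parallel_flat_hash_map<
        int, int,
        std::hash<int>,
        std::equal_to<int>,
        std::allocator<std::pair<const int, int>>,
        4,            // 2^4 = 16 submaps, each with its own lock
        std::mutex>;  // a real mutex makes the map safe for concurrent use

    int main() {
        Map shared;
        std::vector<std::thread> workers;

        for (int t = 0; t < 4; ++t)
            workers.emplace_back([&shared, t] {
                for (int i = 0; i < 10000; ++i)
                    shared.try_emplace(t * 10000 + i, i);  // contention is per submap
            });

        for (auto& w : workers) w.join();
        std::cout << "entries: " << shared.size() << "\n";
    }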

What are some alternatives?

When comparing parallel-hashmap and gtl you can also consider the following projects:

Folly - An open-source C++ library developed and used at Facebook.

eytzinger - Cache-friendly associative STL-like container with an Eytzinger (BFS) layout for C++

robin-hood-hashing - Fast & memory efficient hashtable based on robin hood hashing for C++11/14/17/20

Google Test - Google Testing and Mocking Framework

libcuckoo - A high-performance, concurrent hash table

flat_hash_map - A very fast hashtable

rust-phf - Compile time static maps for Rust

fph-table - Flash Perfect Hash Table: an implementation of a dynamic perfect hash table, extremely fast for lookup

google-sparsehash - Clone of google-sparsehash

tracy - Frame profiler

libcudacxx - [ARCHIVED] The C++ Standard Library for your entire system. See https://github.com/NVIDIA/cccl