parallel-hashmap VS abseil-cpp

Compare parallel-hashmap vs abseil-cpp and see how they differ.

                 parallel-hashmap     abseil-cpp
Mentions         31                   54
Stars            2,316                13,917
Growth           -                    2.4%
Activity         7.8                  9.5
Last commit      21 days ago          4 days ago
Language         C++                  C++
License          Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

parallel-hashmap

Posts with mentions or reviews of parallel-hashmap. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-13.
  • The One Billion Row Challenge in CUDA: from 17 minutes to 17 seconds
    5 projects | news.ycombinator.com | 13 Apr 2024
    Standard library maps/unordered_maps are themselves notoriously slow anyway. A sparse_hash_map from abseil or parallel-hashmap[1] would be better (a sketch follows below).

    [1] https://github.com/greg7mdp/parallel-hashmap
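
    A hedged sketch of the kind of swap being suggested (an editorial addition, not part of the quoted post); it assumes the intended abseil container is absl::flat_hash_map, since abseil-cpp ships flat_hash_map rather than a sparse_hash_map:

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include "absl/container/flat_hash_map.h"  // abseil-cpp's open-addressing hash map

    int main() {
        // Same day-to-day interface as std::unordered_map (operator[], find, erase),
        // but flat, open-addressing storage typically performs much better.
        absl::flat_hash_map<std::string, int64_t> totals;
        totals["station_a"] += 42;
        totals["station_b"] += 7;

        if (auto it = totals.find("station_a"); it != totals.end()) {
            std::cout << it->first << " = " << it->second << "\n";
        }
        return 0;
    }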

  • My own Concurrent Hash Map picks
    2 projects | /r/cpp | 27 Nov 2022
    Cool! Looking forward to you trying my phmap - and please let me know if you have any questions.
  • Boost 1.81 will have boost::unordered_flat_map...
    6 projects | /r/cpp | 31 Oct 2022
    I do this as well in my phmap and gtl implementations. It makes the tables look worse in benchmarks like the above, but prevents really bad surprises occasionally.
  • Comprehensive C++ Hashmap Benchmarks 2022
    3 projects | /r/cpp | 7 Sep 2022
    Thanks a lot for the great benchmark, Martin. Glad you used different hash functions, because I do sacrifice some speed to make sure that the performance of my hash maps doesn't degrade drastically with poor hash functions. Happy to see that my phmap and gtl (the C++20 version) performed well.
  • Can C++ maps be as efficient as Python dictionaries?
    1 project | /r/Cplusplus | 1 Aug 2022
    I use https://github.com/greg7mdp/parallel-hashmap when I need better performance of maps and sets.
  • How to build a Chess Engine, an interactive guide
    5 projects | news.ycombinator.com | 2 Jul 2022
    Then they should really try https://github.com/greg7mdp/parallel-hashmap, the current state of the art.
  • boost::unordered map is a new king of data structures
    10 projects | /r/cpp | 30 Jun 2022
    Unordered hash map shootout
    CMAP = https://github.com/tylov/STC
    KMAP = https://github.com/attractivechaos/klib
    PMAP = https://github.com/greg7mdp/parallel-hashmap
    FMAP = https://github.com/skarupke/flat_hash_map
    RMAP = https://github.com/martinus/robin-hood-hashing
    HMAP = https://github.com/Tessil/hopscotch-map
    TMAP = https://github.com/Tessil/robin-map
    UMAP = std::unordered_map

    Usage: shootout [n-million=40 key-bits=25]
    Random keys are in range [0, 2^25). Seed = 1656617916:

    T1: Insert/update random keys:
    KMAP: time: 1.949, size: 15064129, buckets: 33554432, sum: 165525449561381
    CMAP: time: 1.649, size: 15064129, buckets: 22145833, sum: 165525449561381
    PMAP: time: 2.434, size: 15064129, buckets: 33554431, sum: 165525449561381
    FMAP: time: 2.112, size: 15064129, buckets: 33554432, sum: 165525449561381
    RMAP: time: 1.708, size: 15064129, buckets: 33554431, sum: 165525449561381
    HMAP: time: 2.054, size: 15064129, buckets: 33554432, sum: 165525449561381
    TMAP: time: 1.645, size: 15064129, buckets: 33554432, sum: 165525449561381
    UMAP: time: 6.313, size: 15064129, buckets: 31160981, sum: 165525449561381

    T2: Insert sequential keys, then remove them in same order:
    KMAP: time: 1.173, size: 0, buckets: 33554432, erased 20000000
    CMAP: time: 1.651, size: 0, buckets: 33218751, erased 20000000
    PMAP: time: 3.840, size: 0, buckets: 33554431, erased 20000000
    FMAP: time: 1.722, size: 0, buckets: 33554432, erased 20000000
    RMAP: time: 2.359, size: 0, buckets: 33554431, erased 20000000
    HMAP: time: 0.849, size: 0, buckets: 33554432, erased 20000000
    TMAP: time: 0.660, size: 0, buckets: 33554432, erased 20000000
    UMAP: time: 2.138, size: 0, buckets: 31160981, erased 20000000

    T3: Remove random keys:
    KMAP: time: 1.973, size: 0, buckets: 33554432, erased 23367671
    CMAP: time: 2.020, size: 0, buckets: 33218751, erased 23367671
    PMAP: time: 2.940, size: 0, buckets: 33554431, erased 23367671
    FMAP: time: 1.147, size: 0, buckets: 33554432, erased 23367671
    RMAP: time: 1.941, size: 0, buckets: 33554431, erased 23367671
    HMAP: time: 1.135, size: 0, buckets: 33554432, erased 23367671
    TMAP: time: 1.064, size: 0, buckets: 33554432, erased 23367671
    UMAP: time: 5.632, size: 0, buckets: 31160981, erased 23367671

    T4: Iterate random keys:
    KMAP: time: 0.748, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    CMAP: time: 0.627, size: 23367671, buckets: 33218751, repeats: 8, sum: 4465059465719680
    PMAP: time: 0.680, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
    FMAP: time: 0.735, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    RMAP: time: 0.464, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
    HMAP: time: 0.719, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    TMAP: time: 0.662, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    UMAP: time: 6.168, size: 23367671, buckets: 31160981, repeats: 8, sum: 4465059465719680

    T5: Lookup random keys:
    KMAP: time: 0.943, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    CMAP: time: 0.863, size: 23367671, buckets: 33218751, lookups: 34235332, found: 29040438
    PMAP: time: 1.635, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
    FMAP: time: 0.969, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    RMAP: time: 1.705, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
    HMAP: time: 0.712, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    TMAP: time: 0.584, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    UMAP: time: 1.974, size: 23367671, buckets: 31160981, lookups: 34235332, found: 29040438
  • Is A* just always slow?
    3 projects | /r/gamedev | 26 Jun 2022
    std::unordered_map is notorious for being slow. Use a better implementation (I like the flat maps from here, which are the same as abseil’s). The question that also needs to be asked is whether you need to use a map at all.
  • New Boost.Unordered containers have BIG improvements!
    6 projects | /r/cpp | 13 Jun 2022
    A comparison against phmap would also be nice.
  • How to implement static typing in a C++ bytecode VM?
    2 projects | /r/ProgrammingLanguages | 8 Jun 2022
    std::unordered_map is perfectly fine. You can do better with external libraries, like parallel-hashmap, but these tend to be drop-in replacements (a sketch of such a swap follows below).
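
    To make the drop-in point concrete, here is a minimal sketch (an editorial addition, not from the quoted post) using phmap::flat_hash_map from parallel-hashmap; the header and type name follow the project's README, and the std::unordered_map declaration it replaces is kept as a comment:

    #include <iostream>
    #include <string>
    #include <parallel_hashmap/phmap.h>  // header-only, from greg7mdp/parallel-hashmap

    // Previously: std::unordered_map<std::string, int> word_count;
    using WordCount = phmap::flat_hash_map<std::string, int>;

    int main() {
        WordCount word_count;
        ++word_count["map"];   // same operator[], insert, find, erase API
        ++word_count["set"];
        ++word_count["map"];
        std::cout << "map appears " << word_count["map"] << " times\n";
        return 0;
    }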

abseil-cpp

Posts with mentions or reviews of abseil-cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-27.
  • Sane C++ Libraries
    7 projects | news.ycombinator.com | 27 Jan 2024
  • Open source collection of Google's C++ libraries
    1 project | news.ycombinator.com | 5 Jan 2024
  • Is Ada safer than Rust?
    2 projects | news.ycombinator.com | 2 Dec 2023
  • Appending to an std:string character-by-character: how does the capacity grow?
    2 projects | news.ycombinator.com | 26 Oct 2023
    Yeah, it's nice! And Abseil does it, IFF you use LLVM libc++.

    https://github.com/abseil/abseil-cpp/blob/master/absl/string...

    The standard adopted it as resize_and_overwrite, which I think is a little clunky (a rough sketch follows below).
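
    For context, here is a minimal sketch (not from the quoted comment) of the C++23 std::string::resize_and_overwrite interface being discussed; it assumes a C++23 standard library:

    #include <cstdio>
    #include <cstring>
    #include <string>

    int main() {
        std::string greeting;

        // Grow the buffer once and let the callback write into it directly,
        // returning the number of characters actually produced.
        greeting.resize_and_overwrite(32, [](char* buf, std::size_t /*capacity*/) {
            const char msg[] = "hello, resize_and_overwrite";
            const std::size_t len = std::strlen(msg);  // 27, well under the requested 32
            std::memcpy(buf, msg, len);
            return len;  // becomes the string's final size
        });

        std::printf("%s (size = %zu)\n", greeting.c_str(), greeting.size());
        return 0;
    }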

  • Shaving 40% Off Google’s B-Tree Implementation with Go Generics
    3 projects | news.ycombinator.com | 19 Sep 2023
    This may be confusing to those familiar with Google's libraries. The baseline is the Go BTree, which I had personally never heard of until just now, not the C++ absl::btree_set. The benchmarks aren't directly comparable, but the C++ version also comes with good microbenchmark coverage.

    https://github.com/google/btree

    https://github.com/abseil/abseil-cpp/blob/master/absl/contai...

  • Faster Sorting Beyond DeepMind’s AlphaDev
    1 project | news.ycombinator.com | 19 Sep 2023
  • “Once” one-time concurrent initialization with an integer
    2 projects | news.ycombinator.com | 1 Aug 2023
    An implementation of call_once that accommodates callbacks that throw: https://github.com/abseil/abseil-cpp/blob/master/absl/base/c...
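
    As a rough illustration of the semantics being referred to (an editorial sketch using std::call_once rather than abseil's implementation): if the callable throws, the once-flag is left unset and a later call gets another chance to run the initializer.

    #include <iostream>
    #include <mutex>
    #include <stdexcept>

    std::once_flag init_flag;

    void Init(bool fail) {
        if (fail) throw std::runtime_error("init failed");
        std::cout << "initialized\n";
    }

    int main() {
        // The first attempt throws, so the flag is not set ("exceptional" return)...
        try {
            std::call_once(init_flag, Init, /*fail=*/true);
        } catch (const std::exception& e) {
            std::cout << "first attempt threw: " << e.what() << "\n";
        }
        // ...and the second attempt runs the initializer again and succeeds.
        std::call_once(init_flag, Init, /*fail=*/false);
        return 0;
    }
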
  • [R] AlphaDev discovers faster sorting algorithms
    2 projects | /r/MachineLearning | 7 Jun 2023
    I wouldn't say it's that cryptic. It's just a few bitwise rotations/shifts/xor operations.
  • Deepmind Alphadev: Faster sorting algorithms discovered using deep RL
    3 projects | news.ycombinator.com | 7 Jun 2023
    You can see the hashing optimizations as well: https://www.deepmind.com/blog/alphadev-discovers-faster-sort..., https://github.com/abseil/abseil-cpp/commit/74eee2aff683cc7d...

    I was one of the members who expertly reviewed what was done in both sorting and hashing. Overall it's more about assembly: finding missed compiler optimizations and balancing correctness against distribution (in hashing in particular).

    It was not revolutionary in the sense that it did not find completely new approaches; it converged to something incomprehensible to humans but relatively good for performance, which proves the point that optimal programs are very inhuman.

    Note that for the sorting instructions, removing an instruction does not always lead to better performance; for example, instructions can run in parallel, so the effect can be less pronounced. Benchmarks can lie, and the compiler could do something differently when recompiling the changed sort3 function. There was some evidence that the effect could come from the other side.

    For hashing it was even funnier: very small strings (up to 64 bits) already used 3 instructions, roughly add some constant -> multiply 64x64 -> xor upper/lower (sketched below). For bigger inputs the question becomes more complicated; that's why 9-16 bytes was a better spot, and it simplified from 2 multiplications to just one plus a rotation. Distribution on real workloads was good, it almost passed SMHasher, and we decided it was good enough to try out in prod. We did not roll back, as you can see from abseil :)

    But even given all that, it was fascinating to watch how this system searched and was able to find that particular programs could be further simplified. Kudos to everyone involved; it's a great incremental change that can bring more results in the future.
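
    To make the "3 instructions" shape concrete, here is a hedged sketch of that add-constant -> 64x64 multiply -> xor-of-halves pattern for a key that fits in 64 bits; the constant is an arbitrary placeholder and the function is an illustration of the described pattern, not abseil's actual hash code (it also assumes a compiler with unsigned __int128, e.g. GCC or Clang):

    #include <cstdint>
    #include <cstdio>

    // Placeholder constant; abseil's real value is not reproduced here.
    constexpr std::uint64_t kMixConstant = 0x9e3779b97f4a7c15ULL;

    // add a constant -> full 64x64 multiply -> xor the upper and lower halves.
    std::uint64_t MixSmall(std::uint64_t key) {
        unsigned __int128 product =
            static_cast<unsigned __int128>(key + kMixConstant) * kMixConstant;
        return static_cast<std::uint64_t>(product) ^
               static_cast<std::uint64_t>(product >> 64);
    }

    int main() {
        std::printf("%016llx\n", static_cast<unsigned long long>(MixSmall(42)));
        return 0;
    }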

  • Backward compatible implementations of newer standards constructs?
    5 projects | /r/cpp_questions | 24 May 2023
    Check out https://abseil.io. It offers absl::optional, which is a backport of std::optional.
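
    A minimal usage sketch (an editorial addition): absl::optional lives in absl/types/optional.h, mirrors the std::optional interface, and on C++17 and newer toolchains is an alias for std::optional.

    #include <iostream>
    #include <string>
    #include "absl/types/optional.h"  // backport; aliases std::optional when available

    absl::optional<std::string> FindUser(int id) {
        if (id == 42) return std::string("alice");
        return absl::nullopt;  // same vocabulary as std::nullopt
    }

    int main() {
        if (auto user = FindUser(42)) {
            std::cout << "found: " << *user << "\n";
        } else {
            std::cout << "not found\n";
        }
        return 0;
    }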

What are some alternatives?

When comparing parallel-hashmap and abseil-cpp you can also consider the following projects:

Folly - An open-source C++ library developed and used at Facebook.

robin-hood-hashing - Fast & memory efficient hashtable based on robin hood hashing for C++11/14/17/20

Boost - Super-project for modularized Boost

libcuckoo - A high-performance, concurrent hash table

spdlog - Fast C++ logging library.

rust-phf - Compile time static maps for Rust

Qt - Qt Base (Core, Gui, Widgets, Network, ...)

flat_hash_map - A very fast hashtable

EASTL - Obsolete repo, please go to: https://github.com/electronicarts/EASTL

tracy - Frame profiler

BDE - Basic Development Environment - a set of foundational C++ libraries used at Bloomberg.