robin-hood-hashing VS hashtable-benchmarks

Compare robin-hood-hashing vs hashtable-benchmarks and see what their differences are.

robin-hood-hashing

Fast & memory efficient hashtable based on robin hood hashing for C++11/14/17/20 (by martinus)

hashtable-benchmarks

An Evaluation of Linear Probing Hashtable Algorithms (by senderista)
                  robin-hood-hashing    hashtable-benchmarks
Mentions          23                    8
Stars             1,465                 29
Growth            -                     -
Activity          0.0                   4.7
Last commit       12 months ago         5 months ago
Language          C++                   Java
License           MIT License           Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

robin-hood-hashing

Posts with mentions or reviews of robin-hood-hashing. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-10.
  • Factor is faster than Zig
    11 projects | news.ycombinator.com | 10 Nov 2023
    In my example the table stores the hash codes themselves instead of the keys (because the hash function is invertible)

    Oh, I see, right. If determining the home bucket is trivial, then the back-shifting method is great. The issue is just that it’s not as much of a general-purpose solution as it may initially seem.

    “With a different algorithm (Robin Hood or bidirectional linear probing), the load factor can be kept well over 90% with good performance, as the benchmarks in the same repo demonstrate.”

    I’ve seen the 90% claim made several times in literature on Robin Hood hash tables. In my experience, the claim is a bit exaggerated, although I suppose it depends on what our idea of “good performance” is. See these benchmarks, which again go up to a maximum load factor of 0.95 (although Boost and Absl forcibly grow/rehash at 0.85-0.9):

    https://strong-starlight-4ea0ed.netlify.app/

    Tsl, Martinus, and CC are all Robin Hood tables (https://github.com/Tessil/robin-map, https://github.com/martinus/robin-hood-hashing, and https://github.com/JacksonAllan/CC, respectively). Absl and Boost are the well-known SIMD-based hash tables. Khash (https://github.com/attractivechaos/klib/blob/master/khash.h) is, I think, an ordinary open-addressing table using quadratic probing. Fastmap is a new, yet-to-be-published design that is fundamentally similar to bytell (https://www.youtube.com/watch?v=M2fKMP47slQ) but also incorporates some aspects of the aforementioned SIMD maps (it caches a 4-bit fragment of the hash code to avoid most key comparisons).

    As you can see, all the Robin Hood maps spike upwards dramatically as the load factor gets high, becoming as much as 5-6 times slower at 0.95 vs 0.5 in one of the benchmarks (uint64_t key, 256-bit struct value: Total time to erase 1000 existing elements with N elements in map). Only the SIMD maps (with Boost being the better performer) and Fastmap appear mostly immune to load factor in all benchmarks, although the SIMD maps do - I believe - use tombstones for deletion.

    I’ve only read briefly about bi-directional linear probing – never experimented with it.
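
    For readers unfamiliar with the scheme discussed above, here is a minimal illustrative sketch of the core Robin Hood insertion and lookup rule. It is not taken from robin-hood-hashing or any other library mentioned; growth/rehashing and error handling are omitted, and all names are illustrative.

        // Illustrative only: the Robin Hood rule in a linear-probing table.
        // When the element being inserted has probed farther than the resident
        // element, the two swap ("rob the rich"), which evens out probe lengths.
        #include <cstddef>
        #include <cstdint>
        #include <utility>
        #include <vector>

        struct Slot {
            uint64_t key  = 0;
            uint32_t dist = 0;   // probe distance from home bucket + 1; 0 means empty
        };

        class RobinHoodSet {
        public:
            explicit RobinHoodSet(std::size_t capacity_pow2) : slots_(capacity_pow2) {}

            void insert(uint64_t key) {
                const std::size_t mask = slots_.size() - 1;
                std::size_t idx = hash(key) & mask;
                uint32_t dist = 1;                        // distance of the element being placed
                for (;;) {
                    Slot& s = slots_[idx];
                    if (s.dist == 0) { s = {key, dist}; return; }  // empty: place it here
                    if (s.key == key) return;                      // already present
                    if (s.dist < dist) {                  // resident is "richer": steal its slot
                        std::swap(s.key, key);            // and keep probing with the displaced
                        std::swap(s.dist, dist);          // element
                    }
                    idx = (idx + 1) & mask;
                    ++dist;                               // probe distances grow with load factor,
                }                                         // which is why very high load hurts
            }

            bool contains(uint64_t key) const {
                const std::size_t mask = slots_.size() - 1;
                std::size_t idx = hash(key) & mask;
                uint32_t dist = 1;
                for (;;) {
                    const Slot& s = slots_[idx];
                    if (s.dist == 0 || s.dist < dist) return false;  // key would have displaced it
                    if (s.key == key) return true;
                    idx = (idx + 1) & mask;
                    ++dist;
                }
            }

        private:
            static uint64_t hash(uint64_t x) {            // any decent mixer works; illustrative
                x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
                return x;
            }
            std::vector<Slot> slots_;
        };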

  • If this isn't the perfect data structure, why?
    3 projects | /r/C_Programming | 22 Oct 2023
    From your other comments, it seems like your knowledge of hash tables might be limited to closed-addressing/separate-chaining hash tables. The current frontrunners in high-performance, memory-efficient hash table design all use some form of open addressing, largely to avoid pointer chasing and limit cache misses. In this regard, you want to check out SSE-powered hash tables (such as Abseil, Boost, and Folly/F14), Robin Hood hash tables (such as Martinus and Tessil), or Skarupke (I've recently had a lot of success with a similar design that I will publish here soon and that is destined to replace my own Robin Hood hash tables). Also check out existing research/benchmarks here and here. But be a little bit wary of any benchmarks you look at or perform because there are a lot of factors that influence the result (e.g. benchmarking hash tables at a maximum load factor of 0.5 will produce wildly different results to benchmarking them at a load factor of 0.95, just as benchmarking them with integer key-value pairs will produce different results to benchmarking them with 256-byte key-value pairs). And you need to familiarize yourself with open addressing and different probing strategies (e.g. linear, quadratic) first.
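
    As a quick illustration of the probing strategies mentioned above, the sketch below shows how linear and (triangular-number) quadratic probing generate their probe sequences for a power-of-two table; the function names are illustrative, not taken from any particular library.

        // Illustrative only: probe-sequence generation for open addressing.
        #include <cstddef>
        #include <cstdint>

        // Linear probing: home, home+1, home+2, ... (cache-friendly, but long
        // clusters form at high load factors).
        inline std::size_t linear_probe(uint64_t hash, std::size_t i, std::size_t mask) {
            return static_cast<std::size_t>((hash + i) & mask);
        }

        // Quadratic probing via triangular numbers i*(i+1)/2, which visit every
        // slot of a power-of-two table exactly once (a common practical variant).
        inline std::size_t quadratic_probe(uint64_t hash, std::size_t i, std::size_t mask) {
            return static_cast<std::size_t>((hash + i * (i + 1) / 2) & mask);
        }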
  • boost::unordered standalone
    3 projects | /r/cpp | 9 Jul 2023
    Also, FYI there is robin_hood::unordered_{map,set} which has very high performance, and is header-only and standalone.
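
    Since the library is header-only, a minimal usage sketch looks like the following (assuming robin_hood.h is on the include path; the interface mirrors std::unordered_map, so this is only an illustrative example):

        #include <cstdint>
        #include <iostream>
        #include <string>

        #include <robin_hood.h>   // single header from martinus/robin-hood-hashing

        int main() {
            // Intended as a drop-in replacement for std::unordered_map.
            robin_hood::unordered_map<std::string, uint64_t> counts;
            counts["apple"]  += 1;
            counts["banana"] += 2;

            if (auto it = counts.find("apple"); it != counts.end()) {
                std::cout << it->first << " -> " << it->second << '\n';
            }
        }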
  • Solving “Two Sum” in C with a tiny hash table
    1 project | news.ycombinator.com | 29 Jun 2023
    std::unordered_map is notoriously slow, several times slower than a "proper" hashmap implementation like Google's absl or Martin's robin-hood-hashing [1]. That said, std::sort is not the fastest sort implementation, either. It is hard to say which will win.

    [1]: https://github.com/martinus/robin-hood-hashing
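
    For context, the hash-map approach to Two Sum that the comment weighs against sorting looks roughly like this (a generic sketch; the function name and signature are illustrative, and any of the faster maps mentioned above could replace std::unordered_map):

        // Find two indices whose values sum to `target`, using one pass and a
        // hash map from value to the index of its earlier occurrence.
        #include <cstddef>
        #include <optional>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        std::optional<std::pair<std::size_t, std::size_t>>
        two_sum(const std::vector<int>& nums, int target) {
            std::unordered_map<int, std::size_t> seen;
            for (std::size_t i = 0; i < nums.size(); ++i) {
                if (auto it = seen.find(target - nums[i]); it != seen.end()) {
                    return std::make_pair(it->second, i);
                }
                seen.emplace(nums[i], i);
            }
            return std::nullopt;
        }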

  • Convenient Containers v1.0.3: Better compile speed, faster maps and sets
    4 projects | /r/C_Programming | 3 May 2023
    The main advantage of the latest version is that it reduces build time by about 53% (GCC 12.1), based on the comprehensive test suite found in unit_tests.c. This improvement is significant because compile time was previously a drawback of this library, with maps and sets, in particular, compiling slower than their C++ template-based counterparts. I achieved it by refactoring the library to do less work inside API macros and, in particular, use fewer _Generic statements, which seem to be a compile-speed bottleneck. A nice side effect of the refactor is that the library can now more easily be extended with the planned dynamic strings and ordered maps and sets. The other major improvement concerns the performance of maps and sets. Here are some interactive benchmarks[1] comparing CC’s maps to two popular implementations of Robin Hood hash maps in C++ (as well as std::unordered_map as a baseline). They show that CC maps perform roughly on par with those implementations.
  • Effortless Performance Improvements in C++: std::unordered_map
    4 projects | news.ycombinator.com | 2 Mar 2023
    For anyone in a situation where a set/map (or unordered versions) is in a hot part of the code, I'd also highly recommend Robin Hood: https://github.com/martinus/robin-hood-hashing

    It made a huge difference in one of the programs I was running.

  • Inside boost::unordered_flat_map
    11 projects | /r/cpp | 18 Nov 2022
  • What are some cool modern libraries you enjoy using?
    32 projects | /r/cpp | 18 Sep 2022
    Oh my bad. Still, though -- your name... it looks very familiar to me. Are you the robin_hood hashing guy perhaps? Yes you are! My bad -- https://github.com/martinus/robin-hood-hashing.
  • Performance comparison: counting words in Python, C/C++, Awk, Rust, and more
    12 projects | news.ycombinator.com | 24 Jul 2022
    Got a bit better C++ version here which uses a couple libraries instead of std:: stuff - https://gist.github.com/jcelerier/74dfd473bccec8f1bd5d78be5a... ; boost, fmt and https://github.com/martinus/robin-hood-hashing

        $ g++ -I robin-hood-hashing/src/include -O2 -flto -std=c++20 -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -lfmt
  • A fast & densely stored hashmap and hashset based on robin-hood backward shift deletion
    5 projects | /r/cpp | 4 Jul 2022
    The implementation is mostly inspired by this comment and lessons learned from my older robin-hood-hashing hashmap.
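
    For readers unfamiliar with the term, here is a minimal illustrative sketch of backward-shift deletion in a Robin Hood / linear-probing table. It is not the implementation of the announced library, just the general idea of shifting entries back instead of leaving tombstones; all names are illustrative.

        // Illustrative only. `slots` must have power-of-two size; `idx` is the
        // slot holding the key to erase; `dist` is probe distance + 1, 0 = empty.
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct Slot {
            uint64_t key  = 0;
            uint32_t dist = 0;
        };

        inline void backward_shift_erase(std::vector<Slot>& slots, std::size_t idx) {
            const std::size_t mask = slots.size() - 1;
            for (;;) {
                const std::size_t next = (idx + 1) & mask;
                // Stop when the next slot is empty (dist == 0) or its occupant is
                // already in its home bucket (dist == 1): it must not move back.
                if (slots[next].dist <= 1) {
                    slots[idx] = Slot{};         // clear the vacated slot
                    return;
                }
                slots[idx] = slots[next];        // shift the occupant one slot back...
                --slots[idx].dist;               // ...so its probe distance shrinks by one
                idx = next;
            }
        }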

hashtable-benchmarks

Posts with mentions or reviews of hashtable-benchmarks. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-20.
  • Building a faster hash table for high performance SQL joins
    3 projects | news.ycombinator.com | 20 Dec 2023
    Since the blog post mentioned a PR to replace linear probing with Robin Hood, I just wanted to mention that I found bidirectional linear probing to outperform Robin Hood across the board in my Java integer set benchmarks:

    https://github.com/senderista/hashtable-benchmarks/blob/mast...

    https://github.com/senderista/hashtable-benchmarks/wiki/64-b...

  • Ask HN: Who wants to be hired? (December 2023)
    26 projects | news.ycombinator.com | 1 Dec 2023
    https://homes.cs.washington.edu/~magda/papers/wang-cidr17.pd...

    I'm most interested in developing high-performance database engines in low-level languages, but open to any challenging systems programming project. I've been working in C++ for the last 3 years, but have written nontrivial projects in Rust and Java as well (e.g., https://github.com/senderista/rotated-array-set, https://github.com/senderista/hashtable-benchmarks). I would enjoy using Rust or Zig on a new project, but I consider the project itself to be much more important than the language it's written in. I am not interested in cryptocurrency, adtech, or fintech projects.

  • Factor is faster than Zig
    11 projects | news.ycombinator.com | 10 Nov 2023
    Thanks for the details on your benchmarks. I would like sometime to extend BLP to a more generic setting; as I said I think any trick used with RH would also work with BLP. I just used an integer set because that's all I needed for my use case and it was easy to implement several different approaches for benchmarking. As you note, it favors use cases where the hash function is cheap (or invertible) and elements are cheap to move around.

    About your question on load factors: no, the benchmarks are measuring exactly what they claim to be. The hash table constructor divides max data size by load factor to get the table size (https://github.com/senderista/hashtable-benchmarks/blob/mast...), and the benchmark code instantiates each hash table for exactly the measured data set size and load factor (https://github.com/senderista/hashtable-benchmarks/blob/mast...).

    I can't explain the peaks around 1M in many of the plots; I didn't investigate them at the time and I don't have time now. It could be a JVM artifact, but I did try to use JMH "best practices", and there's no dynamic memory allocation or GC happening during the benchmark at all. It would be interesting to port these tables to Rust and repeat the measurements with Criterion. For more informative graphs I might try a log-linear approach: divide the intervals between the logarithmically spaced data sizes into a fixed number of subintervals (say 4).
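
    The sizing rule described above amounts to deriving the table capacity from the measured data size and the target load factor; a trivial sketch of that calculation (written in C++ rather than the benchmark's Java, with an illustrative function name):

        // capacity = ceil(n / load_factor), so inserting the full data set
        // fills the table to (approximately) the stated load factor.
        #include <cmath>
        #include <cstddef>

        inline std::size_t table_capacity_for(std::size_t n, double load_factor) {
            return static_cast<std::size_t>(
                std::ceil(static_cast<double>(n) / load_factor));
        }

        // e.g. table_capacity_for(1'000'000, 0.95) == 1'052'632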

  • Inside boost::unordered_flat_map
    11 projects | /r/cpp | 18 Nov 2022
    I think "bidirectional linear probing" is an underrated approach (and much simpler): https://github.com/senderista/hashtable-benchmarks/blob/master/src/main/java/set/int64/BLPLongHashSet.java
  • A fast & densely stored hashmap and hashset based on robin-hood backward shift deletion
    5 projects | /r/cpp | 4 Jul 2022
    I will probably never get around to porting my bidirectional linear probing integer hash set from Java to C++, but I hope someone can try adapting BLP to general C++ hashmaps and hashsets, because it significantly outperforms Robin Hood in my benchmarks.
  • Ask HN: Who wants to be hired? (March 2022)
    14 projects | news.ycombinator.com | 1 Mar 2022
    https://homes.cs.washington.edu/~magda/papers/wang-cidr17.pd...

    I'm most interested in developing high-performance database engines in low-level languages, but open to any challenging systems programming project. I've been working in C++ for the last 2 years, but have written nontrivial projects in Rust and Java as well (e.g., https://github.com/senderista/rotated-array-set, https://github.com/senderista/hashtable-benchmarks). I would enjoy using Rust or Zig on a new project, but I consider the project itself to be much more important than the language it's written in. I am not interested in cryptocurrency, adtech, or fintech projects.

What are some alternatives?

When comparing robin-hood-hashing and hashtable-benchmarks you can also consider the following projects:

parallel-hashmap - A family of header-only, very fast and memory-friendly hashmap and btree containers.

unordered_dense - A fast & densely stored hashmap and hashset based on robin-hood backward shift deletion

STL - MSVC's implementation of the C++ Standard Library.

myria - Myria is a scalable Analytics-as-a-Service platform based on relational algebra.

robin-map - C++ implementation of a fast hash map and hash set using robin hood hashing

js2scheme

xxHash - Extremely fast non-cryptographic hash algorithm

flat_hash_map - A very fast hashtable

C++ Format - A modern formatting library

nafeez.xyz - ⚡ My personal website.

tracy - Frame profiler

Personal-Site-Gourav.io - My personal site & blog made with NextJS, Typescript, Tailwind CSS, MDX, Notion as CMS. Deployed on Vercel : https://gourav.io