-
fph-table
Flash Perfect Hash Table: an implementation of a dynamic perfect hash table, extremely fast for lookup
I believe that when the number of elements is larger than 4 (a rough estimate), an associative linear table won't be faster than ska::flat_hash_map or fph-table with the identity hash function. If you look at the benchmark results, you will find that the average lookup time may well be less than 2 nanoseconds on modern CPUs when the number of items is smaller than one thousand. For these two hash tables, there are only about ten instructions in the critical path of a lookup, which should be faster than a linear search in an associative table, where there are many branches and comparison instructions. However, you should benchmark it yourself to reach a real conclusion; this is just my back-of-the-envelope analysis. By the way, an associative table can be faster if it is implemented with hardware circuits or SIMD instructions.
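To make that analysis concrete, here is a minimal sketch of the two lookup strategies being compared: a linear scan over a small array of key/value pairs versus a hash lookup with an identity hash. std::unordered_map stands in for ska::flat_hash_map / fph-table only to keep the sketch self-contained (the real tables have a much shorter probe sequence), and the IdentityHash and LinearTable names are illustrative, not part of either library.

// Sketch: linear scan vs. hash lookup with an identity hash (C++17).
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

// Identity hash: for integer keys that are already well distributed,
// hashing is just "return the key", which keeps the lookup path very short.
struct IdentityHash {
    size_t operator()(uint64_t key) const noexcept { return key; }
};

// The "associative linear table": every lookup compares keys one by one,
// paying a branch per element.
struct LinearTable {
    std::vector<std::pair<uint64_t, uint64_t>> items;

    std::optional<uint64_t> find(uint64_t key) const {
        for (const auto& [k, v] : items) {
            if (k == key) return v;
        }
        return std::nullopt;
    }
};

int main() {
    LinearTable linear;
    std::unordered_map<uint64_t, uint64_t, IdentityHash> hashed;

    for (uint64_t k = 0; k < 1000; ++k) {
        linear.items.emplace_back(k, k * 2);
        hashed.emplace(k, k * 2);
    }

    // The claim above: beyond a handful of elements the hash lookup
    // (a few instructions, no per-element branch) should win over the
    // linear scan; only a real benchmark on your CPU can confirm it.
    volatile uint64_t sink = 0;
    for (uint64_t k = 0; k < 1000; ++k) {
        sink = sink + *linear.find(k);
        sink = sink + hashed.find(k)->second;
    }
    return static_cast<int>(sink & 1);
}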
-
Google sparsehash would be interesting to see. From what I understand it's the predecessor to the Abseil containers. It would be nice to see a comparison.
-
Also, I'm testing https://github.com/greg7mdp/sparsepp which is based on google's sparsehash
-
AFAIK sparsepp has been dropped entirely in favor of the containers in GTL: https://github.com/greg7mdp/gtl
-
https://github.com/Jiwan/dense_hash_map is also a good flat hash map.
-
https://github.com/mikekazakov/eytzinger should always beat flat_map except for very small maps. That said, for very small maps a simple linear search probably beats everything.
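For readers who haven't seen the layout before, here is a minimal sketch of the idea behind it, not the linked repo's actual API: the sorted data is stored in BFS (heap) order of an implicit binary tree, so the search walks indices 1, 2..3, 4..7, ... and the hot top levels stay in cache, unlike std::lower_bound over the plain sorted array that a flat_map keeps.

// Sketch of an Eytzinger-layout lower_bound (C++20 for <bit>).
#include <bit>
#include <cstddef>
#include <cstdio>
#include <vector>

// Recursively place the sorted input into 1-indexed BFS (heap) order.
static size_t build(const std::vector<int>& sorted, std::vector<int>& eyt,
                    size_t i = 0, size_t k = 1) {
    if (k < eyt.size()) {
        i = build(sorted, eyt, i, 2 * k);
        eyt[k] = sorted[i++];
        i = build(sorted, eyt, i, 2 * k + 1);
    }
    return i;
}

// Branch-free descent; returns the 1-based slot of the first element >= x,
// or 0 if no such element exists.
static size_t lower_bound_eyt(const std::vector<int>& eyt, int x) {
    const size_t n = eyt.size() - 1;
    size_t k = 1;
    while (k <= n)
        k = 2 * k + (eyt[k] < x);          // go left or right without a branch
    return k >> (std::countr_one(k) + 1);  // undo the trailing "right" turns
}

int main() {
    std::vector<int> sorted = {1, 3, 5, 7, 9, 11, 13};
    std::vector<int> eyt(sorted.size() + 1);  // slot 0 is unused
    build(sorted, eyt);

    size_t k = lower_bound_eyt(eyt, 6);
    std::printf("lower_bound(6) -> %d\n", k ? eyt[k] : -1);  // prints 7
    return 0;
}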
-
hashtable-bench
A benchmark for hash tables and hash functions in C++, evaluating them on different data as comprehensively as possible
-
llvm-project
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
I would be interested to see how good https://github.com/llvm/llvm-project/blob/main/llvm/include/llvm/ADT/DenseMap.h is.
Related posts
-
A Fast, Densely Stored Hashmap Based on Robin-Hood Backward Shift Deletion
-
unordered_dense: A Fast & Densely Stored Hashmap And Hashset Based On Robin-Hood Backward Shift Deletion
-
unordered_dense: A fast, densely stored hashmap based on backward shift deletion
-
boost::unordered standalone
-
boost::unordered map is a new king of data structures