CC
bfcpp
CC
-
preprocessor stuff - compile time pasting into other files
With extendible macros, you could achieve the following:
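A minimal, hypothetical sketch of the idea, assuming a simple fixed-slot scheme (the slot macros and the print_int function below are invented for illustration, not any library's real API):

```c
#include <stdio.h>

/* Each slot defaults to a unique dummy type so that unfilled slots can never
   collide with a real type inside _Generic. */
typedef struct { char unused; } unused_slot_1;
typedef struct { char unused; } unused_slot_2;

#define SLOT_1_TYPE unused_slot_1
#define SLOT_1_FN   0
#define SLOT_2_TYPE unused_slot_2
#define SLOT_2_FN   0

static void print_int( int i ) { printf( "int: %d\n", i ); }

/* "Registering" a type means redefining a free slot (a real library would
   automate the slot bookkeeping, e.g. via an include-counting trick). */
#undef  SLOT_1_TYPE
#undef  SLOT_1_FN
#define SLOT_1_TYPE int
#define SLOT_1_FN   print_int

/* The dispatch macro mentions every slot; unregistered slots are harmless
   because their dummy types never match an argument's type. */
#define PRINT( val )          \
  _Generic( (val),            \
    SLOT_1_TYPE: SLOT_1_FN,   \
    SLOT_2_TYPE: SLOT_2_FN    \
  )( val )

int main( void )
{
  PRINT( 42 ); /* Dispatches to print_int, selected at compile time. */
  return 0;
}
```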
-
Factor is faster than Zig
In my example the table stores the hash codes themselves instead of the keys (because the hash function is invertible)
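As a hedged sketch of that idea (illustrative constants and function names, not the commenter's actual code): with a multiplicative hash whose multiplier is odd, the multiplier has a modular inverse mod 2^64, so a bucket can store only the hash code and still recover the original key when needed.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Any odd multiplier is invertible mod 2^64; this one is just illustrative. */
#define HASH_MULT 0x9E3779B97F4A7C15ull

/* Newton's iteration for the inverse of an odd 64-bit integer mod 2^64:
   each step doubles the number of correct low bits. */
static uint64_t mod_inverse( uint64_t a )
{
  uint64_t x = a; /* Correct to 3 bits for any odd a. */
  for( int i = 0; i < 5; ++i )
    x *= 2 - a * x;
  return x;
}

static uint64_t hash_key( uint64_t key )                   { return key * HASH_MULT; }
static uint64_t recover_key( uint64_t hash, uint64_t inv ) { return hash * inv;      }

int main( void )
{
  uint64_t inv  = mod_inverse( HASH_MULT );
  uint64_t key  = 123456789;
  uint64_t hash = hash_key( key );

  /* Because the hash is invertible, a bucket storing only the hash code loses
     no information: the key can always be reconstructed. */
  assert( recover_key( hash, inv ) == key );
  printf( "key %llu -> hash %llx -> key %llu\n",
          (unsigned long long)key, (unsigned long long)hash,
          (unsigned long long)recover_key( hash, inv ) );
  return 0;
}
```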
Oh, I see, right. If determining the home bucket is trivial, then the back-shifting method is great. The issue is just that it’s not as much of a general-purpose solution as it may initially seem.
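For concreteness, here is a minimal sketch of back-shift-style deletion for a plain linear-probing table (a hypothetical layout, not the code of any library discussed here): after emptying a bucket, later entries in the cluster are pulled back into the hole, but only when doing so does not move them cyclically before their home bucket.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 16 /* Power of two so indices can be masked. */

typedef struct
{
  bool     occupied;
  uint64_t key;
} bucket;

static size_t home_bucket( uint64_t key )
{
  return (size_t)( key * 0x9E3779B97F4A7C15ull ) & ( TABLE_SIZE - 1 );
}

static void insert( bucket *table, uint64_t key )
{
  size_t i = home_bucket( key );
  while( table[ i ].occupied ) /* Plain linear probing. */
    i = ( i + 1 ) & ( TABLE_SIZE - 1 );
  table[ i ].occupied = true;
  table[ i ].key = key;
}

static bool find( bucket *table, uint64_t key )
{
  size_t i = home_bucket( key );
  while( table[ i ].occupied )
  {
    if( table[ i ].key == key )
      return true;
    i = ( i + 1 ) & ( TABLE_SIZE - 1 );
  }
  return false;
}

/* Back-shifting deletion: instead of leaving a tombstone, walk forward through
   the cluster and pull entries back into the hole, but only if the move does
   not place an entry cyclically before its home bucket (otherwise lookups for
   it would stop early at the hole). Assumes the table is never completely full. */
static void erase( bucket *table, size_t hole )
{
  table[ hole ].occupied = false;

  size_t i = hole;
  for( ;; )
  {
    i = ( i + 1 ) & ( TABLE_SIZE - 1 );
    if( !table[ i ].occupied )
      return;

    size_t home = home_bucket( table[ i ].key );
    if( ( ( i - home ) & ( TABLE_SIZE - 1 ) ) >= ( ( i - hole ) & ( TABLE_SIZE - 1 ) ) )
    {
      table[ hole ] = table[ i ];
      table[ i ].occupied = false;
      hole = i;
    }
  }
}

int main( void )
{
  bucket table[ TABLE_SIZE ] = { 0 };

  for( uint64_t k = 1; k <= 6; ++k )
    insert( table, k );

  /* Erase key 3, wherever it landed. */
  for( size_t i = 0; i < TABLE_SIZE; ++i )
    if( table[ i ].occupied && table[ i ].key == 3 )
      erase( table, i );

  /* No tombstones, and every other key remains reachable. */
  for( uint64_t k = 1; k <= 6; ++k )
    assert( find( table, k ) == ( k != 3 ) );

  return 0;
}
```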
“With a different algorithm (Robin Hood or bidirectional linear probing), the load factor can be kept well over 90% with good performance, as the benchmarks in the same repo demonstrate.”
I’ve seen the 90% claim made several times in literature on Robin Hood hash tables. In my experience, the claim is a bit exaggerated, although I suppose it depends on what our idea of “good performance” is. See these benchmarks, which again go up to a maximum load factor of 0.95 (although Boost and Absl forcibly grow/rehash at 0.85-0.9):
https://strong-starlight-4ea0ed.netlify.app/
Tsl, Martinus, and CC are all Robin Hood tables (https://github.com/Tessil/robin-map, https://github.com/martinus/robin-hood-hashing, and https://github.com/JacksonAllan/CC, respectively). Absl and Boost are the well-known SIMD-based hash tables. Khash (https://github.com/attractivechaos/klib/blob/master/khash.h) is, I think, an ordinary open-addressing table using quadratic probing. Fastmap is a new, yet-to-be-published design that is fundamentally similar to bytell (https://www.youtube.com/watch?v=M2fKMP47slQ) but also incorporates some aspects of the aforementioned SIMD maps (it caches a 4-bit fragment of the hash code to avoid most key comparisons).
As you can see, all the Robin Hood maps spike upwards dramatically as the load factor gets high, becoming as much as 5-6 times slower at 0.95 vs 0.5 in one of the benchmarks (uint64_t key, 256-bit struct value: Total time to erase 1000 existing elements with N elements in map). Only the SIMD maps (with Boost being the better performer) and Fastmap appear mostly immune to load factor in all benchmarks, although the SIMD maps do - I believe - use tombstones for deletion.
I’ve only read briefly about bi-directional linear probing – never experimented with it.
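A rough sketch of the hash-fragment caching mentioned above (an assumption about how such a scheme might look, not Fastmap's actual layout): each bucket stores a few bits of the hash code, and the full key comparison only runs when those bits match.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct
{
  bool    occupied;
  uint8_t hash_frag;  /* Low 4 bits of the hash code, cached per bucket. */
  char    key[ 32 ];
  /* ... value ... */
} bucket;

/* Most non-matching buckets are rejected by the cheap fragment comparison,
   so the (potentially expensive) full key comparison runs rarely. */
static bool bucket_matches( const bucket *b, const char *key, uint64_t hash )
{
  if( !b->occupied || b->hash_frag != ( hash & 0xF ) )
    return false;
  return strcmp( b->key, key ) == 0;
}

int main( void )
{
  uint64_t hash = 0xABCDEF; /* Pretend hash of "hello". */
  bucket b = { .occupied = true, .hash_frag = (uint8_t)( hash & 0xF ) };
  strcpy( b.key, "hello" );

  printf( "%d\n", bucket_matches( &b, "hello", hash ) );   /* 1: fragment matches, strcmp confirms. */
  printf( "%d\n", bucket_matches( &b, "world", 0x1234 ) ); /* 0: rejected by the fragment alone.    */
  return 0;
}
```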
-
If this isn't the perfect data structure, why?
From your other comments, it seems like your knowledge of hash tables might be limited to closed-addressing/separate-chaining hash tables. The current frontrunners in high-performance, memory-efficient hash table design all use some form of open addressing, largely to avoid pointer chasing and limit cache misses. In this regard, you want to check out SSE-powered hash tables (such as Abseil, Boost, and Folly/F14), Robin Hood hash tables (such as Martinus and Tessil), or Skarupke's tables (I've recently had a lot of success with a similar design that I will publish here soon and that is destined to replace my own Robin Hood hash tables). Also check out existing research/benchmarks here and here. But be a little wary of any benchmarks you look at or perform, because there are a lot of factors that influence the results (e.g. benchmarking hash tables at a maximum load factor of 0.5 will produce wildly different results to benchmarking them at a load factor of 0.95, just as benchmarking them with integer key-value pairs will produce different results to benchmarking them with 256-byte key-value pairs). And you need to familiarize yourself with open addressing and different probing strategies (e.g. linear, quadratic) first.
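To make the open-addressing point concrete, here is a minimal, hypothetical sketch of lookup under linear probing (quadratic probing would simply grow the step between probes): all entries live in one flat array, so a probe sequence walks adjacent memory instead of chasing per-node pointers.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 64 /* Power of two so the index can be masked cheaply. */

typedef struct
{
  bool     occupied;
  uint64_t key;
  uint64_t value;
} bucket;

/* Linear probing: start at the key's home bucket and walk forward until we
   find the key or an empty bucket. No pointers are followed, so probes stay
   within one contiguous, cache-friendly array. */
static bucket *lookup( bucket *table, uint64_t key, uint64_t hash )
{
  size_t i = hash & ( TABLE_SIZE - 1 );
  size_t step = 1; /* For quadratic probing, grow the step each iteration instead. */

  while( table[ i ].occupied )
  {
    if( table[ i ].key == key )
      return &table[ i ];
    i = ( i + step ) & ( TABLE_SIZE - 1 );
  }

  return NULL; /* Not present (assumes the table is never completely full). */
}

int main( void )
{
  bucket table[ TABLE_SIZE ] = { 0 };
  uint64_t key = 7, hash = key * 0x9E3779B97F4A7C15ull;

  size_t home = hash & ( TABLE_SIZE - 1 );
  table[ home ] = (bucket){ .occupied = true, .key = key, .value = 99 };

  bucket *b = lookup( table, key, hash );
  return b && b->value == 99 ? 0 : 1;
}
```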
- Convenient Containers: A usability-oriented generic container library
-
[Noob Question] How do C programmers get around not having hash maps?
CC (Full disclosure: I authored this one)
-
New C features in GCC 13
If you're using C23 or have typeof (so GCC or Clang), then yet another approach is to define a type that aliases the specified type if it is unique or otherwise becomes a "dummy" type. Here's what that looks like in CC:
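The gist of the trick, sketched here with invented names rather than CC's actual internals: alias each newly listed type to itself when it is unique, or to a throwaway dummy type when it duplicates a type already in the list, so the _Generic association list never contains the same type twice.

```c
#include <stdio.h>

typedef struct { char unused; } dummy_2; /* Stand-in used when the second type duplicates the first. */

/* Suppose int is the first registered type and USER_TYPE_2 is the second. */
#define USER_TYPE_2 double

/* If USER_TYPE_2 is int (a duplicate), the alias collapses to the dummy type;
   otherwise it aliases the real type. typeof is C23, or a GCC/Clang extension. */
typedef typeof( _Generic( (USER_TYPE_2){ 0 },
                          int:     (dummy_2){ 0 },
                          default: (USER_TYPE_2){ 0 } ) ) type_slot_2;

/* The dispatch macro can now list both slots without ever triggering a
   duplicate-association error in _Generic. */
#define TYPE_NAME( val ) _Generic( (val),                 \
                                   int:         "int",    \
                                   type_slot_2: "second", \
                                   default:     "other" )

int main( void )
{
  printf( "%s\n", TYPE_NAME( 1 ) );   /* int    */
  printf( "%s\n", TYPE_NAME( 1.0 ) ); /* second */
  return 0;
}
```

If USER_TYPE_2 were also int, type_slot_2 would silently become dummy_2, the _Generic list would still contain no duplicate types, and the dispatch would keep compiling.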
-
Convenient Containers v1.0.3: Better compile speed, faster maps and sets
I’d like to share version 1.0.3 of Convenient Containers (CC), my generic container library. The library was previously discussed here and here. As explained elsewhere,
-
Popular Data Structure Libraries in C ?
Convenient Containers (CC) - I'm the author of this one.
-
So what's the best data structures and algorithms library for C?
"Using macros" is a broad description that covers multiple paradigms. There are libraries that use macros in combination with typed pointers and functions that take void* parameters to provide some degree of API genericity and type safety at the same time (e.g. stb_ds and, as you mentioned, my own CC). There are libraries that use macros (or #include directives) to manually instantiate templates (e.g. STC, M*LIB, and Pottery). And then there are libraries that are implemented entirely or almost entirely as macros (e.g. uthash).
-
How do you deal with the extra verbosity of C?
Shameless plug: Take a look at my library Convenient Containers, which solves this exact problem within the (narrow) domain of data structures.
bfcpp
-
Better C Generics: The Extendible _Generic
The preprocessor is actually quite fast, as long as you are just doing primitive replacement. I benchmarked my preprocessor brainfuck interpreter (without optimizations) against a constexpr brainfuck interpreter (without optimizations), and it beat constexpr for interpreting smaller programs. isort4, for example, is a brainfuck program that does insertion sort on 45 inputs, and the preprocessor implementation was more than twice as fast as the constexpr one. Larger programs are slower to interpret with the preprocessor, because it always needs to copy the entire program code.
-
Conditional preprocessor macro, anyone?
PS: I wrote a bit of an explanation for my preprocessor brainfuck interpreter, maybe you can learn a few tricks from that: https://github.com/camel-cdr/bfcpp/blob/main/TUTORIAL.md
-
What’s the most “abusive” code you’ve ever written?
Since you mentioned macros: I wrote a brainfuck interpreter using only the preprocessor, so it interprets brainfuck at compile time. The entire thing is a huge abuse of macros: https://github.com/camel-cdr/bfcpp
- I wrote an optimizing brainfuck interpreter using only the C Preprocessor, here is how
- Who needs C++? C preprocessor meta programming is the future.
- Show HN: Optimizing brainfuck interpreter using only the C preprocessor
- Let's write an optimizing Brainfuck interpreter using only the C Preprocessor
What are some alternatives?
rust-bindgen - Automatically generates Rust FFI bindings to C (and some C++) libraries.
bfcc - BrainFuck Compiler Challenge
mlib - Library of generic and type safe containers in pure C language (C99 or C11) for a wide collection of container (comparable to the C++ STL).
space-nerds-in-space - Multi-player spaceship bridge simulator. Captain your starship through adventures with your friends. See https://smcameron.github.io/space-nerds-in-space
stent - Completely avoid dangling pointers in C.
nim-brainfuck - A brainfuck interpreter and compiler written in Nim
SDS - Simple Dynamic Strings library for C
gpp - GPP, a generic preprocessor
Generic-Data-Structures - A set of Data Structures for the C programming language
i-use-arch-btw - "I use Arch btw" but it's a Turing-complete programming language.
stb - stb single-file public domain libraries for C/C++
metalang99 - Full-blown preprocessor metaprogramming