Klib VS pottery

Compare Klib vs pottery and see what are their differences.

pottery

Pottery - A container and algorithm template library in C (by ludocode)
                 Klib           pottery
Mentions         23             14
Stars            4,010          119
Growth           -              -
Activity         4.3            1.8
Last commit      2 days ago     about 2 years ago
Language         C              C
License          MIT License    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Klib

Posts with mentions or reviews of Klib. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-10.
  • Factor is faster than Zig
    11 projects | news.ycombinator.com | 10 Nov 2023
    “In my example the table stores the hash codes themselves instead of the keys (because the hash function is invertible)”

    Oh, I see, right. If determining the home bucket is trivial, then the back-shifting method is great. The issue is just that it’s not as much of a general-purpose solution as it may initially seem.
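
    A minimal sketch of the back-shifting deletion being described, for a plain linear-probing table that stores the (invertible) hash codes directly. Illustrative only, with no resizing or full-table guard; this is not code from khash or any of the libraries discussed here:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Tiny linear-probing table: 64-bit keys stored directly, 0 = empty.
           CAP is a power of two so (key & (CAP - 1)) gives the home bucket. */
        #define CAP 16
        static uint64_t slots[CAP];

        static size_t home(uint64_t key) { return (size_t)(key & (CAP - 1)); }

        static void insert(uint64_t key) {
            size_t i = home(key);
            while (slots[i] != 0) i = (i + 1) & (CAP - 1);   /* linear probe */
            slots[i] = key;
        }

        static void erase(uint64_t key) {
            size_t i = home(key);
            while (slots[i] != key) {                 /* find the key */
                if (slots[i] == 0) return;            /* not present */
                i = (i + 1) & (CAP - 1);
            }
            slots[i] = 0;
            /* Back-shift: pull later entries of the probe chain forward so a
               lookup never hits a premature empty slot. */
            size_t j = i;
            for (;;) {
                j = (j + 1) & (CAP - 1);
                if (slots[j] == 0) break;
                size_t k = home(slots[j]);
                /* If k lies cyclically in (i, j], the entry stays reachable. */
                int reachable = (i <= j) ? (i < k && k <= j) : (k <= j || i < k);
                if (reachable) continue;
                slots[i] = slots[j];                  /* move it into the hole */
                slots[j] = 0;
                i = j;                                /* the hole moves to j */
            }
        }

        int main(void) {
            insert(1); insert(17); insert(33);        /* all collide in bucket 1 */
            erase(17);
            for (size_t i = 0; i < CAP; i++)
                if (slots[i]) printf("slot %zu: %llu\n", i, (unsigned long long)slots[i]);
            return 0;
        }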

    “With a different algorithm (Robin Hood or bidirectional linear probing), the load factor can be kept well over 90% with good performance, as the benchmarks in the same repo demonstrate.”

    I’ve seen the 90% claim made several times in the literature on Robin Hood hash tables. In my experience, the claim is a bit exaggerated, although I suppose it depends on what our idea of “good performance” is. See these benchmarks, which again go up to a maximum load factor of 0.95 (although Boost and Absl forcibly grow/rehash at 0.85-0.9):

    https://strong-starlight-4ea0ed.netlify.app/

    Tsl, Martinus, and CC are all Robin Hood tables (https://github.com/Tessil/robin-map, https://github.com/martinus/robin-hood-hashing, and https://github.com/JacksonAllan/CC, respectively). Absl and Boost are the well-known SIMD-based hash tables. Khash (https://github.com/attractivechaos/klib/blob/master/khash.h) is, I think, an ordinary open-addressing table using quadratic probing. Fastmap is a new, yet-to-be-published design that is fundamentally similar to bytell (https://www.youtube.com/watch?v=M2fKMP47slQ) but also incorporates some aspects of the aforementioned SIMD maps (it caches a 4-bit fragment of the hash code to avoid most key comparisons).

    As you can see, all the Robin Hood maps spike upwards dramatically as the load factor gets high, becoming as much as 5-6 times slower at 0.95 vs 0.5 in one of the benchmarks (uint64_t key, 256-bit struct value: Total time to erase 1000 existing elements with N elements in map). Only the SIMD maps (with Boost being the better performer) and Fastmap appear mostly immune to load factor in all benchmarks, although the SIMD maps do - I believe - use tombstones for deletion.

    I’ve only read briefly about bi-directional linear probing – never experimented with it.
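
    For reference, khash (mentioned above as the plain quadratic-probing table) is instantiated per key/value type with a macro and then used through name-mangled functions. A minimal usage sketch:

        #include <stdio.h>
        #include "khash.h"

        KHASH_MAP_INIT_INT(i2i, int)          /* map with int keys and int values */

        int main(void) {
            int absent;
            khash_t(i2i) *h = kh_init(i2i);

            khint_t k = kh_put(i2i, h, 42, &absent);   /* insert key 42 */
            kh_value(h, k) = 7;

            k = kh_get(i2i, h, 42);                    /* lookup */
            if (k != kh_end(h))
                printf("42 -> %d\n", kh_value(h, k));

            kh_del(i2i, h, k);                         /* delete */
            kh_destroy(i2i, h);
            return 0;
        }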

  • A simple hash table in C
    7 projects | news.ycombinator.com | 13 Jun 2023
  • So what's the best data structures and algorithms library for C?
    8 projects | /r/C_Programming | 15 Mar 2023
    It could be that the cost of the function calls, either directly or via a pointer, is drowned out by the cost of the one or more cache misses inevitably incurred by every hash table lookup. But I don't want to say too much before I've finished my benchmarking project and published the results. So let me just caution against laser-focusing on whether the comparator and hash function are/can be inlined. For example, stb_ds uses a hardcoded hash function that presumably gets inlined, but my benchmarking (again, I'll publish it here in the coming weeks) shows it to be generally a poor performer (in comparison to not just CC, the current version of which doesn't necessarily inline those functions, but also STC, khash, and the C++ Robin Hood hash tables I tested).
  • Generic dynamic array in 60 lines of C
    4 projects | news.ycombinator.com | 28 Feb 2023
    Not an entirely uncommon idea. I've written one.

    There's also a well-known one here, in klib: https://github.com/attractivechaos/klib/blob/master/kvec.h
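
    kvec.h follows the same macro style: the element type is passed to the macros and the vector itself is an ordinary struct. A minimal usage sketch:

        #include <stdio.h>
        #include "kvec.h"

        int main(void) {
            kvec_t(int) v;
            kv_init(v);

            kv_push(int, v, 10);              /* append, growing as needed */
            kv_push(int, v, 20);

            for (size_t i = 0; i < kv_size(v); i++)
                printf("%d\n", kv_A(v, i));   /* unchecked element access */

            kv_destroy(v);
            return 0;
        }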

  • C_dictionary: A simple dynamically typed and sized hashmap in C - feedback welcome
    10 projects | /r/C_Programming | 23 Jan 2023
  • Inside boost::unordered_flat_map
    11 projects | /r/cpp | 18 Nov 2022
  • The New Ghostscript PDF Interpreter
    4 projects | news.ycombinator.com | 31 Jul 2022
    Code reuse is achievable by (mis)using the preprocessor system. It is possible to build a somewhat usable API, even for intrusive data structures (e.g. the Linux kernel and klib [1]).

    I do agree that generics are required for modern programming, but for some, the cost of complexity of modern languages (compared to C) and the importance of compatibility seem to outweigh the benefits.

    [1]: http://attractivechaos.github.io/klib
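
    A minimal sketch of the intrusive pattern being referred to, in the style of the kernel's list_head; the names here are illustrative, not the kernel's actual list.h:

        #include <stddef.h>
        #include <stdio.h>

        /* The link lives inside the element; container_of recovers the element. */
        struct list_node { struct list_node *next; };

        #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

        struct task {
            int id;
            struct list_node link;
        };

        int main(void) {
            struct task a = { 1, { NULL } }, b = { 2, { NULL } };
            a.link.next = &b.link;

            for (struct list_node *n = &a.link; n; n = n->next) {
                struct task *t = container_of(n, struct task, link);
                printf("task %d\n", t->id);
            }
            return 0;
        }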

  • C LIBRARY
    2 projects | /r/C_Programming | 10 Jul 2022
  • boost::unordered map is a new king of data structures
    10 projects | /r/cpp | 30 Jun 2022
    Unordered hash map shootout

    CMAP = https://github.com/tylov/STC
    KMAP = https://github.com/attractivechaos/klib
    PMAP = https://github.com/greg7mdp/parallel-hashmap
    FMAP = https://github.com/skarupke/flat_hash_map
    RMAP = https://github.com/martinus/robin-hood-hashing
    HMAP = https://github.com/Tessil/hopscotch-map
    TMAP = https://github.com/Tessil/robin-map
    UMAP = std::unordered_map

    Usage: shootout [n-million=40 key-bits=25]
    Random keys are in range [0, 2^25). Seed = 1656617916:

    T1: Insert/update random keys:
    KMAP: time: 1.949, size: 15064129, buckets: 33554432, sum: 165525449561381
    CMAP: time: 1.649, size: 15064129, buckets: 22145833, sum: 165525449561381
    PMAP: time: 2.434, size: 15064129, buckets: 33554431, sum: 165525449561381
    FMAP: time: 2.112, size: 15064129, buckets: 33554432, sum: 165525449561381
    RMAP: time: 1.708, size: 15064129, buckets: 33554431, sum: 165525449561381
    HMAP: time: 2.054, size: 15064129, buckets: 33554432, sum: 165525449561381
    TMAP: time: 1.645, size: 15064129, buckets: 33554432, sum: 165525449561381
    UMAP: time: 6.313, size: 15064129, buckets: 31160981, sum: 165525449561381

    T2: Insert sequential keys, then remove them in same order:
    KMAP: time: 1.173, size: 0, buckets: 33554432, erased 20000000
    CMAP: time: 1.651, size: 0, buckets: 33218751, erased 20000000
    PMAP: time: 3.840, size: 0, buckets: 33554431, erased 20000000
    FMAP: time: 1.722, size: 0, buckets: 33554432, erased 20000000
    RMAP: time: 2.359, size: 0, buckets: 33554431, erased 20000000
    HMAP: time: 0.849, size: 0, buckets: 33554432, erased 20000000
    TMAP: time: 0.660, size: 0, buckets: 33554432, erased 20000000
    UMAP: time: 2.138, size: 0, buckets: 31160981, erased 20000000

    T3: Remove random keys:
    KMAP: time: 1.973, size: 0, buckets: 33554432, erased 23367671
    CMAP: time: 2.020, size: 0, buckets: 33218751, erased 23367671
    PMAP: time: 2.940, size: 0, buckets: 33554431, erased 23367671
    FMAP: time: 1.147, size: 0, buckets: 33554432, erased 23367671
    RMAP: time: 1.941, size: 0, buckets: 33554431, erased 23367671
    HMAP: time: 1.135, size: 0, buckets: 33554432, erased 23367671
    TMAP: time: 1.064, size: 0, buckets: 33554432, erased 23367671
    UMAP: time: 5.632, size: 0, buckets: 31160981, erased 23367671

    T4: Iterate random keys:
    KMAP: time: 0.748, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    CMAP: time: 0.627, size: 23367671, buckets: 33218751, repeats: 8, sum: 4465059465719680
    PMAP: time: 0.680, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
    FMAP: time: 0.735, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    RMAP: time: 0.464, size: 23367671, buckets: 33554431, repeats: 8, sum: 4465059465719680
    HMAP: time: 0.719, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    TMAP: time: 0.662, size: 23367671, buckets: 33554432, repeats: 8, sum: 4465059465719680
    UMAP: time: 6.168, size: 23367671, buckets: 31160981, repeats: 8, sum: 4465059465719680

    T5: Lookup random keys:
    KMAP: time: 0.943, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    CMAP: time: 0.863, size: 23367671, buckets: 33218751, lookups: 34235332, found: 29040438
    PMAP: time: 1.635, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
    FMAP: time: 0.969, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    RMAP: time: 1.705, size: 23367671, buckets: 33554431, lookups: 34235332, found: 29040438
    HMAP: time: 0.712, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    TMAP: time: 0.584, size: 23367671, buckets: 33554432, lookups: 34235332, found: 29040438
    UMAP: time: 1.974, size: 23367671, buckets: 31160981, lookups: 34235332, found: 29040438
  • C++ containers but in C
    8 projects | /r/C_Programming | 8 Mar 2022

pottery

Posts with mentions or reviews of pottery. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-22.
  • Popular Data Structure Libraries in C ?
    13 projects | /r/C_Programming | 22 Mar 2023
    Pottery - The page for open hash map reads "Documentation still needs to be written. In the meantime check out the examples."
  • So what's the best data structures and algorithms library for C?
    8 projects | /r/C_Programming | 15 Mar 2023
    "Using macros" is a broad description that covers multiple paradigms. There are libraries that use macros in combination with typed pointers and functions that take void* parameters to provide some degree of API genericity and type safety at the same time (e.g. stb_ds and, as you mentioned, my own CC). There are libraries that use macros (or #include directives) to manually instantiate templates (e.g. STC, M*LIB, and Pottery). And then there are libraries that are implemented entirely or almost entirely as macros (e.g. uthash).
  • Better C Generics: The Extendible _Generic
    9 projects | /r/C_Programming | 28 Jan 2023
    The prototype of CC used this mechanism to provide a generic API for types instantiated via templates (so basically like other container libraries, but with an extendible-_Generic-based API laid over the top of the generated types). This approach has some significant advantages over the approach CC now uses, but I got a bit obsessed with eliminating the need to manually instantiate templates.
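
    The underlying mechanism, stripped of the "extendible" part, is just _Generic selecting a per-type function at compile time. A bare-bones sketch with hypothetical names, not CC's actual API:

        #include <stddef.h>
        #include <stdio.h>

        /* Two separately instantiated container types... */
        typedef struct { int    data[8]; size_t len; } vec_int;
        typedef struct { double data[8]; size_t len; } vec_double;

        static void vec_int_push(vec_int *v, int x)          { v->data[v->len++] = x; }
        static void vec_double_push(vec_double *v, double x) { v->data[v->len++] = x; }

        /* ...and one generic front end that dispatches on the pointer type. */
        #define vec_push(v, x) _Generic((v),             \
                vec_int *:    vec_int_push,              \
                vec_double *: vec_double_push)(v, x)

        int main(void) {
            vec_int a = { {0}, 0 };
            vec_double b = { {0}, 0 };
            vec_push(&a, 42);        /* dispatches to vec_int_push */
            vec_push(&b, 3.14);      /* dispatches to vec_double_push */
            printf("%d %f\n", a.data[0], b.data[0]);
            return 0;
        }
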
  • C_dictionary: A simple dynamically typed and sized hashmap in C - feedback welcome
    10 projects | /r/C_Programming | 23 Jan 2023
  • Common libraries and data structures for C
    15 projects | news.ycombinator.com | 16 May 2022
    I think it's common for C programmers to roll their own. I did the same [0].

    I went pretty deep into composable C templates to build mine so it's more powerful than most. The containers can handle non-bitwise-movable types with full C++-style lifecycle functions and such, and the sort algorithms can handle dynamic and non-contiguous arrays (they are powerful enough to implement qsort() [1], which is more than I can say for any other C sort templates I've seen.) My reasoning for the complexity at the time was that any powerful container library is going to be reasonably complex in implementation (as anyone who's looked at STL source code knows), so it just needs to be encapsulated behind a good interface.

    I'm not so sure that's true anymore. These sorts of simpler libraries like the one linked here definitely seem to be more popular among C programmers. I think if people are using C, it's not just the C++ language complexity they want to get away from, but also the implementation complexity of libraries and such. There's a balance to be had for sure, and I think the balance varies from person to person, which is why no library has emerged as the de facto standard for containers in C.

    [0]: https://github.com/ludocode/pottery

  • C++ containers but in C
    8 projects | /r/C_Programming | 8 Mar 2022
  • Pottery – A pure C, include-only, type-safe, algorithm template library
    1 project | news.ycombinator.com | 23 Nov 2021
  • Ask HN: What you up to? (Who doesn't want to be hired?)
    25 projects | news.ycombinator.com | 1 Nov 2021
  • Type-safe generic data structures in C
    6 projects | news.ycombinator.com | 8 Apr 2021
    Yes! The include style of templates in C is way better than the old way of huge macros to instantiate code. The template code can look mostly like idiomatic C, it interacts way better with a debugger, it gives better compiler errors... everything about it is better and it's finally starting to become more popular.

    I've open sourced my own C template library here:

    https://github.com/ludocode/pottery

    Not only does it use the #include style of templates, but it actually makes the templates composable. It takes this idea pretty far, for example having a lifecycle template that lets you define operations on your type like move, copy, destroy, etc. This way the containers can fully manage the lifecycles of your types even if they're not bitwise movable.
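
    A condensed sketch of what the #include style looks like. The section between the markers would normally live in its own header and be included once per element type; the macro names are made up for illustration, not Pottery's actual ones:

        #include <stdio.h>
        #include <stdlib.h>

        #define VEC_NAME int_vec      /* what the including file would define */
        #define VEC_TYPE int

        /* ---- begin "vec_template.h" ---- */
        #define VEC_CAT_(a, b) a##_##b
        #define VEC_CAT(a, b)  VEC_CAT_(a, b)
        #define VEC_FN(fn)     VEC_CAT(VEC_NAME, fn)

        typedef struct { VEC_TYPE *data; size_t len, cap; } VEC_NAME;

        static void VEC_FN(push)(VEC_NAME *v, VEC_TYPE x) {
            if (v->len == v->cap) {
                v->cap = v->cap ? v->cap * 2 : 8;
                v->data = realloc(v->data, v->cap * sizeof *v->data);
            }
            v->data[v->len++] = x;    /* ordinary C: steppable in a debugger */
        }

        #undef VEC_NAME
        #undef VEC_TYPE
        #undef VEC_FN
        #undef VEC_CAT
        #undef VEC_CAT_
        /* ---- end "vec_template.h" ---- */

        int main(void) {
            int_vec v = { 0 };
            int_vec_push(&v, 1);
            int_vec_push(&v, 2);
            printf("%d %d\n", v.data[0], v.data[1]);
            free(v.data);
            return 0;
        }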

    There's also this other more popular C template library, one that tries to more directly port C++ templates to C but with a lot less features:

    https://github.com/glouw/ctl/

  • Beating Up on Qsort (2019)
    2 projects | news.ycombinator.com | 14 Jan 2021
    This article doesn't really make it clear but the merge sort discussion is specifically about glibc's implementation of qsort(). glibc's qsort() and Wine's qsort() are the only ones I know of that use merge sort to implement qsort(). Most implementations use quick sort.

    I recently did my own benchmarking on various qsort()s since I was trying to implement a faster one. The various BSDs and macOS qsort() are all faster than glibc at sorting integers and they don't allocate memory:

    https://github.com/ludocode/pottery/tree/master/examples/pot...

    Of course sorting is much faster if you can inline the comparator so a templated sort algorithm is always going to be faster than a function that takes a function pointer. But this does not require C++; it can be done in plain C. The templated intro_sort from Pottery (linked above) is competitive with std::sort, as are the excellent swensort/sort templates:

    https://github.com/swenson/sort
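
    A small illustration of the inlining point: qsort() must call its comparator through a function pointer, while a macro-generated sort sees the comparison directly. Insertion sort is used here purely for brevity; the real template libraries generate far more sophisticated algorithms:

        #include <stdio.h>
        #include <stdlib.h>

        /* Comparator called indirectly by qsort(); hard to inline across libc. */
        static int cmp_int(const void *a, const void *b) {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);
        }

        /* Macro-generated sort: the LESS expression is expanded inline. */
        #define DEFINE_SORT(name, T, LESS)                              \
            static void name(T *a, size_t n) {                          \
                for (size_t i = 1; i < n; i++) {                        \
                    T key = a[i];                                       \
                    size_t j = i;                                       \
                    while (j > 0 && LESS(key, a[j - 1])) {              \
                        a[j] = a[j - 1];                                \
                        j--;                                            \
                    }                                                   \
                    a[j] = key;                                         \
                }                                                       \
            }

        #define INT_LESS(x, y) ((x) < (y))
        DEFINE_SORT(sort_int, int, INT_LESS)

        int main(void) {
            int a[] = {5, 2, 9, 1}, b[] = {5, 2, 9, 1};
            qsort(a, 4, sizeof a[0], cmp_int);
            sort_int(b, 4);
            for (int i = 0; i < 4; i++) printf("%d %d\n", a[i], b[i]);
            return 0;
        }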

What are some alternatives?

When comparing Klib and pottery you can also consider the following projects:

stb - stb single-file public domain libraries for C/C++

mpack - MPack - A C encoder/decoder for the MessagePack serialization format / msgpack.org[C]

Better String - The Better String Library

pdqsort - Pattern-defeating quicksort.

Better Enums - C++ compile-time enum to string, iteration, in a single header file

mavis - opinionated typing library for elixir

ZXing - ZXing ("Zebra Crossing") barcode scanning library for Java, Android

sc - Common libraries and data structures for C.

ZLib - A massively spiffy yet delicately unobtrusive compression library.

ctl - My variant of the C Template Library

HTTP Parser - http request/response parser for c

libc - Raw bindings to platform APIs for Rust