Cello vs CC
| | Cello | CC |
|---|---|---|
| Mentions | 18 | 21 |
| Stars | 6,224 | 91 |
| Growth | - | - |
| Activity | 0.0 | 5.1 |
| Last commit | 7 months ago | 10 days ago |
| Language | C | C |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Cello
- The NSA list of memory-safe programming languages has been updated
-
Object-oriented Programming with ANSI-C [pdf]
Yes, that's C. C macros can take you quite far. Unfortunately, because it's just a bunch of macros, it's quite brittle, like high-level abstractions created with macros in assembly language. You have to do all the checking and reasoning about it yourself, since the compiler cannot.
-
Better C Generics: The Extendible _Generic
It took me a long time to understand, coming from higher level programming, that a lot of exactly that "higher level" is just systematic fat pointer conventions. And because pointers-with-metadata is not a first-class language construct, we invent all these languages that codify a particular fat pointer convention. Cello is an example of what kinds of abstractions can be built on top of a tiny little bit of (non-native) fat pointer convention.
-
OOP in C
There is a lightweight object oriented extension to C called Objective-C [1] that unfortunately never gained much traction outside the NeXT/Apple ecosystem. There is also Cello [2].
-
Ask HN: Modern C Libraries
Regular expressions library to validate information before dumping to rocksdb.
https://www.gnu.org/software/libc/manual/html_node/Regular-E...
Non-critical implementation fun: use Cello [1] for 'gawk'-like functionality in C with C++-style objects/classes.
- What does the ??!??! operator do in C?
-
Is it possible to make C as safe as Rust?
You can achieve fairly decent runtime safety for some types of projects. Check out libcello and my own monster (libstent; lame presentation).
-
Ask HN: I like studying the concept of abstractions
towards Lisp-related data structures / algorithms (aka recursive tree data structures & algorithms).
So, no distinction between metadata vs. structural storage unless noted.
Anything beyond that tends towards master's and upper-level undergraduate material, i.e. reviewing the implementation of a programming language for algorithm & data structure usage per language feature.
For an automata / regular expressions background: Lisp in Small Pieces by Christian Queinnec; https://github.com/aalhour/awesome-compilers; On Lisp by Paul Graham; Let over Lambda by Doug Hoyte; C macros pushed to maximum effect: https://libcello.org/
I left out comparison of languages, transforming from language A to language B, and language implementation, as those discussions tend to assume master's / upper-level undergraduate knowledge.
-
Cake: C23 Front End and Transpiler C23 – C99
With skills like this, would you mind pushing Cello forward? https://github.com/orangeduck/Cello I really like it but I'm not skillful enough to do it myself.
-
Why can't all programming languages be supersets of C?
But we could create a superset of C that has different properties. This is the core idea of DSLs ... See e.g. https://libcello.org/
CC
-
preprocessor stuff - compile-time pasting into other files
With extendible macros, you could achieve the following:
-
Factor is faster than Zig
In my example the table stores the hash codes themselves instead of the keys (because the hash function is invertible)
Oh, I see, right. If determining the home bucket is trivial, then the back-shifting method is great. The issue is just that it’s not as much of a general-purpose solution as it may initially seem.
“With a different algorithm (Robin Hood or bidirectional linear probing), the load factor can be kept well over 90% with good performance, as the benchmarks in the same repo demonstrate.”
I’ve seen the 90% claim made several times in literature on Robin Hood hash tables. In my experience, the claim is a bit exaggerated, although I suppose it depends on what our idea of “good performance” is. See these benchmarks, which again go up to a maximum load factor of 0.95 (Although boost and Absl forcibly grow/rehash at 0.85-0.9):
https://strong-starlight-4ea0ed.netlify.app/
Tsl, Martinus, and CC are all Robin Hood tables (https://github.com/Tessil/robin-map, https://github.com/martinus/robin-hood-hashing, and https://github.com/JacksonAllan/CC, respectively). Absl and Boost are the well-known SIMD-based hash tables. Khash (https://github.com/attractivechaos/klib/blob/master/khash.h) is, I think, an ordinary open-addressing table using quadratic probing. Fastmap is a new, yet-to-be-published design that is fundamentally similar to bytell (https://www.youtube.com/watch?v=M2fKMP47slQ) but also incorporates some aspects of the aforementioned SIMD maps (it caches a 4-bit fragment of the hash code to avoid most key comparisons).
As you can see, all the Robin Hood maps spike upwards dramatically as the load factor gets high, becoming as much as 5-6 times slower at 0.95 vs 0.5 in one of the benchmarks (uint64_t key, 256-bit struct value: Total time to erase 1000 existing elements with N elements in map). Only the SIMD maps (with Boost being the better performer) and Fastmap appear mostly immune to load factor in all benchmarks, although the SIMD maps do - I believe - use tombstones for deletion.
I’ve only read briefly about bi-directional linear probing – never experimented with it.
-
If this isn't the perfect data structure, why?
From your other comments, it seems like your knowledge of hash tables might be limited to closed-addressing/separate-chaining hash tables. The current frontrunners in high-performance, memory-efficient hash table design all use some form of open addressing, largely to avoid pointer chasing and limit cache misses. In this regard, you want to check out SSE-powered hash tables (such as Abseil, Boost, and Folly/F14), Robin Hood hash tables (such as Martinus and Tessil), or Skarupke (I've recently had a lot of success with a similar design that I will publish here soon and that is destined to replace my own Robin Hood hash tables). Also check out existing research/benchmarks here and here. But be a little bit wary of any benchmarks you look at or perform, because there are a lot of factors that influence the results (e.g. benchmarking hash tables at a maximum load factor of 0.5 will produce wildly different results from benchmarking them at a load factor of 0.95, just as benchmarking them with integer key-value pairs will produce different results from benchmarking them with 256-byte key-value pairs). And you need to familiarize yourself with open addressing and different probing strategies (e.g. linear, quadratic) first.
-
[Noob Question] How do C programmers get around not having hash maps?
CC (Full disclosure: I authored this one)
-
New C features in GCC 13
If you're using C23 or have typeof (so GCC or Clang), then yet another approach is to define a type that aliases the specified type if it is unique or otherwise becomes a "dummy" type. Here's what that looks like in CC:
I'm struggling to understand exactly what you mean here, but check out this article and see whether it's relevant, if you didn't already see it when I posted it last year.
-
Convenient Containers v1.0.3: Better compile speed, faster maps and sets
I’d like to share version 1.0.3 of Convenient Containers (CC), my generic container library. The library was previously discussed here and here. As explained elsewhere,
-
Popular Data Structure Libraries in C ?
Convenient Containers (CC) - I'm the author of this one.
-
So what's the best data structures and algorithms library for C?
"Using macros" is a broad description that covers multiple paradigms. There are libraries that use macros in combination with typed pointers and functions that take void* parameters to provide some degree of API genericity and type safety at the same time (e.g. stb_ds and, as you mentioned, my own CC). There are libraries that use macros (or #include directives) to manually instantiate templates (e.g. STC, M*LIB, and Pottery). And then there are libraries that are implemented entirely or almost entirely as macros (e.g. uthash).
What are some alternatives?
rust-bindgen - Automatically generates Rust FFI bindings to C (and some C++) libraries.
glibc - GNU Libc
cfront-3 - self-education and historical research on the C++ compiler cfront v3
vos - Vinix is an effort to write a modern, fast, and useful operating system in the V programming language
mlib - Library of generic and type-safe containers in pure C (C99 or C11), covering a wide collection of containers (comparable to the C++ STL).
metaparse - A library for generating compile time parsers parsing embedded DSL code as part of the C++ compilation process
infer - A static analyzer for Java, C, C++, and Objective-C
v-mode - 🌻 An Emacs major mode for the V programming language.
rust - Empowering everyone to build reliable and efficient software.
SDS - Simple Dynamic Strings library for C
libGimbal - C17-based extended standard library, cross-language type system, and unit testing framework targeting Sega Dreamcast, Sony PSP and PSVita, Windows, Mac, Linux, Android, iOS, and WebAssembly.
CompCert - The CompCert formally-verified C compiler