https://github.com/gaujay/lfjson Your mandatory bi-weekly JSON news is here! JK, but let's cut to the chase: this is actually not a new JSON serializer. I wanted to explore some allocator-related topics and noticed that (almost) all JSON libraries emphasize speed rather than memory usage (as in this article).
But what's the catch, you're thinking? Well, it is a bit slower than its counterparts at deserializing (and marginally faster at serializing). To achieve a smaller footprint, it uses a few tricks, notably a custom hash table to deduplicate strings. This comes at a cost, of course (even with xxHash to speed things up), but keeps the slowdown reasonable (I think).
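The deduplication idea can be illustrated with a minimal string-interning pool. This is only a sketch of the general technique, not lfjson's actual hash table: it uses `std::unordered_set` with the standard hasher where lfjson uses a custom table with xxHash, but the memory effect is the same — repeated keys and values are stored once and shared.

```cpp
#include <string>
#include <string_view>
#include <unordered_set>

// Minimal string-interning pool: each distinct string is stored exactly
// once; interning a duplicate returns a view of the existing copy.
// A parser would intern every key/string it decodes, so a JSON array of
// 10,000 objects with the keys "id" and "name" stores those keys twice,
// not 20,000 times.
class StringPool {
public:
    // Returns a view into pooled storage. Views stay valid because
    // unordered_set nodes are never relocated.
    std::string_view intern(std::string_view s) {
        auto [it, inserted] = pool_.emplace(s);
        return *it;
    }

    std::size_t size() const { return pool_.size(); }

private:
    std::unordered_set<std::string> pool_;
};
```

The trade-off mentioned above is visible here: every decoded string pays a hash-and-lookup before it can be stored, which is exactly where a fast hash like xxHash earns its keep.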
Interesting research. I've been working on a library, Glaze, that reads directly into C++ memory, so the JSON library itself has practically zero heap overhead. You only pay the memory cost of your input buffer and of the C++ structures you populate. For example, if you read into a `std::map`, Glaze is essentially as memory-efficient as the C++ type itself.
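To make the "read directly into C++ memory" idea concrete, here is a hand-rolled sketch (not Glaze's code — Glaze uses compile-time reflection over your types) that parses a flat JSON object of integer values straight into a caller-owned `std::map`, with no intermediate DOM. The only allocations are the map nodes and key strings the result itself needs.

```cpp
#include <cctype>
#include <cstddef>
#include <map>
#include <string>
#include <string_view>

// Parse a flat JSON object of string -> integer pairs directly into the
// caller's map. Returns false on malformed input. Illustrative only:
// no escape sequences, floats, nesting, or detailed error reporting.
bool read_flat_object(std::map<std::string, int>& out, std::string_view json) {
    std::size_t i = 0;
    auto skip_ws = [&] {
        while (i < json.size() && std::isspace((unsigned char)json[i])) ++i;
    };
    skip_ws();
    if (i >= json.size() || json[i] != '{') return false;
    ++i;
    skip_ws();
    if (i < json.size() && json[i] == '}') return true;  // empty object
    while (true) {
        skip_ws();
        if (i >= json.size() || json[i] != '"') return false;
        ++i;
        std::size_t start = i;
        while (i < json.size() && json[i] != '"') ++i;
        if (i >= json.size()) return false;
        std::string key(json.substr(start, i - start));
        ++i;
        skip_ws();
        if (i >= json.size() || json[i] != ':') return false;
        ++i;
        skip_ws();
        std::size_t nstart = i;
        if (i < json.size() && json[i] == '-') ++i;
        while (i < json.size() && std::isdigit((unsigned char)json[i])) ++i;
        if (i == nstart) return false;
        // Value lands directly in the destination map: no DOM node.
        out[std::move(key)] = std::stoi(std::string(json.substr(nstart, i - nstart)));
        skip_ws();
        if (i < json.size() && json[i] == ',') { ++i; continue; }
        if (i < json.size() && json[i] == '}') return true;
        return false;
    }
}
```

The point of the sketch is what it *doesn't* allocate: there is no tree of variant nodes between the buffer and the destination type, which is where most JSON libraries spend their heap.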
I think nlohmann-json tried the 'shrink' route but took a pretty bad performance penalty for it. For my lib I used the same strategy as RapidJSON: deserialize data into a temporary buffer, then mem-copy it to its destination (on reaching the array/object end). This is actually faster than trying to manage the memory holes created by vector growth.
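A minimal sketch of that temp-buffer strategy, assuming `int` elements for simplicity (the real libraries handle arbitrary value types): elements parsed from an array accumulate in a reusable scratch vector, and only when the closing `]` is reached is an exact-sized destination allocated and filled with one bulk copy. Any over-allocation from scratch-vector growth stays in the scratch buffer, where it gets reused, instead of lingering in the final document.

```cpp
#include <cstring>
#include <vector>

// "Temp buffer, then bulk copy" array building: growth slack never
// reaches the final storage, and the scratch capacity is recycled
// across arrays.
struct ArrayBuilder {
    std::vector<int> scratch;  // reused for every array in the document

    void push(int v) { scratch.push_back(v); }  // called per parsed element

    // Called on ']': move the finished array into exact-sized storage.
    std::vector<int> finish() {
        std::vector<int> dest(scratch.size());  // exact allocation
        if (!scratch.empty()) {
            std::memcpy(dest.data(), scratch.data(),
                        scratch.size() * sizeof(int));
        }
        scratch.clear();  // keeps capacity for the next array
        return dest;
    }
};
```

This is also why it beats in-place shrinking: one memcpy into a right-sized block is cheap, whereas shrinking after the fact means either a reallocation per container or bookkeeping for the holes left behind.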