| | jemalloc | Cap'n Proto |
|---|---|---|
| Mentions | 34 | 66 |
| Stars | 9,046 | 11,180 |
| Growth | 0.8% | 0.8% |
| Activity | 8.3 | 9.2 |
| Last commit | 15 days ago | 7 days ago |
| Language | C | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jemalloc
-
Show HN: Comprehensive inter-process communication (IPC) toolkit in modern C++
- Split up a certain important C++ service into several parts, for various reasons, without adding latency to the request path.
The latter task meant, among other things, communicating large amounts of user data from server application to server application. capnp-encoded structures (sometimes big - but not necessarily) would also need to be transmitted; as would FDs.
The technical answers to these challenges are not necessarily rocket science. FDs can be transmitted via Unix domain socket as "ancillary data"; the POSIX `sendmsg()` API is hairy but usable. Small messages can be transmitted via Unix domain socket, or pipe, or POSIX MQ (etc.). Large blobs of data, though, would not be okay to transmit via those transports: too much copying into and out of kernel buffers is involved, which would add major latency. So we'd have to use shared memory (SHM). Certainly a hairy technology... but again, doable. And as for capnp - well - you "just" code a `MessageBuilder` implementation that allocates segments in SHM, instead of on the regular heap as `capnp::MallocMessageBuilder` does (a minimal sketch of that idea follows below).
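To make that concrete, here is a minimal sketch of a SHM-backed `MessageBuilder`. This is not Flow-IPC's actual implementation: the class name and the naive bump allocation are placeholders, and the region is assumed to be already created and mapped (via `shm_open()`/`ftruncate()`/`mmap()`) in both processes.

```cpp
#include <capnp/message.h>
#include <kj/debug.h>
#include <cstring>

// Hypothetical builder: hands out capnp segments from a SHM region that
// both processes have mapped (at possibly different addresses - capnp
// data is position-independent, so that's fine).
class ShmMessageBuilder final: public capnp::MessageBuilder {
public:
  ShmMessageBuilder(capnp::word* base, size_t capacityWords)
      : base(base), capacityWords(capacityWords) {}

  kj::ArrayPtr<capnp::word> allocateSegment(uint minimumSize) override {
    size_t n = minimumSize < 1024 ? 1024 : minimumSize;  // coarse chunks
    KJ_REQUIRE(used + n <= capacityWords, "SHM arena exhausted");
    capnp::word* seg = base + used;
    used += n;
    memset(seg, 0, n * sizeof(capnp::word));  // capnp requires zeroed segments
    return kj::arrayPtr(seg, n);
  }

private:
  capnp::word* base;     // start of the mmap()ed region
  size_t capacityWords;  // region size, in 8-byte words
  size_t used = 0;       // naive bump allocation; never freed
};
```

The receiving process would map the same region and hand the segments to a `capnp::SegmentArrayMessageReader`, with no copying anywhere in between.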
Thing is, I noticed that various parts of the company had similar needs. I've observed some variation of each of the aforementioned tasks custom-implemented - again, and again, and again. None of these implementations could really be reused anywhere else. Most of them ran into the same problems - none of which is that big a deal on its own, but together (and across projects) it more than adds up. To coders it's annoying. And to the business, it's expensive!
Plus, at least one thing actually proved to be technically quite hard. Sharing (via SHM) a native C++ structure involving STL containers and/or raw pointers: downright tough to achieve in a general way. At least with Boost.interprocess (https://www.boost.org/doc/libs/1_84_0/doc/html/interprocess....) - which is really quite thoughtful - one can accomplish a lot... but even then, there are key limitations, in terms of safety and ease of use/reusability. (I'm being a bit vague here... trying to keep the length under control.)
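To give a taste of the Boost.interprocess side, here is roughly what "accomplishing a lot" looks like there - a vector living entirely in SHM. (The names `demo_shm` and `my_vector` are just for the example.)

```cpp
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>

namespace bip = boost::interprocess;

// Every container needs this allocator plumbing; internally it uses offset
// pointers, since the region may map at different addresses per process.
using ShmAllocator =
    bip::allocator<int, bip::managed_shared_memory::segment_manager>;
using ShmVector = bip::vector<int, ShmAllocator>;

int main() {
  bip::managed_shared_memory shm(bip::open_or_create, "demo_shm", 64 * 1024);
  ShmVector* vec = shm.find_or_construct<ShmVector>("my_vector")(
      shm.get_segment_manager());
  vec->push_back(42);  // visible to any process that opens "demo_shm"
}
```

It works - but note what it does not cover: types holding raw pointers, cleanup if a process dies mid-operation, and so on. Those are the kinds of limitations meant above.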
So, I decided to not just design/code an "IPC thing" for that original key C++ service I was being asked to split... but rather one that could be used as a general toolkit, for any C++ applications. Originally we named it Akamai-IPC, then renamed it Flow-IPC.
As a result of that origin story, Flow-IPC is... hmmm... meat-and-potatoes, pragmatic. It is not a "framework." It does not replace or compete with gRPC. (It can, instead, speed RPC frameworks up by providing the zero-copy transmission substrate.) I hope that it is neither niche nor high-maintenance.
To wit: If you merely want to send some binary-blob messages and/or FDs, it'll do that - and make it easier by letting you set up a single session between the 2 processes, instead of making you worry about socket names and cleanup. (But, that's optional! If you simply want to set up a Unix domain socket yourself, you can.) If you want to add structured messaging, it supports Cap'n Proto - as noted - and right out of the box it'll be zero-copy end-to-end. That is, it'll do all the SHM stuff without a single `shm_open()` or `mmap()` or `ftruncate()` on your part. And if you want to customize how that all works, those layers and concepts are formally available to you. (No need to modify Flow-IPC yourself: just implement certain concepts and plug them in, at compile-time.)
Lastly, for those who want to work with native C++ data directly in SHM, it'll simplify setup/cleanup considerably compared to what's typical. For the original Akamai service in question, we needed to use SHM as intensively as one typically uses the regular heap. So, in particular, Boost.interprocess's two built-in SHM-allocation algorithms were not sufficient; we needed something more industrial-strength. So we adapted jemalloc (https://jemalloc.net/) to work in SHM, and worked that into Flow-IPC as a standard available feature. (jemalloc powers FreeBSD and big parts of Meta.) So jemalloc's anti-fragmentation algorithms, thread caching - all that stuff - will work for our SHM allocations.
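For flavor, a toy version of the general technique, using jemalloc's documented extent-hooks API to back an arena with memory you control. Flow-IPC's real adaptation is far more involved; the bump allocation and names here are purely illustrative, and the SHM region is assumed to be mapped already.

```cpp
#include <jemalloc/jemalloc.h>
#include <cstring>

// A SHM region created/mapped elsewhere (shm_open()/mmap(), page-aligned).
static char* shmBase;
static size_t shmUsed, shmCapacity;

// jemalloc calls this instead of mmap() when the arena needs more memory.
static void* shmExtentAlloc(extent_hooks_t*, void* newAddr, size_t size,
                            size_t alignment, bool* zero, bool* commit,
                            unsigned /*arenaInd*/) {
  size_t off = (shmUsed + alignment - 1) & ~(alignment - 1);
  if (newAddr != nullptr || off + size > shmCapacity) return nullptr;
  shmUsed = off + size;
  void* p = shmBase + off;
  if (*zero) memset(p, 0, size);
  *commit = true;
  return p;
}

static bool shmExtentDalloc(extent_hooks_t*, void*, size_t, bool, unsigned) {
  return true;  // decline: this toy never returns memory to the system
}

// Remaining hooks left null for brevity; a real integration implements them.
static extent_hooks_t shmHooks = {
    shmExtentAlloc, shmExtentDalloc,
    nullptr, nullptr, nullptr, nullptr, nullptr, nullptr, nullptr};

unsigned makeShmArena() {
  unsigned arena;
  size_t sz = sizeof(arena);
  extent_hooks_t* hooks = &shmHooks;
  mallctl("arenas.create", &arena, &sz, &hooks, sizeof(hooks));
  return arena;  // allocate from it via mallocx(size, MALLOCX_ARENA(arena))
}
```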
Having accepted this basic plan - develop a reusable IPC library that handled the above oft-repeated needs - Eddy Chan joined and contributed heavily, especially on the jemalloc aspects. A couple years later we had it ready for internal Akamai use. All throughout we kept it general - not Akamai-specific (and certainly not specific to that original C++ service that started it all off) - and personally I felt it was a very natural candidate for open-source.
To my delight, once I announced it internally, the immediate reaction from higher-up was, "you should open-source it." Not only that, we were given the resources and goodwill to actually do it. I have learned that it's not easy to make something like this presentable publicly, even having developed it with that in mind. (BTW it is about 69k lines of code, 92k lines of comments, excluding the Manual.)
So, that's what happened. We wrote a thing useful for various teams internally at Akamai - and then Akamai decided we should share it with the world. That's how open-source thrives, we figured.
On a personal level, of course it would be gratifying if others found it useful and/or themselves contributed. What a cool feeling that would be! After working with exemplary open-source stuff like capnp, it'd be amazing to offer even a fraction of that usefulness. But, we don't gain from "market share." It really is just there to be useful. So we hope it is!
-
Finding memory leaks in Postgres C code
jemalloc as well has some handy leak / memory profiling abilities: https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Heap-P...
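A minimal way to try that, assuming a jemalloc built with `--enable-prof` (the same option string could instead go in the `MALLOC_CONF` environment variable):

```cpp
#include <cstdlib>

// jemalloc reads this documented global option string at startup.
extern "C" const char* malloc_conf =
    "prof:true,prof_final:true,prof_leak:true,lg_prof_sample:19";

int main() {
  void* leaked = malloc(1024);  // deliberately never freed
  (void)leaked;
  return 0;  // on exit jemalloc dumps a heap profile; inspect with jeprof
}
```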
-
Speed of Rust vs. C
The worst memory performance bug I ever saw turned out to be heap fragmentation in a non-GC system. There are memory allocators that solve this like https://github.com/jemalloc/jemalloc/tree/dev but ... they do it by effectively running a GC at the block level
As soon as you use atomic counters in a multi-threaded system you can wave goodbye to your scalability too!
-
Understanding Mesh Allocator
The linked talk video mentioned they're playing with it in jemalloc and tcmalloc.
I found this https://github.com/jemalloc/jemalloc/issues/1440 but couldn't find anything similar in tcmalloc.
These guys are aware of mesh and compare against it: https://abelay.github.io/6828seminar/papers/maas:llama.pdf
-
Atomics and Concurrency
I think that the point rather was not to use any allocation in critical sections since allocator implementations are not lock-free or wait-free.
https://github.com/jemalloc/jemalloc/blob/dev/src/mutex.c
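In other words (a sketch of the advice, not code from the thread): do the allocation before taking your own lock, so the allocator's internal mutex can't stall every thread contending on yours.

```cpp
#include <mutex>

struct Node { int value; Node* next; };

std::mutex listMutex;
Node* head = nullptr;

void push(int v) {
  // `new` may block on the allocator's internal locks (see mutex.c above),
  // so do it outside our critical section...
  Node* n = new Node{v, nullptr};
  std::lock_guard<std::mutex> guard(listMutex);
  n->next = head;  // ...and keep the locked region down to pointer swaps
  head = n;
}
```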
-
Rust std:fs slower than Python
Be aware `jemalloc` will make you suffer the observability issues of `MADV_FREE`. `htop` will no longer show the truth about how much memory is in use.
* https://github.com/jemalloc/jemalloc/issues/387#issuecomment...
* https://gitlab.haskell.org/ghc/ghc/-/issues/17411
Apparently now `jemalloc` will call `MADV_DONTNEED` 10 seconds after `MADV_FREE`:
-
How does the OS know how much virtual memory is needed?
jemalloc (the default FreeBSD malloc, also used by Rust) http://jemalloc.net/
-
The Overflowing Timeout Error - A Debugging Journey in Memgraph!
Of course, we are not working on one feature at a time, we're doing things in parallel. While working on the timers, we introduced jemalloc into our codebase. After merging the jemalloc changes, tests for the timers started to fail. And what kind of failure? Segmentation faults, of course, what else...
-
Google's OSS-Fuzz expands fuzz-reward program to $30000
https://github.com/jemalloc/jemalloc/issues/2222
Strangely, these bugs were found by the CI of ClickHouse, and not by any of the hundreds of other products using these libraries.
-
My app stop working
2- WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures even without a low-memory condition; see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Cap'n Proto
-
Mysterious Moving Pointers
Yeah I pretty much only use my own alternate container implementations (from KJ[0]), which avoid these footguns, but the result is everyone complains our project is written in Kenton-Language rather than C++ and there's no Stack Overflow for it and we can't hire engineers who know how to write it... oops.
[0] https://github.com/capnproto/capnproto/blob/v2/kjdoc/tour.md
-
Show HN: Comprehensive inter-process communication (IPC) toolkit in modern C++
- may massively reduce the latency involved.
Those sharing Cap'n Proto-encoded data may have particular interest. Cap'n Proto (https://capnproto.org) is fantastic at its core task - in-place serialization with zero-copy - and we wanted to make the IPC (inter-process communication) involving capnp-serialized messages be zero-copy, end-to-end.
That said, we paid equal attention to other varieties of payload; it's not limited to capnp-encoded messages. For example there is painless (<-- I hope!) zero-copy transmission of arbitrary combinations of STL-compliant native C++ data structures.
To help determine whether Flow-IPC is relevant to you we wrote an intro blog post. It works through an example, summarizes the available features, and has some performance results. https://www.linode.com/blog/open-source/flow-ipc-introductio...
Of course there's nothing wrong with going straight to the GitHub link and getting into the README and docs.
Currently Flow-IPC is for Linux. (macOS/ARM64 and Windows support could follow soon, depending on demand/contributions.)
-
Condvars and atomics do not mix
FWIW, my C++ toolkit library, KJ, does the same thing.[0]
But presumably you could still write a condition predicate which looks at things which aren't actually part of the mutex-wrapped structure? Or is the Rust type system able to enforce that the callback can only consider the mutex-wrapped value and values that are constant over the lifetime of the wait? (You need the latter e.g. if you are waiting for the mutex-wrapped value to compare equal to some local variable...)
[0] https://github.com/capnproto/capnproto/blob/e6ad6f919aeb381b...
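For reference, a minimal sketch of the KJ pattern in question - `kj::MutexGuarded<T>::when()` from kj/mutex.h (exact signatures may differ across versions). Note the predicate sees only the guarded value plus captured state that stays constant over the wait:

```cpp
#include <kj/mutex.h>

struct Queue { int depth = 0; };

kj::MutexGuarded<Queue> queue;

void drainWhenReady(int atLeast) {
  // Blocks until the predicate is true, then runs the callback while still
  // holding the lock. `atLeast` is the "constant over the wait" local.
  queue.when(
      [atLeast](const Queue& q) { return q.depth >= atLeast; },
      [](Queue& q) { q.depth = 0; });
}
```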
- Cap'n'Proto: infinitely faster than Protobuf
-
I don’t understand zero copy
The second one is to encode data in such a way that you can read it and operate on it directly from the buffer. You write data in a layout that is the same as, or easily transformed into, the types in memory. To do that you usually need to encode with a known schema, use only fixed-size types so field locations can be computed as offsets into the buffer, and represent pointers as offsets into the encoding. You can look at the Cap'n Proto protocol for instance: https://capnproto.org/
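In Cap'n Proto's C++ API that looks like the sketch below, assuming a hypothetical schema `struct Point { x @0 :Int32; y @1 :Int32; }` compiled to point.capnp.h:

```cpp
#include "point.capnp.h"   // generated from the hypothetical schema above
#include <capnp/serialize.h>

int32_t readX(kj::ArrayPtr<const capnp::word> buf) {
  // No decode/unpack step: the reader validates bounds, then field accessors
  // read directly from `buf` at schema-defined offsets.
  capnp::FlatArrayMessageReader reader(buf);
  return reader.getRoot<Point>().getX();
}
```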
-
OpenTF Renames Itself to OpenTofu
Worked well for Cap'n Proto (the cerealization protocol)! https://capnproto.org/
-
A Critique of the Cap'n Proto Schema Language
With all due respect, you read this completely wrong.
* The very first use case for which Cap'n Proto was designed was to be the protocol that Sandstorm.io used to talk between sandbox and supervisor -- an explicitly adversarial security scenario.
* The documentation explicitly calls out how implementations should manage resource exhaustion problems like deep recursion depth (stack overflow risk).
* The implementation has been fuzz-tested multiple ways, including as part of Google's oss-fuzz.
* When there are security bugs, I issue advisories like this:
https://github.com/capnproto/capnproto/tree/v2/security-advi...
* The primary aim of the entire project is to be a Capability-Based Security RPC protocol.
- Cap'n Proto: serialization/RPC system – core tools and C++ library
-
Sandstorm: Open-source platform for self-hosting web app
I like how they use capability-based security [0] and the Cap'n Proto protocol [1]. This is another technology that has been slow to gain broad adoption, but it has a lot going for it compared to e.g. Protocol Buffers (Cap'n Proto was created by the primary author of Protobuf v2, Kenton Varda).
[0] https://sandstorm.io/how-it-works#capabilities
[1] https://capnproto.org
-
Flatty - flat message buffers with direct mapping to Rust types without packing/unpacking
Related but not Rust-specific: FlatBuffers, Cap'n Proto.
What are some alternatives?
mimalloc - mimalloc is a compact general purpose allocator with excellent performance.
gRPC - The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)
tbb - oneAPI Threading Building Blocks (oneTBB) [Moved to: https://github.com/oneapi-src/oneTBB]
Protobuf - Protocol Buffers - Google's data interchange format
rust-scudo
FlatBuffers - FlatBuffers: Memory Efficient Serialization Library
rpmalloc - Public domain cross platform lock free thread caching 16-byte aligned memory allocator implemented in C
ZeroMQ - ZeroMQ core engine in C++, implements ZMTP/3.1
Hoard - The Hoard Memory Allocator: A Fast, Scalable, and Memory-efficient Malloc for Linux, Windows, and Mac.
Apache Thrift - Apache Thrift
gperftools - Main gperftools repository
MessagePack - MessagePack serializer implementation for Java / msgpack.org[Java]