| | triple-buffer | how-to-exploit-a-double-free |
|---|---|---|
| Mentions | 4 | 13 |
| Stars | 79 | 1,293 |
| Growth | - | - |
| Activity | 6.3 | 0.0 |
| Latest commit | 2 months ago | over 2 years ago |
| Language | Rust | Python |
| License | Mozilla Public License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
triple-buffer
-
A lock-free single element generic queue
Great write up! I believe the colloquial name for this algorithm is a "lock-free triple buffer". Here's an implementation in Rust (I couldn't find any c/c++ examples) that has extremely thorough comments that might help completely wrap your head around the synchronization ordering. Rust uses the same semantics for atomic primitives as C11, so it should be pretty easy to match up with your implementation. I came to the same conclusion as you to solve an issue I had with passing arbitrarily large data between two threads in an RTOS system I was working with at my day job. It was an extremely satisfying moment, realizing the index variable was sufficient to communicate all the needed information between the two threads.
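The "index variable is sufficient" insight from the comment above can be compressed into a small sketch: three buffers plus one atomic word that packs the spare buffer's index together with a dirty flag. This is an illustrative single-producer/single-consumer toy, not the actual `triple-buffer` crate, which is far more careful about memory orderings, cache-line padding, and API details.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Arc;

// Bit 2 of `back` is a dirty flag; bits 0..=1 hold the index (0, 1, or 2)
// of the spare ("back") buffer.
const DIRTY: u8 = 0b100;

struct Shared<T> {
    buffers: [UnsafeCell<T>; 3],
    back: AtomicU8, // dirty flag | index of the spare buffer
}

// Safe because each buffer is only ever accessed by the thread that
// currently owns its index; the atomic `back` word mediates handoffs.
unsafe impl<T: Send> Sync for Shared<T> {}

pub struct Input<T> {
    shared: Arc<Shared<T>>,
    write_idx: u8, // buffer exclusively owned by the writer
}

pub struct Output<T> {
    shared: Arc<Shared<T>>,
    read_idx: u8, // buffer exclusively owned by the reader
}

pub fn triple_buffer<T: Clone + Send>(initial: T) -> (Input<T>, Output<T>) {
    let shared = Arc::new(Shared {
        buffers: [
            UnsafeCell::new(initial.clone()),
            UnsafeCell::new(initial.clone()),
            UnsafeCell::new(initial),
        ],
        back: AtomicU8::new(1), // buffer 1 starts as the spare, not dirty
    });
    (
        Input { shared: shared.clone(), write_idx: 0 },
        Output { shared, read_idx: 2 },
    )
}

impl<T> Input<T> {
    pub fn write(&mut self, value: T) {
        // The writer owns `write_idx` exclusively, so this is a plain write.
        unsafe { *self.shared.buffers[self.write_idx as usize].get() = value; }
        // Publish: swap our buffer with the spare and set the dirty flag.
        // Release semantics make the data visible before the index update.
        let old = self.shared.back.swap(self.write_idx | DIRTY, Ordering::AcqRel);
        self.write_idx = old & !DIRTY;
    }
}

impl<T: Clone> Output<T> {
    pub fn read(&mut self) -> T {
        // If a fresh buffer was published, trade our buffer for it.
        if self.shared.back.load(Ordering::Relaxed) & DIRTY != 0 {
            // Acquire pairs with the writer's release, so the data is visible.
            let old = self.shared.back.swap(self.read_idx, Ordering::AcqRel);
            self.read_idx = old & !DIRTY;
        }
        unsafe { (*self.shared.buffers[self.read_idx as usize].get()).clone() }
    }
}
```

Every piece of cross-thread information (which buffer is spare, and whether it holds unread data) fits in that one atomic `back` word, which is why a single index variable suffices.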
-
Rust Is Hard, Or: The Misery of Mainstream Programming
Rust marks cross-thread shared memory as immutable in the general case, and allows you to define your own shared mutability constructs out of primitives like mutexes, atomics, and UnsafeCell. As a result you don't get rope to hang yourself with by default, but atomic orderings are more than enough rope to devise incorrect synchronizations (especially with more than 2 threads or memory locations). To quote an earlier post of mine:
In terms of shared-memory threading concurrency, Send and Sync, and the distinction between &T and &Mutex&lt;T&gt; and &mut T, were a revelation when I first learned them. It was a principled approach to shared-memory threading, with Send/Sync banning nearly all of the confusing and buggy entangled-state codebases I've seen and continue to see in C++ (much to my frustration and exasperation), and &Mutex&lt;T&gt; providing a cleaner alternative design (there's an excellent article on its design at http://cliffle.com/blog/rust-mutexes/).
My favorite simple concurrent data structure is https://docs.rs/triple_buffer/latest/triple_buffer/struct.Tr.... It beautifully demonstrates how you can achieve principled shared mutability, by defining two "handle" types (living on different threads), each carrying thread-local state (not TLS) and a pointer to shared memory, and only allowing each handle to access shared memory in a particular way. This statically prevents one thread from calling a method intended to run on another thread, or accessing fields local to another thread (since the methods and fields now live on the other handle). It also demonstrates the complexity of reasoning about lock-free algorithms (https://github.com/HadrienG2/triple-buffer/issues/14).
I find that writing C++ code the Rust way eliminates data races practically as effectively as writing Rust code upfront. But C++ makes the Rust style of thread-safe code extra work: there is no Mutex&lt;T&gt; unless you build one yourself, and you have to simulate &(T: Sync) using T const* coupled with mutable atomic/mutex fields. Meanwhile, the happy path of threaded C++ (raw non-Arc pointers to shared mutable memory) leads to pervasive data races caused by missing or incorrect mutex locking or atomic synchronization.
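The "&Mutex&lt;T&gt;" point made above is easy to see in a few lines: a plain shared reference is all a thread needs, because the mutex itself hands out exclusive mutation rights. This is a minimal sketch using only the standard library; `increment_all` and the counts are made up for illustration.

```rust
use std::sync::Mutex;
use std::thread;

// Mutation through a *shared* reference: &Mutex<i32> is Sync, so many
// threads may hold it at once, yet lock() serializes the actual writes.
fn increment_all(counter: &Mutex<i32>, times: i32) {
    for _ in 0..times {
        // lock() yields a MutexGuard, which dereferences to &mut i32.
        *counter.lock().unwrap() += 1;
    }
}

fn run_demo() -> i32 {
    let counter = Mutex::new(0);
    // Scoped threads may borrow `counter` from the enclosing stack frame.
    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| increment_all(&counter, 1000));
        }
    });
    counter.into_inner().unwrap()
}
```

The C++ equivalent requires manually pairing a `std::mutex` with the data it guards and trusting every caller to lock it; in Rust the pairing is in the type, so forgetting the lock is a compile error.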
-
Notes on Concurrency Bugs
In terms of shared-memory threading concurrency, Send and Sync, and the distinction between &T and &Mutex&lt;T&gt; and &mut T, were a revelation when I first learned them. It was a principled approach to shared-memory threading, with Send/Sync banning nearly all of the confusing and buggy entangled-state codebases I've seen and continue to see in C++ (much to my frustration and exasperation), and &Mutex&lt;T&gt; providing a cleaner alternative design (there's an excellent article on its design at http://cliffle.com/blog/rust-mutexes/).
My favorite simple concurrent data structure is https://docs.rs/triple_buffer/latest/triple_buffer/struct.Tr.... It beautifully demonstrates how you can achieve principled shared mutability, by defining two "handle" types (living on different threads), each carrying thread-local state (not TLS) and a pointer to shared memory, and only allowing each handle to access shared memory in a particular way. This statically prevents one thread from calling a method intended to run on another thread, or accessing fields local to another thread (since the methods and fields now live on the other handle). It also demonstrates the complexity of reasoning about lock-free algorithms (https://github.com/HadrienG2/triple-buffer/issues/14).
I suppose &/&mut is also a safeguard against event-loop and reentrancy bugs (like https://github.com/quotient-im/Quaternion/issues/702). I don't think Rust solves the general problem of preventing deadlocks within and between processes (which often cross organizational boundaries between projects and distinct codebases, with no clear contract on allowed behavior and which party in a deadlock is at fault), and non-atomicity between processes on a single machine (see my PipeWire criticism at https://news.ycombinator.com/item?id=31519951). File saving is also difficult (https://danluu.com/file-consistency/), though I find that fsync-then-rename works well enough if you don't need to preserve metadata or write through file (not folder) symlinks.
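The fsync-then-rename pattern mentioned above can be sketched briefly. This assumes POSIX rename semantics (atomic replacement within one filesystem); the function name and temp-file naming are made up for illustration, and a fully durable version would also fsync the containing directory.

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Crash-safe(ish) file save: write a temporary file, fsync it, then rename
// it over the target. Readers see either the old contents or the new ones
// in full, never a torn mix of the two.
fn save_atomically(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // fsync: force the data to disk before the rename
    fs::rename(&tmp, path)?; // atomic replacement on POSIX filesystems
    // Note: as the comment above says, this does not preserve the target's
    // metadata and does not write through a symlink to the file itself.
    Ok(())
}
```

The rename is the commit point: a crash before it leaves the old file intact, and a crash after it leaves the fully written new file.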
- A bug that doesn’t exist on x86: Exploiting an ARM-only race condition
how-to-exploit-a-double-free
-
US Cybersecurity: The Urgent Need for Memory Safety in Software Products
No. To exploit modern memory corruption, you most often have to send a shitload of data of significant length to fill up memory strategically and/or supply ROP gadget jump addresses. None of this looks like a real payload.
https://github.com/stong/how-to-exploit-a-double-free
The analogy to firewalls is that you would specify the exact shape of the input required for it to be forwarded to the actual program. For example, if your endpoint receives JSON, you would validate the JSON and check each field value for a valid range, i.e. the min/max number of characters and which character values are allowed for each field. Just like a firewall limits who can talk to whom.
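The field-by-field whitelisting described above might look like the following sketch. The request shape, field names, and limits are entirely hypothetical; the point is that every field is checked against an explicit allowed range and character set before the input reaches the rest of the program.

```rust
// Hypothetical request type standing in for a parsed JSON payload.
struct Request {
    username: String,
    age: u32,
}

// "Firewall-style" validation: reject anything outside the whitelisted
// shape. Limits here are invented for illustration.
fn validate(req: &Request) -> Result<(), &'static str> {
    // Min/max length check on the string field.
    if req.username.len() < 3 || req.username.len() > 32 {
        return Err("username length out of range");
    }
    // Explicit character whitelist, not a blacklist.
    if !req.username.chars().all(|c| c.is_ascii_alphanumeric() || c == '_') {
        return Err("username contains forbidden characters");
    }
    // Numeric range check.
    if req.age < 13 || req.age > 120 {
        return Err("age out of range");
    }
    Ok(())
}
```

Exploit payloads of the kind described in the parent comment (huge strategically sized buffers, sprayed addresses) fail these checks long before they reach any vulnerable parsing code.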
-
A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution
I think what he means by "historically" is before ASLR, DEP, and other mitigations, when a buffer overflow meant you could simply overwrite the return address on the stack, jump to the stack, and run arbitrary shellcode. Mitigations have made exploitation much, much more complex nowadays. See for example https://github.com/stong/how-to-exploit-a-double-free
- How to exploit a double free vulnerability in 2021
- This bug doesn’t exist on x86: Exploiting an ARM-only race condition
What are some alternatives?
bbqueue - A SPSC, lockless, no_std, thread-safe queue, based on BipBuffers
left-right - A lock-free, read-optimized, concurrency primitive.
loom - Concurrency permutation testing tool for Rust.
Ionide-vim - F# Vim plugin based on FsAutoComplete and LSP protocol
wuffs - Wrangling Untrusted File Formats Safely
scrap - 📸 Screen capture made easy!
llvm-project - The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. This fork is used to manage Apple’s stable releases of Clang as well as support the Swift project.
jakt - The Jakt Programming Language
linux - Linux kernel source tree
mun - Source code for the Mun language and runtime.