loom VS triple-buffer

Compare loom vs triple-buffer and see what their differences are.

loom

Concurrency permutation testing tool for Rust. (by tokio-rs)
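
As a rough illustration of what "concurrency permutation testing" means in practice, here is a minimal sketch of a loom test, modeled on the examples in loom's own documentation (the test name and the two-thread increment are illustrative):

```rust
use loom::sync::atomic::AtomicUsize;
use loom::sync::atomic::Ordering::{Acquire, Relaxed, Release};
use loom::sync::Arc;
use loom::thread;

#[test]
fn buggy_concurrent_inc() {
    // loom::model runs the closure once for every thread interleaving
    // (and memory-ordering outcome) it can enumerate, instead of
    // relying on luck at runtime.
    loom::model(|| {
        let num = Arc::new(AtomicUsize::new(0));

        let threads: Vec<_> = (0..2)
            .map(|_| {
                let num = num.clone();
                thread::spawn(move || {
                    // Non-atomic read-modify-write: loom finds the
                    // interleaving where both threads read 0.
                    let curr = num.load(Acquire);
                    num.store(curr + 1, Release);
                })
            })
            .collect();

        for th in threads {
            th.join().unwrap();
        }

        assert_eq!(2, num.load(Relaxed));
    });
}
```

Loom tests are typically gated behind a cfg flag and run with RUSTFLAGS="--cfg loom" cargo test, so the loom types only replace the std ones during testing.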
                loom           triple-buffer
Mentions        14             4
Stars           1,896          79
Growth          3.1%           -
Activity        6.8            6.3
Last Commit     7 days ago     2 months ago
Language        Rust           Rust
License         MIT License    Mozilla Public License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

loom

Posts with mentions or reviews of loom. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-17.
  • Turmoil, a framework for developing and testing distributed systems
    4 projects | news.ycombinator.com | 17 Aug 2023
  • An Introduction to Lockless Algorithms
    3 projects | news.ycombinator.com | 24 Apr 2023
    > Mutexes are very cheap in the uncontended case

    It was a while ago that I was deep into this mess, so forgive any ignorance, but IIRC the thread-mutex dogma[1] has many pitfalls despite being so widely used. Primarily, mutexes are easy to misuse (deadlocks, holding a lock across a suspend point), and they have unpredictable performance because they span so far into compiler, OS, and CPU territory (instruction reordering, cache line invalidation, mode switches, etc.). Also, on Arm it's unclear whether mutexes are as cheap, because of the relaxed memory ordering(?). Finally, code with mutexes is hard to test exhaustively and is prone to heisenbugs.

    Now, many if not most of the above apply to anything with atomics, so lock-free/wait-free won't help either. There's a reason why a lot of concurrency is ~PhD level on the theoretical side, as well as deeply coupled with the gritty realities of hardware/compilers/OS on the engineering side.

    That said, I still think there's room for a slightly expanded concurrency toolbox for mortals. For instance, a well-implemented concurrent queue can be a significant improvement for many workflows, perhaps even with native OS support (io_uring style)? Another exciting example is concurrency permutation test frameworks[2] for atomics, which reorder operations to synthetically trigger rare logical race conditions. I've also personally had great experience with the Golang race detector. I hope we see some convergence on some of this stuff within a few years. Concurrency is still incredibly hard to get right.

    [1]: I say this only because CS degrees have preached mutexes as the silver bullet for decades.

    [2]: https://github.com/tokio-rs/loom

  • Should atomics be unsafe?
    4 projects | /r/rust | 18 Feb 2023
    Of course, atomics are absolutely essential for some of the libraries we take for granted, such as Arc and Tokio. But if you start reading the code and comments and issues and PRs around code like that, you'll see how much work it took to mature them to the point where we can now rely on them. That's why tools like Loom exist.
  • Best tool to find deadlocks (in async code)
    2 projects | /r/rust | 22 Sep 2022
    loom and shuttle can help you narrow down the problem.
  • Does Rust not need extra linting and sanitizing tools like C++?
    11 projects | /r/rust | 28 Aug 2022
    Unless you are writing unsafe code, you generally don't need to use sanitizers. If you do write unsafe code, checking it with a sanitizer would be a great idea. The two most useful tools here, I think, are miri and loom.
  • The Deadlock Empire
    2 projects | news.ycombinator.com | 3 Dec 2021
    https://github.com/tokio-rs/loom perhaps? It also models weak memory reordering, but takes some work to integrate into existing apps.

    For triggering race conditions in compiled binaries, you could try https://robert.ocallahan.org/2016/02/introducing-rr-chaos-mo....

  • What could Go wrong with a mutex? (A Go profiling story)
    2 projects | news.ycombinator.com | 3 Nov 2021
    There is Loom[1] (part of the Tokio project) for exhaustively testing multithreaded code, though as far as I can tell it is designed for debugging threads, not async tasks.

    [1] https://github.com/tokio-rs/loom

  • Cooptex - Deadlock-free Mutexes
    2 projects | /r/rust | 29 Oct 2021
    That tool seems similar to https://github.com/tokio-rs/loom, in that it detects potential locking errors. These are useful during development, but could still miss production cases (as dev never perfectly matches production). This crate is meant to remove the need to worry about deadlocking at all.
  • A bug that doesn’t exist on x86: Exploiting an ARM-only race condition
    6 projects | news.ycombinator.com | 25 Oct 2021
    Rust doesn't catch memory ordering errors, which can result in behavioral bugs in safe Rust, and in data races and memory unsafety in unsafe Rust. But Loom is an excellent tool for catching ordering errors, though its UnsafeCell API differs from std's (and worse yet, some people report that Loom returns false positives/negatives in some cases: https://github.com/tokio-rs/loom/issues/180, possibly https://github.com/tokio-rs/loom/issues/166).
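
To make the API difference concrete: loom's UnsafeCell exposes its pointer through with/with_mut closures rather than std's get(), so crates that want loom coverage usually hide the difference behind a small cfg-gated wrapper. Here is a sketch along the lines of the pattern suggested in loom's documentation (module placement and visibility are illustrative):

```rust
// Use loom's checked cell when compiled with `--cfg loom`,
// and a thin wrapper over std's UnsafeCell otherwise.
#[cfg(loom)]
pub(crate) use loom::cell::UnsafeCell;

#[cfg(not(loom))]
pub(crate) struct UnsafeCell<T>(std::cell::UnsafeCell<T>);

#[cfg(not(loom))]
impl<T> UnsafeCell<T> {
    pub(crate) fn new(data: T) -> UnsafeCell<T> {
        UnsafeCell(std::cell::UnsafeCell::new(data))
    }

    // loom hands out raw pointers through closures so it can track every
    // access during model checking; mirror that shape over std's `get`.
    pub(crate) fn with<R>(&self, f: impl FnOnce(*const T) -> R) -> R {
        f(self.0.get())
    }

    pub(crate) fn with_mut<R>(&self, f: impl FnOnce(*mut T) -> R) -> R {
        f(self.0.get())
    }
}
```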
  • Multicore OCaml: April 2021
    6 projects | news.ycombinator.com | 13 May 2021

triple-buffer

Posts with mentions or reviews of triple-buffer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-02.
  • A lock-free single element generic queue
    1 project | /r/C_Programming | 24 Mar 2023
    Great write-up! I believe the colloquial name for this algorithm is a "lock-free triple buffer". Here's an implementation in Rust (I couldn't find any C/C++ examples) that has extremely thorough comments that might help you completely wrap your head around the synchronization ordering. Rust uses the same semantics for atomic primitives as C11, so it should be pretty easy to match up with your implementation. I came to the same conclusion as you to solve an issue I had with passing arbitrarily large data between two threads in an RTOS system I was working with at my day job. It was an extremely satisfying moment, realizing that the index variable was sufficient to communicate all the needed information between the two threads.
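
For a feel of how that Rust implementation (the triple_buffer crate compared on this page) is used, here is a rough usage sketch based on its documented split into a producer-side Input handle and a consumer-side Output handle; the exact constructor signature may differ between crate versions:

```rust
use std::thread;
use triple_buffer::TripleBuffer;

fn main() {
    // One writer handle and one reader handle over three shared buffers.
    // (Recent versions of the crate take the initial value by reference.)
    let (mut input, mut output) = TripleBuffer::new(&0u64).split();

    let producer = thread::spawn(move || {
        for i in 1..=100u64 {
            // Publishes by swapping the writer's buffer with the shared
            // "back" buffer; never blocks on the reader.
            input.write(i);
        }
    });

    let consumer = thread::spawn(move || {
        // Fetches the most recently published value; intermediate values
        // may be skipped, which is the point of a triple buffer.
        let latest: &u64 = output.read();
        println!("latest value seen: {latest}");
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}
```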
  • Rust Is Hard, Or: The Misery of Mainstream Programming
    15 projects | news.ycombinator.com | 2 Jun 2022
    Rust marks cross-thread shared memory as immutable in the general case, and allows you to define your own shared mutability constructs out of primitives like mutexes, atomics, and UnsafeCell. As a result you don't get rope to hang yourself with by default, but atomic orderings are more than enough rope to devise incorrect synchronizations (especially with more than 2 threads or memory locations). To quote an earlier post of mine:

    In terms of shared-memory threading concurrency, Send and Sync, and the distinction between &T and &Mutex<T> and &mut T, were a revelation when I first learned them. It was a principled approach to shared-memory threading, with Send/Sync banning nearly all of the confusing and buggy entangled-state codebases I've seen and continue to see in C++ (much to my frustration and exasperation), and &Mutex<T> providing a cleaner alternative design (there's an excellent article on its design at http://cliffle.com/blog/rust-mutexes/).

    My favorite simple concurrent data structure is https://docs.rs/triple_buffer/latest/triple_buffer/struct.Tr.... It beautifully demonstrates how you can achieve principled shared mutability, by defining two "handle" types (living on different threads), each carrying thread-local state (not TLS) and a pointer to shared memory, and only allowing each handle to access shared memory in a particular way. This statically prevents one thread from calling a method intended to run on another thread, or accessing fields local to another thread (since the methods and fields now live on the other handle). It also demonstrates the complexity of reasoning about lock-free algorithms (https://github.com/HadrienG2/triple-buffer/issues/14).

    I find that writing C++ code the Rust way eliminates data races practically as effectively as writing Rust code upfront, but C++ makes the Rust way of thread-safe code extra work (there is no Mutex<T> unless you build one yourself, and you have to simulate &(T: Sync) yourself using T const* coupled with mutable atomic/mutex fields). The happy path of threaded C++ (raw non-Arc pointers to shared mutable memory), by contrast, leads to pervasive data races caused by missing or incorrect mutex locking or atomic synchronization.
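
As a tiny illustration of the &T / &Mutex<T> / &mut T distinction the quote leans on, here is a standalone sketch (not from the quoted post): a &Mutex<T> can be shared freely across threads, and exclusive access to the data only exists while the lock guard is held.

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    // The data lives inside the Mutex; &Mutex<u64> is Sync, so every
    // scoped thread may hold a shared reference to it.
    let counter = Mutex::new(0u64);

    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                // &mut-like access exists only through the guard
                // returned by lock().
                *counter.lock().unwrap() += 1;
            });
        }
    });

    assert_eq!(*counter.lock().unwrap(), 4);
}
```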

  • Notes on Concurrency Bugs
    3 projects | news.ycombinator.com | 28 May 2022
    In terms of shared-memory threading concurrency, Send and Sync, and the distinction between &T and &Mutex<T> and &mut T, were a revelation when I first learned them. It was a principled approach to shared-memory threading, with Send/Sync banning nearly all of the confusing and buggy entangled-state codebases I've seen and continue to see in C++ (much to my frustration and exasperation), and &Mutex<T> providing a cleaner alternative design (there's an excellent article on its design at http://cliffle.com/blog/rust-mutexes/).

    My favorite simple concurrent data structure is https://docs.rs/triple_buffer/latest/triple_buffer/struct.Tr.... It beautifully demonstrates how you can achieve principled shared mutability, by defining two "handle" types (living on different threads), each carrying thread-local state (not TLS) and a pointer to shared memory, and only allowing each handle to access shared memory in a particular way. This statically prevents one thread from calling a method intended to run on another thread, or accessing fields local to another thread (since the methods and fields now live on the other handle). It also demonstrates the complexity of reasoning about lock-free algorithms (https://github.com/HadrienG2/triple-buffer/issues/14).

    I suppose &/&mut is also a safeguard against event-loop and reentrancy bugs (like https://github.com/quotient-im/Quaternion/issues/702). I don't think Rust solves the general problem of preventing deadlocks within and between processes (which often cross organizational boundaries between projects and distinct codebases, with no clear contract on allowed behavior or on which party in a deadlock is at fault), or of non-atomicity between processes on a single machine (see my PipeWire criticism at https://news.ycombinator.com/item?id=31519951). File saving is also difficult (https://danluu.com/file-consistency/), though I find that fsync-then-rename works well enough if you don't need to preserve metadata or write through file (not folder) symlinks.
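
A bare-bones sketch of the fsync-then-rename pattern mentioned above (the function name and error handling are illustrative; it assumes a Unix-like filesystem where the temporary file and the target live in the same directory, and it deliberately ignores metadata preservation and symlinks):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

fn save_atomically(target: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = target.with_extension("tmp");

    // 1. Write the complete new contents to a temporary file...
    let mut file = File::create(&tmp)?;
    file.write_all(data)?;
    // ...and fsync it so the data is on disk before the rename.
    file.sync_all()?;

    // 2. rename() atomically replaces the target within one filesystem,
    //    so readers see either the old file or the new one, never a mix.
    fs::rename(&tmp, target)?;

    // 3. fsync the parent directory so the rename itself survives a crash.
    if let Some(dir) = target.parent() {
        File::open(dir)?.sync_all()?;
    }
    Ok(())
}
```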

  • A bug that doesn’t exist on x86: Exploiting an ARM-only race condition
    6 projects | news.ycombinator.com | 25 Oct 2021

What are some alternatives?

When comparing loom and triple-buffer you can also consider the following projects:

eioio - Effects-based direct-style IO for multicore OCaml

bbqueue - A SPSC, lockless, no_std, thread safe, queue, based on BipBuffers

console - a debugger for async rust!

left-right - A lock-free, read-optimized, concurrency primitive.

ocaml-multicore - Multicore OCaml

Ionide-vim - F# Vim plugin based on FsAutoComplete and LSP protocol

shuttle - Shuttle is a library for testing concurrent Rust code

scrap - 📸 Screen capture made easy!

TLAPLUS_DeadlockEmpire - Specs and models for solving the DeadlockEmpire problems using TLA+ and TLC

jakt - The Jakt Programming Language

Rudra - Rust Memory Safety & Undefined Behavior Detection

mun - Source code for the Mun language and runtime.