| | triple-buffer | cats-effect |
|---|---|---|
| Mentions | 4 | 34 |
| Stars | 79 | 1,962 |
| Growth | - | 1.2% |
| Activity | 6.3 | 9.7 |
| Last commit | 2 months ago | 2 days ago |
| Language | Rust | Scala |
| License | Mozilla Public License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
triple-buffer
-
A lock-free single element generic queue
Great write-up! I believe the colloquial name for this algorithm is a "lock-free triple buffer". Here's an implementation in Rust (I couldn't find any C/C++ examples) that has extremely thorough comments that might help you completely wrap your head around the synchronization ordering. Rust uses the same semantics for atomic primitives as C11, so it should be pretty easy to match up with your implementation. I came to the same conclusion as you when solving an issue I had with passing arbitrarily large data between two threads in an RTOS system at my day job. It was an extremely satisfying moment, realizing the index variable was sufficient to communicate all the needed information between the two threads.
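For readers who want the index trick spelled out, here is a minimal std-only sketch of the idea (my own toy, not the linked crate's actual implementation): three slots, with a single shared atomic word carrying both the back-buffer index and a "dirty" bit, so one swap communicates everything between the two threads.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

const DIRTY: usize = 0b100; // high bit on the shared word: "new data available"

struct Shared<T> {
    slots: [UnsafeCell<T>; 3],
    back: AtomicUsize, // back-buffer index (0..=2), possibly OR'd with DIRTY
}
// Safe because the index protocol guarantees the writer and reader
// never touch the same slot at the same time.
unsafe impl<T: Send> Sync for Shared<T> {}

pub struct Writer<T> { shared: Arc<Shared<T>>, write_idx: usize }
pub struct Reader<T> { shared: Arc<Shared<T>>, read_idx: usize }

impl<T> Writer<T> {
    pub fn publish(&mut self, value: T) {
        // The writer has exclusive access to its current slot.
        unsafe { *self.shared.slots[self.write_idx].get() = value; }
        // AcqRel: the release half makes the slot write visible before the
        // index swap; the old back buffer becomes our next write slot.
        let old = self.shared.back.swap(self.write_idx | DIRTY, Ordering::AcqRel);
        self.write_idx = old & !DIRTY;
    }
}

impl<T> Reader<T> {
    pub fn latest(&mut self) -> &T {
        if self.shared.back.load(Ordering::Relaxed) & DIRTY != 0 {
            // The acquire half pairs with the writer's release swap.
            let old = self.shared.back.swap(self.read_idx, Ordering::AcqRel);
            self.read_idx = old & !DIRTY;
        }
        unsafe { &*self.shared.slots[self.read_idx].get() }
    }
}

pub fn triple_buffer<T: Clone>(init: T) -> (Writer<T>, Reader<T>) {
    let shared = Arc::new(Shared {
        slots: [UnsafeCell::new(init.clone()), UnsafeCell::new(init.clone()), UnsafeCell::new(init)],
        back: AtomicUsize::new(1), // {write_idx, read_idx, back} stay a permutation of {0, 1, 2}
    });
    (Writer { shared: Arc::clone(&shared), write_idx: 0 }, Reader { shared, read_idx: 2 })
}

fn main() {
    let (mut writer, mut reader) = triple_buffer(0);
    writer.publish(42);
    assert_eq!(*reader.latest(), 42); // reader always sees the newest publish
}
```

The writer never waits for the reader and vice versa; each side only ever exchanges its own index with the shared back-buffer index, which is why a single atomic word suffices.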
-
Rust Is Hard, Or: The Misery of Mainstream Programming
Rust marks cross-thread shared memory as immutable in the general case, and allows you to define your own shared mutability constructs out of primitives like mutexes, atomics, and UnsafeCell. As a result you don't get rope to hang yourself with by default, but atomic orderings are more than enough rope to devise incorrect synchronizations (especially with more than 2 threads or memory locations). To quote an earlier post of mine:
In terms of shared-memory threading concurrency, Send and Sync, and the distinction between &T and &Mutex<T> and &mut T, were a revelation when I first learned them. It was a principled approach to shared-memory threading, with Send/Sync banning nearly all of the confusing and buggy entangled-state codebases I've seen and continue to see in C++ (much to my frustration and exasperation), and &Mutex<T> providing a cleaner alternative design (there's an excellent article on its design at http://cliffle.com/blog/rust-mutexes/).
My favorite simple concurrent data structure is https://docs.rs/triple_buffer/latest/triple_buffer/struct.Tr.... It beautifully demonstrates how you can achieve principled shared mutability, by defining two "handle" types (living on different threads), each carrying thread-local state (not TLS) and a pointer to shared memory, and only allowing each handle to access shared memory in a particular way. This statically prevents one thread from calling a method intended to run on another thread, or accessing fields local to another thread (since the methods and fields now live on the other handle). It also demonstrates the complexity of reasoning about lock-free algorithms (https://github.com/HadrienG2/triple-buffer/issues/14).
I find that writing C++ code the Rust way eliminates data races practically as effectively as writing Rust code upfront. But C++ makes the Rust way of thread-safe code extra work: there's no Mutex<T> unless you build one yourself, and you have to simulate &(T: Sync) using T const* coupled with mutable atomic/mutex fields. Meanwhile, the happy path of threaded C++ (raw non-Arc pointers to shared mutable memory) leads to pervasive data races caused by missing or incorrect mutex locking or atomic synchronization.
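The &Mutex<T> discipline described above can be sketched with nothing but the standard library; this toy example (my own, not from any linked article) funnels all shared mutation through a lock, with Send/Sync checked by the compiler:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each add `per_thread` to a shared counter.
// Mutation is only possible through the MutexGuard, and the Send/Sync bounds
// on thread::spawn reject any attempt to share an unsynchronized &mut u64.
fn parallel_count(threads: usize, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1; // &Mutex<u64> -> exclusive guard
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // No data race is possible, so the total is always exact.
    assert_eq!(parallel_count(4, 1000), 4000);
}
```

In C++ the equivalent shape (a const shared object with a mutable mutex field) has to be hand-built and hand-audited; here the compiler enforces it.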
-
Notes on Concurrency Bugs
In terms of shared-memory threading concurrency, Send and Sync, and the distinction between &T and &Mutex<T> and &mut T, were a revelation when I first learned them. It was a principled approach to shared-memory threading, with Send/Sync banning nearly all of the confusing and buggy entangled-state codebases I've seen and continue to see in C++ (much to my frustration and exasperation), and &Mutex<T> providing a cleaner alternative design (there's an excellent article on its design at http://cliffle.com/blog/rust-mutexes/).
My favorite simple concurrent data structure is https://docs.rs/triple_buffer/latest/triple_buffer/struct.Tr.... It beautifully demonstrates how you can achieve principled shared mutability, by defining two "handle" types (living on different threads), each carrying thread-local state (not TLS) and a pointer to shared memory, and only allowing each handle to access shared memory in a particular way. This statically prevents one thread from calling a method intended to run on another thread, or accessing fields local to another thread (since the methods and fields now live on the other handle). It also demonstrates the complexity of reasoning about lock-free algorithms (https://github.com/HadrienG2/triple-buffer/issues/14).
I suppose &/&mut is also a safeguard against event-loop and reentrancy bugs (like https://github.com/quotient-im/Quaternion/issues/702). I don't think Rust solves the general problem of preventing deadlocks within and between processes (which often cross organizational boundaries between projects and distinct codebases, with no clear contract on allowed behavior and which party in a deadlock is at fault), and non-atomicity between processes on a single machine (see my PipeWire criticism at https://news.ycombinator.com/item?id=31519951). File saving is also difficult (https://danluu.com/file-consistency/), though I find that fsync-then-rename works well enough if you don't need to preserve metadata or write through file (not folder) symlinks.
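The fsync-then-rename pattern mentioned above can be sketched as follows (a simplified illustration; a production version would also fsync the parent directory and preserve metadata):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Write to a temporary sibling file, fsync it, then rename over the
// destination, so readers never observe a half-written file.
fn save_atomically(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut file = File::create(&tmp)?;
    file.write_all(data)?;
    file.sync_all()?; // fsync: data reaches disk before the rename
    fs::rename(&tmp, path)?; // atomic replacement on POSIX filesystems
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("demo_save.txt");
    save_atomically(&path, b"hello")?;
    assert_eq!(fs::read(&path)?, b"hello");
    save_atomically(&path, b"world")?; // replacing an existing file is atomic too
    assert_eq!(fs::read(&path)?, b"world");
    fs::remove_file(&path)
}
```

As noted, this sacrifices the original file's metadata and breaks writes through file symlinks, which is exactly the trade-off the comment describes.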
-
A bug that doesn’t exist on x86: Exploiting an ARM-only race condition
cats-effect
-
A question about Http4s new major version
Those benchmarks are using a snapshot version of cats-effect. I don't know where that one comes from, but previously they were using a snapshot from https://github.com/typelevel/cats-effect/pull/3332 (3.5-6581dc4), which had some issues (a 70% performance degradation) that have since been resolved (see that PR for more info and comparative benchmarks).
-
The Great Concurrency Smackdown: ZIO versus JDK by John A. De Goes
Recently, CE3 has had similar issues reported across multiple repositories, almost an epidemic of reports!
-
40x Faster! We rewrote our project with Rust!
The one advantage Rust has over Scala is that it detects data races at compile time, and that's a big time saver if you use low level thread synchronization. However, if you write pure FP code with ZIO or Cats Effect that's basically a non-issue anyway.
-
Sequential application of a constructor?
See also cats-effect and fs2. cats-effect gives you the IO monad (and IOApp to run it with on supported platforms). fs2 is the ecosystem’s streaming library, which is much more pervasive in functional Scala than in Haskell. For example, http4s and Doobie are both based on fs2.
-
Should I Move From PHP to Node/Express?
On the contrary, switching to the functional mindset, with something like Typelevel's Scala 3 stack and its cats, cats-effect, and fs2 libraries, helps you rethink a lot of designs and development approaches.
-
Next Steps for Rust in the Kernel
I think "better Haskell on JVM" (in contrast to "worse Haskell") is a good identity for Scala to have. (Please note that this is an intentional hyperbole.)
Of course, there are areas where Haskell is stronger than Scala (hint: modularity, crucial for good Software Engineering, is not one of them). And Scala has its own way of doing things, so just imitating Haskell won't work well.
Examples of this "better Haskell" are https://typelevel.org/cats-effect/ and https://zio.dev/ .
Altogether, Scala may be a better choice for you if you want to do Pure Functional Programming. And it is definitely less risky (runs on the JVM, Java library interop, IntelliJ, easy debugging, etc.).
None of the other languages you mentioned are viable in this sense (if you also want a powerful type system, which rules out Clojure).
I agree that Rust's identity is pretty clear: a modern language for use cases where only C or C++ could have been used before.
-
Java 19 Is Out
I would use Scala. I like FP, and Scala comes with some awesome libraries for concurrent/async programming like Cats Effect or ZIO. It's a good choice for creating modern-style microservices to be run in the cloud (or even macro-services: Scala has a powerful module system, so it's made to handle large codebases).
https://typelevel.org/cats-effect/
https://zio.dev/
The language, the community, and the customs are great. You don't have to worry about nulls, things are immutable by default, and domain modelling with ADTs and pattern matching is pure joy.
The available tooling ranges from good to great, and Scala is big enough that there are good libraries for typical use cases, if not the vast majority of them, with Java libraries as a reliable fallback.
-
Typelevel Native
What caught my interest is this (for both the JVM and future multithreaded Scala Native): https://github.com/typelevel/cats-effect/discussions/3070 Having the same threads poll available IO events and execute callbacks should improve performance greatly.
-
Scala isn't fun anymore
The author is the creator of Monix and implemented the first version of cats-effect. He knows what he is doing.
-
Question about some advanced types
You want Kernmantle, which quite honestly shouldn't be hard to implement around Cats and cats-effect. In particular, although Kernmantle doesn't require the use of the Arrow typeclass, there happen to be Arrow (actually ArrowChoice) instances for both Function1 from the standard library and Kleisli from Cats itself, given a Monad instance for the Kleisli's F[_] type parameter. In other words, we should be able to port Kernmantle from Haskell to Scala (with the Typelevel ecosystem) and instantly be able to use pretty much anything else from the Typelevel ecosystem, or wrapped with it, in our workflow graphs. Pure functions, monadic functions, applicative functions, GADTs with hand-written interpreters, any of it. I think this would be eminently worth doing.
What are some alternatives?
bbqueue - A SPSC, lockless, no_std, thread safe, queue, based on BipBuffers
ZIO - ZIO — A type-safe, composable library for async and concurrent programming in Scala
left-right - A lock-free, read-optimized, concurrency primitive.
FS2 - Compositional, streaming I/O library for Scala
Ionide-vim - F# Vim plugin based on FsAutoComplete and LSP protocol
fs2-grpc - gRPC implementation for FS2/cats-effect
scrap - 📸 Screen capture made easy!
doobie-quill - Integration between Doobie and Quill libraries
jakt - The Jakt Programming Language
Kategory - Λrrow - Functional companion to Kotlin's Standard Library
mun - Source code for the Mun language and runtime.
Slick - Slick (Scala Language Integrated Connection Kit) is a modern database query and access library for Scala