https://github.com/samsquire/ideas5/blob/main/NonblockingRun...
The design has three groupings of thread types. The application starts some application threads that are not associated with any request; these service multi-producer multi-consumer thread-safe ring buffers in lightweight threads, with a Go/Erlang-style lightweight process runtime. (My simple lightweight thread runtime is https://github.com/samsquire/preemptible-thread) We also multiplex multiple network client sockets across a fixed number of kernel threads that I call control threads. Their responsibility is to dispatch work to a work-stealing thread pool, which has its own group of threads, as soon as possible. So we pay a thread-synchronization cost ONCE per IO: the dispatch from the control thread to a thread-pool thread. (Presumably this is fast, because the thread-pool threads are all looping on a submission queue.)
We split all IO and CPU tasks into two halves: submit and handle-reply. I assume you can use liburing or epoll in the control threads; the same goes for CPU tasks, using ring buffers to communicate between threads. We can always serve clients' requests because we're never blocked handling someone else's request. The control thread is always unblocked.
I think this article is good regarding Python's asyncio story:
What about "green threads" that are not managed by the OS, like those in https://tokio.rs?
GCD/libdispatch is a fantastic approach to concurrency, and you can build and install it on non-Apple operating systems:
https://github.com/apple/swift-corelibs-libdispatch
Here’s a simple echo server:
https://github.com/williamcotton/c_playground/blob/master/sr...
Here’s a simple multithreaded database pool:
https://github.com/williamcotton/express-c/blob/master/src/d...
Functional programming can be a great way to handle parallel programming sanely. See the Futhark language [1], for example, which accepts high-level constructs like map and converts them to the appropriate machine code for either the CPU or the GPU.
[1] https://futhark-lang.org/
And in C++ you can also use this dead-simple header file for a nice, high-level, modern thread pool that uses function objects (lambdas) for very easy parallelization of arbitrary tasks: https://github.com/progschj/ThreadPool
https://github.com/Hopac/Hopac is such an impressive piece of software. Too bad it never took off the way it deserved, but with more popular competition like Rx, or plain tasks/async (which is enough for most things), that was probably unavoidable.