| | tokio-uring | liburing |
|---|---|---|
| Mentions | 32 | 32 |
| Stars | 1,182 | 2,971 |
| Growth | 2.5% | 2.5% |
| Activity | 4.4 | 9.8 |
| Latest Commit | 6 months ago | 8 days ago |
| Language | Rust | C |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tokio-uring
- The design of Tokio-uring: Linux io_uring support for Rust
Link should be: https://github.com/tokio-rs/tokio-uring/blob/master/DESIGN.m...
- QUIC Is Not Quick Enough over Fast Internet
- Gazette: Cloud-native millisecond-latency streaming
I feel a bit paralyzed by Fear Of Missing Io_Uring. There's so much awesome streaming stuff about (RisingWave, Materialize, NATS, DataFusion, Velox, many more), but it all feels built on slower legacy system libraries.
It's not heavily used yet, but Rust has a bunch of fairly high-visibility efforts. The situation sort of feels similar to HTTP/3, where the problem is figuring out what to pick:
- https://github.com/tokio-rs/tokio-uring
- https://github.com/bytedance/monoio
- https://github.com/DataDog/glommio
- tokio_fs crate
- Use io_uring for network I/O
While Mio will probably not implement uring in its current design, there's https://github.com/tokio-rs/tokio-uring if you want to use io_uring in Rust.
It's still in development, but the Tokio team seems intent on getting good io_uring support at least!
As the README states, the Rust implementation requires a kernel newer than the one that shipped with Ubuntu 20.04 so I think it'll be a while before we'll see significant development among major libraries.
- Create a data structure for low latency memory management
That's what the pool is for: https://github.com/tokio-rs/tokio-uring/blob/master/src/buf/fixed/pool.rs
- Cloudflare Ditches Nginx for In-House, Rust-Written Pingora
Tokio supports io_uring (https://github.com/tokio-rs/tokio-uring), so perhaps when it's mature and battle-tested, it'd be easier to transition to it if Cloudflare aren't using it already.
- Anyone using io_uring?
- Tokio suffers from a similar problem
- redb 0.4.0: 2x faster commits with 1PC+C instead of 2PC
E.g. via tokio-uring.
- Efficient way to read multiple files in parallel
I strongly recommend looking into io_uring and using async executors that take advantage of it:
- tokio-uring (not recommended, as it is still undergoing development)
- monoio
- glommio
liburing
- What's new with io_uring in 6.11 and 6.12
- Nanolog supports logging with 7 ns median latency
This would work in this specific case, where we know there is a maximum rate at which work is produced. Arguably I was hijacking the thread to discuss a more general problem I've been thinking about for a while: I have the sense that a ring buffer with a wait-free push, some tight bound on latency, and no need for fixed-interval polling on the consumer would be a nice primitive that I certainly could have used at times.
And in fact, the wait-free wakeup part of this is already there. Now that io_uring has futex support, a producer can enable kernel-side busy polling on the uring and then submit a FUTEX_WAKE to the ring without making any syscalls. This GitHub issue [1] has a nice description.
[1] https://github.com/axboe/liburing/issues/385
- What's New with Io_uring in 6.10
- Liburing 2.6 Released
- Io Uring
I've tinkered around with io_uring on and off for the last couple years. But I think it's really becoming quite cool (not that it wasn't cool before... :)). This was a really interesting post on what's new https://github.com/axboe/liburing/wiki/io_uring-and-networki.... The combination of ring-mapped buffers and multi-shot operations has some really interesting applications for high-performance networking. Hoping over the next year or two we can start to see really bleeding edge networking perf without having to resort to using DPDK :)
- Why you should use io_uring for network I/O
Thought I was doing something wrong at first, but after looking at examples and code, I just wasn't able to reach the epoll numbers. Looking at the GitHub page, there are a few issues with people who found the same thing with their own examples. #1, #2
- Use io_uring for network I/O
To address my own silly questions, yes, one should use the new fixed buffers described in this document: https://github.com/axboe/liburing/wiki/io_uring-and-networki...
- The fastest rm command and one of the fastest cp commands
We're working on this! https://github.com/axboe/liburing/issues/830
- axboe / liburing
What are some alternatives?
glommio - Glommio is a thread-per-core crate that makes writing highly parallel asynchronous applications in a thread-per-core architecture easier for rustaceans.
libevent - Event notification library
libuv - Cross-platform asynchronous I/O
monoio - Rust async runtime based on io-uring.
io_uring-echo-server - io_uring echo server
rocket_auth - An implementation for an authentication API for Rocket applications.
eRPC - Efficient RPCs for datacenter networks
diesel_async - Diesel async connection implementation
picohttpparser - tiny HTTP parser written in C (used in HTTP::Parser::XS et al.)
rust-analyzer - A Rust compiler front-end for IDEs [Moved to: https://github.com/rust-lang/rust-analyzer]
openonload - git import of openonload.org https://gist.github.com/majek/ae188ae72e63470652c9