slice_deque

A contiguous-in-memory double-ended queue that derefs into a slice (by gnzlbg)
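
For orientation, here is a minimal usage sketch. It is a sketch only: it assumes the crate's VecDeque-like method names (`new`, `push_back`, `push_front`) and the Deref-to-slice behaviour described above; check the docs of the version you use for the exact API.

```rust
use slice_deque::SliceDeque;

fn main() {
    let mut deq: SliceDeque<u32> = SliceDeque::new();
    deq.push_back(1);
    deq.push_back(2);
    deq.push_front(0);

    // Unlike std's VecDeque, the contents are always one contiguous
    // slice, so the whole queue can be viewed without an
    // `as_slices()` front/back pair.
    let all: &[u32] = &deq[..];
    assert_eq!(all, &[0, 1, 2]);
}
```

The point of the crate is the last two lines: the queue's entire contents are always addressable as a single `&[T]`, even after the internal head has wrapped around.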

Slice_deque Alternatives

Similar projects and alternatives to slice_deque

NOTE: The mention count for each entry on this list combines mentions in common posts with user-suggested alternatives, so a higher number indicates a more frequently recommended or more similar slice_deque alternative.

slice_deque reviews and mentions

Posts with mentions or reviews of slice_deque. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2024-02-29.
  • A lock-free ring-buffer with contiguous reservations (2019)
    9 projects | news.ycombinator.com | 29 Feb 2024
    > It's not an attack on the wording, but on the correctness of your first bullet point. `unsafe` is appropriate for the initialization of a ring buffer in Rust. That's true whether you use `mmap` or stay in "pure" Rust with the allocator API to get the most idiomatic representation (which can't be done in safe or stable Rust). It's not one line. It's also not platform-dependent; the code was the same on macOS, Linux, and Windows the last time I tried it.

    We're not talking about the same thing then.

    I'm talking about this code here: <https://github.com/gnzlbg/slice_deque/tree/master/src/mirror...> It is absolutely platform specific.

    Yes, most ring buffer implementations feature a little bit of `unsafe` code. No, it doesn't make sense to say "I have a tiny amount of `unsafe` already, so adding more has no cost."

    > But if your bottleneck is determined by the frequency at which channels get created or by how many exist, then I would call the architecture into question. ... This last month I've written a lock-free ring buffer to solve a problem, and there's exactly one in an application that spawns millions of concurrent tasks.

    Okay, but a lot of applications and libraries are written to support many connections, and when writing the code (or even when your server accepts them) you don't necessarily know whether those connections will be cycled through very quickly or will be long-lived, high-throughput affairs. Each of those connections probably has a send buffer and a receive buffer. So while it might make sense for your application to have a single ring buffer for its lifetime, applications that churn through them heavily are completely valid too.

  • Go is about to get a whole lot faster
    4 projects | news.ycombinator.com | 23 Jan 2022
    There is a single contiguous memory allocation, which mirrors itself.

    One thread produces elements and pushes them at the tail (e.g. I/O bytes, in batch), and one thread consumes as many elements as possible in batch from the other end (e.g. all bytes available, in batch).

    The mirror is required to allow processing all elements in the deque as if they were adjacent in memory.

    This is the library I am using; the README contains an explanation: https://github.com/gnzlbg/slice_deque
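
The two comments above hinge on the same trick: the queue's backing pages are mapped twice, back to back in virtual memory, so a queue whose head has wrapped around can still be read as one contiguous slice. Below is a minimal, Linux-only sketch of that mirrored allocation using the `libc` crate (`memfd_create` plus two `mmap` calls). It illustrates the technique only; it is not slice_deque's actual implementation, which has separate platform back-ends, and it omits the error-path cleanup a real allocator would need.

```rust
use std::{io, ptr, slice};

/// Map `len` bytes twice, back to back, so that byte `i` and byte
/// `i + len` alias the same physical page. `len` must be a multiple of
/// the page size. Returns a pointer to the start of the 2*len window.
fn mirrored_alloc(len: usize) -> io::Result<*mut u8> {
    unsafe {
        // An anonymous in-memory file provides the shared backing pages.
        let fd = libc::memfd_create(b"mirror\0".as_ptr().cast(), 0);
        if fd == -1 {
            return Err(io::Error::last_os_error());
        }
        if libc::ftruncate(fd, len as libc::off_t) == -1 {
            return Err(io::Error::last_os_error());
        }
        // Reserve 2 * len of contiguous virtual address space ...
        let base = libc::mmap(
            ptr::null_mut(),
            2 * len,
            libc::PROT_NONE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        );
        if base == libc::MAP_FAILED {
            return Err(io::Error::last_os_error());
        }
        // ... then map the same file into both halves of that window.
        for half in 0..2 {
            let addr = base.cast::<u8>().add(half * len).cast();
            let m = libc::mmap(
                addr,
                len,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_SHARED | libc::MAP_FIXED,
                fd,
                0,
            );
            if m == libc::MAP_FAILED {
                return Err(io::Error::last_os_error());
            }
        }
        libc::close(fd);
        Ok(base.cast())
    }
}

fn main() -> io::Result<()> {
    let len = 4096; // assume a 4 KiB page size for the demo
    let buf = mirrored_alloc(len)?;
    unsafe {
        // A write at the end of the first half ...
        buf.add(len - 1).write(0xAB);
        // ... is visible at the same offset in the second half, which is
        // why a wrapped-around ring buffer can be read as one slice.
        let across_the_seam = slice::from_raw_parts(buf.add(len - 1), 2);
        assert_eq!(across_the_seam[0], 0xAB);
        assert_eq!(*buf.add(2 * len - 1), 0xAB);
        libc::munmap(buf.cast(), 2 * len);
    }
    Ok(())
}
```

A ring buffer built on top of such a mapping only tracks head and tail offsets modulo `len`; slices handed out to the producer and consumer are allowed to run past `len` because the second half of the window aliases the first.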

Stats

Basic slice_deque repo stats
  • Mentions: 2
  • Stars: 150
  • Activity: 0.0
  • Last commit: over 2 years ago

gnzlbg/slice_deque is an open source project licensed under the GNU General Public License v3.0 or later, which is an OSI-approved license.

The primary programming language of slice_deque is Rust.

