xsync vs sosp23-s3fifo

| | xsync | sosp23-s3fifo |
|---|---|---|
| Mentions | 7 | 2 |
| Stars | 917 | 83 |
| Growth | - | - |
| Activity | 5.5 | 6.6 |
| Last commit | about 2 months ago | 7 months ago |
| Language | Go | C |
| License | MIT License | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
xsync

- Otter, Fastest Go in-memory cache based on S3-FIFO algorithm
  "The issue is that the Go standard library does not have a concurrent hash map. We have a different, cache-line-aware hash map implementation: https://github.com/puzpuzpuz/xsync#map"
- Are there any actively maintained or official Golang libraries for managing work queues?
- Thread-Local State in Go, Huh?
  "I've created a pull request to decrease the memory footprint and get rid of the unlucky-distribution problem. Goroutines (think: threads) now self-organize: they detect contention via a failed CAS and switch stripes. I'm going to update the article accordingly to avoid confusion."
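The self-organizing stripe idea mentioned in that comment can be sketched in stdlib-only Go. This is an illustrative toy, not xsync's actual code: the type names, stripe count, and padding are my own choices here.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// paddedInt64 occupies a full 64-byte cache line to avoid false sharing
// between adjacent stripes.
type paddedInt64 struct {
	v int64
	_ [56]byte
}

// stripedCounter spreads increments across several padded slots; a
// goroutine that loses a CAS race hops to another stripe.
type stripedCounter struct {
	stripes []paddedInt64
}

func newStripedCounter() *stripedCounter {
	return &stripedCounter{stripes: make([]paddedInt64, runtime.GOMAXPROCS(0))}
}

// Inc tries a CAS on the caller's current stripe; a failed CAS signals
// contention, so the caller switches to the next stripe and retries.
func (c *stripedCounter) Inc(stripe *int) {
	for {
		i := *stripe % len(c.stripes)
		old := atomic.LoadInt64(&c.stripes[i].v)
		if atomic.CompareAndSwapInt64(&c.stripes[i].v, old, old+1) {
			return
		}
		*stripe++ // contention detected: change stripe
	}
}

// Value sums all stripes; reads are cheap, writes rarely collide.
func (c *stripedCounter) Value() int64 {
	var sum int64
	for i := range c.stripes {
		sum += atomic.LoadInt64(&c.stripes[i].v)
	}
	return sum
}

func main() {
	c := newStripedCounter()
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			stripe := g // start from a goroutine-specific stripe
			for i := 0; i < 1000; i++ {
				c.Inc(&stripe)
			}
		}(g)
	}
	wg.Wait()
	fmt.Println(c.Value()) // 8000
}
```

The point of the failed-CAS heuristic is that no thread-local storage is needed: contention itself tells a goroutine when its stripe is crowded.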
- So long, sync.Map
  "Could you check the method godoc and the example in this draft PR? I'm going to finalize the PR this weekend, and it would be great to hear your opinion."
- puzpuzpuz/xsync: Concurrent data structures for Go. An extension for the standard sync package.
sosp23-s3fifo

- Otter, Fastest Go in-memory cache based on S3-FIFO algorithm
  "We observed that quick demotion [2] is important for achieving a low miss ratio in modern cache workloads, and existing algorithms such as TinyLFU and LIRS achieve lower miss ratios because of the small 1% window they use. This motivated us to design S3-FIFO, which uses simple FIFO queues to achieve low miss ratios. It is true that, compared to the state of the art, S3-FIFO does not use any fancy techniques, but that does not mean it performs badly.
  In our large-scale evaluations, we found that the fancy techniques in LIRS, ARC, and TinyLFU can sometimes increase the miss ratio, whereas simple FIFO queues are more robust. However, *it is not true that S3-FIFO is better on every trace*.
  Note that some of the S3-FIFO results in Otter's repo are outdated and affected by an implementation bug; we are working with the owner to update them.
  [1] https://github.com/Thesys-lab/sosp23-s3fifo?tab=readme-ov-fi...
What are some alternatives?
taskq - Golang asynchronous task/job queue with Redis, SQS, IronMQ, and in-memory backends
libCacheSim - a high performance library for building cache simulators
Tasqueue - A simple, customisable distributed job/worker in Go
otter - A high performance lockless cache for Go.
Faktory - Language-agnostic persistent background job server
machinery - Machinery is an asynchronous task queue/job queue based on distributed message passing.
go - The Go programming language
NATS - High-Performance server for NATS.io, the cloud and edge native messaging system.
goque - Persistent stacks and queues for Go backed by LevelDB
theine-go - high performance in-memory cache
golang-fifo - Modern efficient cache design with simple FIFO queue only in Golang