| | xsync | golang-fifo |
|---|---|---|
| Mentions | 7 | 4 |
| Stars | 917 | 113 |
| Growth | - | - |
| Activity | 5.5 | 8.6 |
| Last commit | about 2 months ago | about 1 month ago |
| Language | Go | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xsync
- Otter, Fastest Go in-memory cache based on S3-FIFO algorithm
The issue is that the Go stdlib does not have a parallel hash map.
We have https://github.com/puzpuzpuz/xsync#map, a different hash map implementation based on cache-line hash tables.
- Are there any actively maintained or official Golang libraries for managing work queues?
- Thread-Local State in Go, Huh?
I've created a pull request to decrease the memory footprint and get rid of the unlucky-distribution problem. Goroutines (think: threads) now self-organize: they detect contention via a failed CAS and switch to a different stripe. I'm going to update the article accordingly to avoid confusion.
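The contention-detection idea described above can be sketched with a striped counter. This is an illustrative toy, not xsync's actual code: the names are hypothetical, and a real implementation would keep the starting stripe in goroutine-local state rather than always beginning at stripe 0.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// paddedUint64 pads each stripe out to its own cache line
// to avoid false sharing between stripes.
type paddedUint64 struct {
	v uint64
	_ [56]byte
}

// StripedCounter spreads increments across stripes. A failed CAS is
// treated as a contention signal, and the goroutine moves to the next
// stripe before retrying.
type StripedCounter struct {
	stripes []paddedUint64
}

func NewStripedCounter(n int) *StripedCounter {
	return &StripedCounter{stripes: make([]paddedUint64, n)}
}

func (c *StripedCounter) Inc() {
	i := 0 // a real implementation would start from a goroutine-local stripe
	for {
		old := atomic.LoadUint64(&c.stripes[i].v)
		if atomic.CompareAndSwapUint64(&c.stripes[i].v, old, old+1) {
			return
		}
		// Failed CAS: contention detected, change the stripe.
		i = (i + 1) % len(c.stripes)
	}
}

// Value sums all stripes to produce the counter's total.
func (c *StripedCounter) Value() uint64 {
	var sum uint64
	for i := range c.stripes {
		sum += atomic.LoadUint64(&c.stripes[i].v)
	}
	return sum
}

func main() {
	c := NewStripedCounter(8)
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				c.Inc()
			}
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // prints 8000
}
```

Because goroutines migrate away from contended stripes instead of hashing to a fixed one, an unlucky assignment of many goroutines to the same stripe corrects itself over time.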
- So long, sync.Map
Could you check the method godoc and the example in this draft PR? I'm going to finalize the PR this weekend and it would be great to hear your opinion.
- puzpuzpuz/xsync: Concurrent data structures for Go. An extension for the standard sync package.
golang-fifo
- Otter, Fastest Go in-memory cache based on S3-FIFO algorithm
Hello, thank you for replying here :)
Many of the answers you gave are reasonable and good, and I just want to add a few comments for others.
1. SIEVE is not scan-resistant, so I think it should only be applied to web cache workloads (which typically follow a power-law distribution).
2. SIEVE is somewhat scalable for read-intensive applications (e.g. blogs, shops, etc.), because it doesn't require holding a lock on a cache hit.
3. The purpose of golang-fifo is to provide a simple and efficient cache implementation (like hashicorp-lru or groupcache).
4. "When increasing contention, otter sacrifices 1-2 percent"
-> I think that statement is incorrect. The hit rate varies depending on the total number of objects and the size of the cache, so it should be compared relatively. For example, otter's efficiency decreased by 5% compared to the single-threaded case when lock contention increased (decreased efficiency raises mean network latency, because a miss may require a heavy operation such as re-calculation or a database access).
5. Ghost queue: honestly, at the time of writing the code I hadn't done a deep dive into the bucket table implementation, so it may not work the same as an actual bucket hash table (see here: https://github.com/scalalang2/golang-fifo/issues/16)
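The SIEVE properties discussed above (no lock needed on hit, no scan resistance) follow from how the algorithm works: a hit only flips a "visited" bit, and eviction is a hand sweeping from tail to head that clears visited bits and removes the first unvisited entry. A minimal single-threaded sketch, for illustration only (golang-fifo's real implementation differs in details):

```go
package main

import (
	"container/list"
	"fmt"
)

type entry struct {
	key     string
	value   int
	visited bool
}

// SieveCache is a toy SIEVE cache: front = newest, back = oldest.
type SieveCache struct {
	capacity int
	ll       *list.List
	items    map[string]*list.Element
	hand     *list.Element // eviction hand, resumes where it left off
}

func NewSieveCache(capacity int) *SieveCache {
	return &SieveCache{
		capacity: capacity,
		ll:       list.New(),
		items:    make(map[string]*list.Element),
	}
}

// Get marks the entry visited; note there is no list reordering on a hit,
// which is why a concurrent version can avoid locking here.
func (c *SieveCache) Get(key string) (int, bool) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).visited = true
		return el.Value.(*entry).value, true
	}
	return 0, false
}

func (c *SieveCache) Set(key string, value int) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		el.Value.(*entry).visited = true
		return
	}
	if c.ll.Len() >= c.capacity {
		c.evict()
	}
	c.items[key] = c.ll.PushFront(&entry{key: key, value: value})
}

// evict sweeps the hand from tail toward head, giving visited entries a
// second chance (clearing their bit) and evicting the first unvisited one.
func (c *SieveCache) evict() {
	el := c.hand
	if el == nil {
		el = c.ll.Back()
	}
	for el.Value.(*entry).visited {
		el.Value.(*entry).visited = false
		el = el.Prev()
		if el == nil {
			el = c.ll.Back()
		}
	}
	c.hand = el.Prev() // may be nil; the next evict restarts at the tail
	delete(c.items, el.Value.(*entry).key)
	c.ll.Remove(el)
}

func main() {
	c := NewSieveCache(2)
	c.Set("a", 1)
	c.Set("b", 2)
	c.Get("a")    // mark "a" visited
	c.Set("c", 3) // evicts "b", the oldest unvisited entry
	_, ok := c.Get("b")
	fmt.Println(ok) // prints false
}
```

The lack of scan resistance is also visible here: a long one-shot scan inserts many never-revisited entries, and the hand must sweep past them while popular-but-unlucky entries can be pushed out.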
- golang-fifo | Modern cache eviction algorithm implementations.
I'm also implementing cache algorithms introduced in papers, in Go; you can visit here. Your contribution would be greatly appreciated.
What are some alternatives?
taskq - Golang asynchronous task/job queue with Redis, SQS, IronMQ, and in-memory backends
otter - A high performance lockless cache for Go.
Tasqueue - A simple, customisable distributed job/worker in Go
libCacheSim - a high performance library for building cache simulators
theine-go - high performance in-memory cache
Faktory - Language-agnostic persistent background job server
ristretto - A high performance memory-bound Go cache
machinery - Machinery is an asynchronous task queue/job queue based on distributed message passing.
go - The Go programming language
NATS - High-Performance server for NATS.io, the cloud and edge native messaging system.
goque - Persistent stacks and queues for Go backed by LevelDB