ristretto
Caffeine
| | ristretto | Caffeine |
|---|---|---|
| Mentions | 19 | 43 |
| Stars | 5,299 | 15,151 |
| Growth | 1.0% | - |
| Activity | 6.1 | 9.7 |
| Latest commit | 20 days ago | 8 days ago |
| Language | Go | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ristretto
-
Otter, Fastest Go in-memory cache based on S3-FIFO algorithm
1. Unfortunately, ristretto has been showing a hit ratio of around 0 on almost all traces for a very long time now, and the authors have not responded to this in any way. Vitess, for example, has already switched to another cache. Here are two issues about it: https://github.com/dgraph-io/ristretto/issues/346 and https://github.com/dgraph-io/ristretto/issues/336. That is, ristretto shows these results even on its own benchmarks. You can see it just by running the hit ratio benchmarks on a very simple zipf distribution from the ristretto repository: https://github.com/dgraph-io/ristretto/blob/main/stress_test.... On this test I got the following:
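For context on what such a hit-ratio benchmark does (the commenter's actual numbers are elided above): it replays a Zipf-distributed key stream against the cache and counts hits. Below is a minimal, stdlib-only Go sketch of the methodology, using a toy LRU as a stand-in for the cache under test; the real stress test in the ristretto repository exercises ristretto's own API, and all names and parameters here are illustrative.

```go
package main

import (
	"container/list"
	"fmt"
	"math/rand"
)

// lruCache is a toy LRU used only as a stand-in for the cache under test.
type lruCache struct {
	cap   int
	ll    *list.List
	items map[uint64]*list.Element
}

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, ll: list.New(), items: make(map[uint64]*list.Element)}
}

func (c *lruCache) Get(k uint64) bool {
	if e, ok := c.items[k]; ok {
		c.ll.MoveToFront(e)
		return true
	}
	return false
}

func (c *lruCache) Set(k uint64) {
	if e, ok := c.items[k]; ok {
		c.ll.MoveToFront(e)
		return
	}
	if c.ll.Len() >= c.cap {
		back := c.ll.Back()
		c.ll.Remove(back)
		delete(c.items, back.Value.(uint64))
	}
	c.items[k] = c.ll.PushFront(k)
}

// hitRatio replays n Zipf-distributed keys and reports the hit ratio.
func hitRatio(cacheSize, n int, seed int64) float64 {
	r := rand.New(rand.NewSource(seed))
	// s=1.1, v=1 gives a mildly skewed distribution over ~100k keys;
	// the parameters are illustrative, not those of ristretto's test.
	z := rand.NewZipf(r, 1.1, 1, 100_000)
	c := newLRU(cacheSize)
	hits := 0
	for i := 0; i < n; i++ {
		k := z.Uint64()
		if c.Get(k) {
			hits++
		} else {
			c.Set(k)
		}
	}
	return float64(hits) / float64(n)
}

func main() {
	fmt.Printf("hit ratio: %.3f\n", hitRatio(1_000, 200_000, 42))
}
```

A healthy cache should report a clearly non-zero ratio on such a skewed stream, which is what makes a near-zero result on the same workload a red flag.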
-
S3 Express Is All You Need
That's exactly how Userify [0] used to work when it was Python; now that it's a Go app, we do the caching in memory using Ristretto [1].
0. https://userify.com (team ssh key management/sudo authz)
-
Theine - High performance in-memory cache
I also run some hit ratio benchmarks, and Theine's results are much better than Ristretto's. See the results in the README: https://github.com/Yiling-J/theine-go#hit-ratios
-
Python deserves a good in-memory cache library!
If you know Caffeine (Java), Ristretto (Go), or Moka (Rust), you know what Theine is. Python deserves a good in-memory cache library.
-
VCache: A Simple In-Memory Cache Library
Thanks for sharing. There are a lot of options for embedded in-memory caches: https://github.com/dgraph-io/ristretto https://awesome-go.com/caches/ Do you have any comparisons or details on how your project has a different approach?
-
Cacheme: Asyncio cache framework with multiple storages and thundering herd protection
I made Cacheme years ago; it supported Redis and a synchronous API only. Then I switched to Go and found some awesome cache projects there (ristretto, gocache...), and I made my own Go version of Cacheme: cacheme-go. After trying asyncio and type hints, I think it's time to rewrite my old Cacheme.
-
Show HN: Zcached, in-memory key-value cache wire-compatible with memcached
zcached is an in-memory key-value cache exposing a memcached ASCII protocol-compatible interface, built on pluggable cache engines like Ristretto and freecache [0].
It's not performance-competitive with memcached, especially at higher thread counts. It does achieve about 1.1M ops/s, but at significantly higher P99 and P999 latency (as measured by memtier). See [1] and [2] for benchmark results from my 7950X-based workstation.
Disclaimer: This is a hobby project created for fun while hacking over the holidays. zcached is not a commercial product and never will be. Don't use it in production; consider this a technology demo more than anything.
I don't expect the source code to build outside of my environment, but for those interested in playing with it, binary artifacts are available at [3]. Try `zcached --address tcp:localhost:11211`.
[0] https://github.com/dgraph-io/ristretto, https://github.com/coocood/freecache
-
What are the coolest Go open source projects you have seen?
-
Quitting Dgraph Labs
While I never used dgraph, I do use badger and ristretto and am similarly in a bind over their long-term survival (moreso badger than ristretto)...
-
Recommendation for Key/Value storage
There are also various packages that wrap the Go map, depending on your requirements: https://github.com/allegro/bigcache if you need to store a lot of data, or https://github.com/dgraph-io/ristretto if you need performance. For basic use cases, the standard Go map should be enough. Just keep in mind whether you need concurrent access to your data structure, in which case you should guard your map with a mutex.
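The mutex-guarded map suggested above can be sketched as follows; this is a stdlib-only illustration, and the `safeMap` name is just for this example.

```go
package main

import (
	"fmt"
	"sync"
)

// safeMap guards a plain Go map with a RWMutex so it can be
// accessed from multiple goroutines safely.
type safeMap struct {
	mu sync.RWMutex
	m  map[string]string
}

func newSafeMap() *safeMap {
	return &safeMap{m: make(map[string]string)}
}

func (s *safeMap) Get(k string) (string, bool) {
	s.mu.RLock() // readers can proceed concurrently
	defer s.mu.RUnlock()
	v, ok := s.m[k]
	return v, ok
}

func (s *safeMap) Set(k, v string) {
	s.mu.Lock() // writers take the lock exclusively
	defer s.mu.Unlock()
	s.m[k] = v
}

func main() {
	c := newSafeMap()
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			c.Set(fmt.Sprintf("key-%d", n), "value")
		}(i)
	}
	wg.Wait()
	if v, ok := c.Get("key-3"); ok {
		fmt.Println("key-3 =", v)
	}
}
```

An `sync.RWMutex` is a reasonable default for read-heavy caches; a plain `sync.Mutex` or `sync.Map` may fit other access patterns better.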
Caffeine
-
Otter, Fastest Go in-memory cache based on S3-FIFO algorithm
My implementation is linked from the official S3-FIFO page. The benchmarks are as follows.
https://github.com/ben-manes/caffeine/wiki/Efficiency
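For readers unfamiliar with S3-FIFO, the algorithm Otter is built on: new keys enter a small probationary FIFO, entries touched again are promoted to a main FIFO, and a ghost FIFO of recently evicted keys routes quick re-requests straight to main. Below is a minimal, single-threaded sketch of that idea under those assumptions; Otter's actual implementation is concurrent and far more elaborate, and the sizes and names here are illustrative.

```go
package main

import (
	"container/list"
	"fmt"
)

type s3fifo struct {
	smallCap, mainCap, ghostCap int
	small, mainq, ghost         *list.List
	freq                        map[string]int
	loc                         map[string]*list.Element // live entries
	ghostEls                    map[string]*list.Element // evicted keys only
}

func newS3FIFO(capacity int) *s3fifo {
	smallCap := capacity / 10 // small FIFO gets ~10% of the space
	if smallCap < 1 {
		smallCap = 1
	}
	mainCap := capacity - smallCap
	if mainCap < 1 {
		mainCap = 1
	}
	return &s3fifo{
		smallCap: smallCap, mainCap: mainCap, ghostCap: capacity,
		small: list.New(), mainq: list.New(), ghost: list.New(),
		freq: map[string]int{}, loc: map[string]*list.Element{},
		ghostEls: map[string]*list.Element{},
	}
}

func (c *s3fifo) Get(key string) bool {
	if _, ok := c.loc[key]; ok {
		if c.freq[key] < 3 {
			c.freq[key]++ // frequency is capped at a small value
		}
		return true
	}
	return false
}

func (c *s3fifo) Set(key string) {
	if c.Get(key) {
		return
	}
	if el, ok := c.ghostEls[key]; ok { // ghost hit: go straight to main
		c.ghost.Remove(el)
		delete(c.ghostEls, key)
		c.insertMain(key)
		return
	}
	for c.small.Len() >= c.smallCap {
		c.evictSmall()
	}
	c.freq[key] = 0
	c.loc[key] = c.small.PushFront(key)
}

func (c *s3fifo) insertMain(key string) {
	for c.mainq.Len() >= c.mainCap {
		c.evictMain()
	}
	c.freq[key] = 0
	c.loc[key] = c.mainq.PushFront(key)
}

func (c *s3fifo) evictSmall() {
	key := c.small.Remove(c.small.Back()).(string)
	delete(c.loc, key)
	if c.freq[key] > 1 { // touched again while probationary: promote
		c.insertMain(key)
		return
	}
	delete(c.freq, key)
	if c.ghost.Len() >= c.ghostCap { // remember only the key, not the value
		old := c.ghost.Remove(c.ghost.Back()).(string)
		delete(c.ghostEls, old)
	}
	c.ghostEls[key] = c.ghost.PushFront(key)
}

func (c *s3fifo) evictMain() {
	key := c.mainq.Remove(c.mainq.Back()).(string)
	if c.freq[key] > 0 { // reinsert with decremented frequency
		c.freq[key]--
		c.loc[key] = c.mainq.PushFront(key)
		return
	}
	delete(c.loc, key)
	delete(c.freq, key)
}

func main() {
	c := newS3FIFO(10)
	for _, k := range []string{"a", "b", "a", "c", "a"} {
		if !c.Get(k) {
			c.Set(k)
		}
	}
	fmt.Println("a cached:", c.Get("a"))
}
```

The point of the small FIFO is that "one-hit wonders" are evicted cheaply without ever disturbing the main region, which is where much of S3-FIFO's claimed efficiency comes from.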
I can't deal with you any longer. Over.
/u/someplaceguy,
Those LIRS traces, along with many others, are available at this page [1]. I did a cursory review of their traces using both Caffeine's and the authors' simulators, to avoid bias or a mistaken implementation. On their target workloads, Caffeine was on par or better [2]. I have not seen anything novel in this or their previous works and find their claims to be easily disproven, so I have not implemented this policy in Caffeine's simulator yet.
-
Google/guava: Google core libraries for Java
That, and also when caffeine came out it replaced one of the major uses (caching) of guava.
-
GC, hands off my data!
I decided to start with an overview of the open-source options currently available. When it comes to on-heap cache implementations, the options are numerous – there are the well-known Guava, Ehcache, Caffeine, and many other solutions. However, when I began researching cache mechanisms that can store data outside GC control, I found that very few solutions remain. Of the popular ones, only Terracotta is still supported. This seems to be a very niche area, so we do not have many options to choose from. Among the less-known projects, I came across Chronicle-Map, MapDB, and OHC. I chose the last one because it was created as part of the Cassandra project, which I had some experience with, and I was curious about how this component worked:
-
Spring Cache with Caffeine
If you are interested in the subject, visit the official Caffeine Git project and documentation for more information.
-
Helidon Níma is the first Java microservices framework based on virtual threads
Not to distract from your valid points, but when used properly, Caffeine + Reactor can work together really nicely [1].
[1] https://github.com/ben-manes/caffeine/tree/master/examples/c...
-
FIFO-Reinsertion is better than LRU [pdf]
I wonder why all these papers ignore comparison against W-TinyLFU.
https://github.com/ben-manes/caffeine/wiki/Efficiency shows that it really outperforms ARC as well, and they also evaluate against an optimal oracle version to show how much room is left (admittedly, the oracle implies you're picking some global criterion to optimize, which is trickier when in reality there are multiple axes along which to optimize and you can't do well across all of them simultaneously).
Yes, I think that is my main concern in that often research papers do not disclose the weaknesses of their approaches and the opposing tradeoffs. There is no silver bullet.
The stress workload that I use chains corda-large [1], a 5x loop [2], and then corda-large again, at a cache size of 512 entries and 6M requests. This shifts from a strongly LRU-biased pattern to an MRU one, and then back again. My solution was to use hill climbing: sampling the hit rate to adaptively size the admission window (aka your FIFO) and reconfigure the cache region sizes. You already have similar code in your CACHEUS implementation, which built on that idea to apply it to a multi-agent policy.
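The hill-climbing loop described here can be sketched as follows. This is only an illustration of the sample/reverse-on-degradation idea, not Caffeine's actual climber, which also decays the step size over time and restarts climbing when the workload shifts; all names and constants are illustrative.

```go
package main

import "fmt"

// climber adaptively sizes the admission window by sampling the hit
// rate: keep moving the window boundary in the same direction while
// the hit rate improves, and reverse direction when it degrades.
type climber struct {
	windowSize  int     // entries currently assigned to the admission window
	totalSize   int     // total cache capacity in entries
	step        int     // entries moved per adjustment
	prevHitRate float64 // hit rate observed during the previous sample
	direction   int     // +1 grows the window, -1 shrinks it
}

func newClimber(totalSize int) *climber {
	return &climber{
		windowSize: totalSize / 100, // start with ~1% window
		totalSize:  totalSize,
		step:       totalSize / 20,
		direction:  1,
	}
}

// adjust is called at the end of each sample period with the hit rate
// observed during that period.
func (c *climber) adjust(hitRate float64) {
	if hitRate < c.prevHitRate {
		c.direction = -c.direction // got worse: climb the other way
	}
	c.windowSize += c.direction * c.step
	if c.windowSize < 1 {
		c.windowSize = 1
	}
	if limit := c.totalSize / 2; c.windowSize > limit {
		c.windowSize = limit
	}
	c.prevHitRate = hitRate
}

func main() {
	c := newClimber(1000)
	for _, hr := range []float64{0.50, 0.55, 0.54, 0.52, 0.56} {
		c.adjust(hr)
		fmt.Printf("hit rate %.2f -> window %d\n", hr, c.windowSize)
	}
}
```

A large window behaves like LRU (good for recency-biased phases such as the loop workload), while a small window gives TinyLFU's frequency filter more control, which is why resizing it adapts the cache between the two extremes.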
Caffeine adjusts the frequency comparison for admission slightly to allow ~1% of losing warm candidates to enter the main region. This protects against hash flooding attacks (HashDoS) [3]. It isn't intended to improve or correct the policy's decision making, so it should be unrelated to your observations, but it is an important change for real-world usage.
I believe LIRS2 [4] adaptively sizes its LIR region, but I do not recall the details, as it is a complex algorithm. It did very well across different workloads when I tried it out, and the authors were able to make a few performance fixes based on my feedback. Unfortunately, I find LIRS algorithms too difficult to maintain in an industry setting: while excellent, the implementation logic is not intuitive, which makes it frustrating to debug.
[1] https://github.com/ben-manes/caffeine/blob/master/simulator/...
-
Guava 32.0 (released today) and the @Beta annotation
A lot of Guava's most popular libraries graduated to the JDK. Also Caffeine is the evolution of our c.g.common.cache library. So you need Guava less than you used to. Hooray!
-
Apache Baremaps: online maps toolkit
Unfortunately, I don't gather statistics on the demonstration server. I believe that the in-memory caffeine cache (https://github.com/ben-manes/caffeine) saved me.
What are some alternatives?
Ehcache - Ehcache 3.x line
Hazelcast - Hazelcast is a unified real-time data platform combining stream processing with a fast data store, allowing customers to act instantly on data-in-motion for real-time insights.
cache2k - Lightweight, high performance Java caching
go-cache-benchmark - Cache benchmark for Golang
Apache Geode - Apache Geode
Guava - Google core libraries for Java
BigCache - Efficient cache for gigabytes of data written in Go.
stretto - Stretto is a Rust implementation of Dgraph's ristretto (https://github.com/dgraph-io/ristretto). A high performance memory-bound Rust cache.
moka - A high performance concurrent caching library for Rust
scaffeine - Thin Scala wrapper for Caffeine (https://github.com/ben-manes/caffeine)
SQLDelight - Generates typesafe Kotlin APIs from SQL
parquet-go - Go library to read/write Parquet files