|  | stretto | ttl_cache |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 397 | 54 |
| Growth | - | - |
| Activity | 5.7 | 0.0 |
| Last Commit | 5 days ago | over 2 years ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stretto
Stretto 0.5.0 release: Support runtime agnostic AsyncCache
Hi, I think this link is a good explanation https://github.com/al8n/stretto/pull/7
Writing a concurrent LRU cache
Yeah, I saw concache, but when I looked into it, it doesn't implement what is needed. Each bucket has its own linked-list backing (hence "lock-free linked list buckets"), whereas an LRU needs every value across all buckets to be part of a single linked list.

After posting this I realized my line of research was failing because it was state of the art five years ago. Caffeine replaced `concurrentlinkedhashmap` in the Java world (by the same author), and a Rust version of that design is Moka. These are much more complicated than a plain concurrent LRU, but faster (i.e. more state of the art). Another Rust crate is Stretto, a port of Dgraph's Ristretto (written in Go). The question becomes: is it worth essentially porting `concurrentlinkedhashmap` to have a great concurrent LRU when more state-of-the-art caches are out there?
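To make the structural point concrete, here is a deliberately naive thread-safe LRU sketch using only the standard library (the `NaiveLru` name and its design are illustrative, not from any of the crates discussed). It approximates the single recency list that `concurrentlinkedhashmap`, Caffeine, and Moka thread through every bucket with a `VecDeque` of keys behind one `Mutex`, which is simple but serializes all access and reorders in O(n):

```rust
use std::collections::{HashMap, VecDeque};
use std::sync::{Arc, Mutex};
use std::thread;

// Naive thread-safe LRU: one Mutex around the whole structure.
// Real concurrent LRUs thread one intrusive linked list through all
// hash buckets; here a VecDeque of keys stands in for that list.
struct NaiveLru<K, V> {
    capacity: usize,
    map: HashMap<K, V>,
    order: VecDeque<K>, // front = most recently used
}

impl<K: std::hash::Hash + Eq + Clone, V> NaiveLru<K, V> {
    fn new(capacity: usize) -> Self {
        NaiveLru { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    // Move an existing key to the front of the recency list.
    fn touch(&mut self, key: &K) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_front(k);
        }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        if self.map.contains_key(key) {
            self.touch(key);
            self.map.get(key)
        } else {
            None
        }
    }

    fn put(&mut self, key: K, value: V) {
        if self.map.insert(key.clone(), value).is_some() {
            self.touch(&key); // existing key: refresh recency
        } else {
            self.order.push_front(key);
            if self.order.len() > self.capacity {
                // Evict the least recently used key.
                if let Some(evicted) = self.order.pop_back() {
                    self.map.remove(&evicted);
                }
            }
        }
    }
}

fn main() {
    let cache = Arc::new(Mutex::new(NaiveLru::new(2)));
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let c = Arc::clone(&cache);
            thread::spawn(move || {
                c.lock().unwrap().put(i % 2, i * 10);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let mut guard = cache.lock().unwrap();
    // Only keys 0 and 1 were ever inserted, so both fit in capacity 2.
    assert!(guard.get(&0).is_some());
    assert!(guard.get(&1).is_some());
}
```

The global lock is exactly what the fancier designs avoid: Caffeine-style caches buffer recency updates and apply them in batches so that reads don't contend on the list.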
Stretto - a thread-safe, high-performance, high hit-ratio cache.
For the case in the benches folder (a very rough benchmark), stretto is around 20-30 ms faster than moka (the sync version is around 30-40 ms faster) for 120,000+ operations. Stretto was set to collect metrics while benchmarking, which adds roughly 10% overhead. Moka does not seem to provide a configuration option for collecting metrics, so the hit ratio is not compared.
ttl_cache
concurrency alternatives
A sample use case is: accessing a ttl_cache instance from multiple threads.
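Since a single-threaded cache like this has no internal synchronization, the usual pattern is to wrap it in `Arc<Mutex<...>>`. The sketch below uses a minimal hand-rolled TTL map (the `TtlMap` type is hypothetical, standing in for `ttl_cache::TtlCache` so the example is self-contained) to show that sharing pattern:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

// Minimal stand-in for a TTL cache (hypothetical `TtlMap`, not the
// ttl_cache crate's API): each entry stores an expiry Instant and is
// treated as absent once that deadline passes.
struct TtlMap<K, V> {
    entries: HashMap<K, (V, Instant)>,
}

impl<K: std::hash::Hash + Eq, V> TtlMap<K, V> {
    fn new() -> Self {
        TtlMap { entries: HashMap::new() }
    }

    fn insert(&mut self, key: K, value: V, ttl: Duration) {
        self.entries.insert(key, (value, Instant::now() + ttl));
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.entries
            .get(key)
            .filter(|(_, expiry)| Instant::now() < *expiry)
            .map(|(v, _)| v)
    }
}

fn main() {
    // Arc<Mutex<...>> is the standard way to share a cache without
    // internal synchronization across threads.
    let cache = Arc::new(Mutex::new(TtlMap::new()));

    let writer = {
        let c = Arc::clone(&cache);
        thread::spawn(move || {
            c.lock().unwrap().insert("session", 42u32, Duration::from_secs(60));
        })
    };
    writer.join().unwrap();

    let guard = cache.lock().unwrap();
    assert_eq!(guard.get(&"session"), Some(&42));
}
```

Every reader and writer takes the same lock, which is fine for modest contention; the crates compared on this page (moka, stretto, dashmap) exist precisely to avoid that single point of contention.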
What are some alternatives?
- ristretto - A high performance memory-bound Go cache
- cached - Rust cache structures and easy function memoization
- moka - A high performance concurrent caching library for Rust
- askama - Type-safe, compiled Jinja-like templates for Rust
- rust-memcache - memcache client for rust
- hitbox - A high-performance caching framework suitable for single-machine and for distributed applications in Rust
- dashmap - Blazing fast concurrent HashMap for Rust.
- tera - A template engine for Rust based on Jinja2/Django
- bitsock - Safe Rust crate for creating socket servers and clients with ease.
- juniper - GraphQL server library for Rust
- bmemcached-rs - Rust binary memcached implementation
- grex - A command-line tool and Rust library with Python bindings for generating regular expressions from user-provided test cases