hamt vs Caffeine

| | hamt | Caffeine |
|---|---|---|
| Mentions | 7 | 43 |
| Stars | 261 | 15,252 |
| Growth | - | - |
| Activity | 6.9 | 9.7 |
| Last commit | 3 months ago | 11 days ago |
| Language | C | Java |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hamt
-
Visual Introduction to Hash-Array Mapped Tries (HAMTs)
This isn't a very good explanation. The wikipedia article isn't great either. I like this description:
https://github.com/mkirchner/hamt#persistent-hash-array-mapp...
The name does tell you quite a bit about what these are:
* Hash - rather than directly using the keys to navigate the structure, the keys are hashed, and the hashes are used for navigation. This turns potentially long, poorly-distributed keys into short, well-distributed keys. However, it also means you have to compute a hash on every access, and have to deal with hash collisions. The mkirchner implementation above calls collisions "hash exhaustion", and deals with them using some generational hashing scheme. I think I'd fall back to collision lists until that was conclusively proven to be too slow.
* Trie - the tree is navigated by indexing nodes using chunks of the (hash of the) key, rather than by comparing keys within each node.
* Array mapped - sparse nodes are compressed, using a bitmap to indicate which logical slots are occupied, and then storing only those. The bitmaps live in the parent node, rather than the node itself, I think? Presumably that helps with fetching.
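The "array mapped" trick above can be sketched in a few lines. This is a minimal, illustrative Java fragment (the class and method names are made up for this example, and it is not taken from the mkirchner implementation): a 5-bit chunk of the hash selects a logical slot, the bitmap says whether that slot is occupied, and a popcount of the bits below it gives the index into the compact entry array.

```java
// Illustrative array-mapped node (names invented for this sketch).
final class ArrayMappedNode {
    int bitmap;       // bit i set => logical slot i is occupied
    Object[] entries; // compact array holding only the occupied slots

    // Extract the 5-bit chunk of the hash used at the given trie depth.
    static int chunk(int hash, int depth) {
        return (hash >>> (depth * 5)) & 0x1F;
    }

    boolean occupied(int slot) {
        return (bitmap & (1 << slot)) != 0;
    }

    // Map a logical slot (0..31) to its position in the compact array:
    // count how many occupied slots precede it.
    int compactIndex(int slot) {
        return Integer.bitCount(bitmap & ((1 << slot) - 1));
    }

    Object get(int slot) {
        return occupied(slot) ? entries[compactIndex(slot)] : null;
    }
}
```

For example, a node with bitmap `0b100100` stores only two entries: logical slot 2 maps to compact index 0, and logical slot 5 maps to compact index 1.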
A HAMT contains a lot of small nodes. If every entry is a bitmap plus a pointer, then it's two words, and if we use five-bit chunks, then each node can have up to 32 entries, but I would imagine the majority are small, so a typical node might be 64 bytes. I worry that doing a malloc for each one would end up with a lot of overhead. Are HAMTs often implemented with some more custom memory management? Can you allocate a big block and then carve it up?
Could you do a slightly relaxed HAMT where nodes are not always fully compact, but sized to the smallest suitable power-of-two number of entries? That might let you use some sort of buddy allocation scheme. It would also let you insert and delete without having to reallocate the node. Although I suppose you can already do that by mapping a few empty slots.
- Show HN: A hash array-mapped trie implementation in C
- Ask HN: What are some 'cool' but obscure data structures you know about?
Caffeine
-
Otter, Fastest Go in-memory cache based on S3-FIFO algorithm
/u/someplaceguy,
Those LIRS traces, along with many others, are available at this page [1]. I did a cursory review of their traces using both Caffeine's and the author's simulators to avoid bias or a mistaken implementation. In their target workloads Caffeine was on par or better [2]. I have not seen anything novel in this or their previous works and find their claims to be easily disproven, so I have not implemented this policy in Caffeine's simulator yet.
[1]: https://github.com/ben-manes/caffeine/wiki/Simulator
[2]: https://github.com/1a1a11a/libCacheSim/discussions/20
-
Google/guava: Google core libraries for Java
That, and also when Caffeine came out, it replaced one of the major uses of Guava: caching.
https://github.com/ben-manes/caffeine
-
GC, hands off my data!
I decided to start with an overview of what open-source options are currently available. When it comes to implementations of on-heap cache mechanisms, the options are numerous: there are the well-known guava, ehcache, caffeine, and many other solutions. However, when I began researching cache mechanisms that offer the possibility of storing data outside GC control, I found that very few solutions are left. Of the popular ones, only Terracotta is supported. It seems that this is a very niche area, and we do not have many options to choose from. Among less-known projects, I came across Chronicle-Map, MapDB and OHC. I chose the last one because it was created as part of the Cassandra project, which I had some experience with, and I was curious about how this component worked:
-
Spring Cache with Caffeine
Visit the official Caffeine Git project and documentation for more information if you are interested in the subject.
-
Helidon Níma is the first Java microservices framework based on virtual threads
Not to distract from your valid points, but when used properly, Caffeine + Reactor can work together really nicely [1].
[1] https://github.com/ben-manes/caffeine/tree/master/examples/c...
-
FIFO-Reinsertion is better than LRU [pdf]
Yes, I think that is my main concern in that often research papers do not disclose the weaknesses of their approaches and the opposing tradeoffs. There is no silver bullet.
The stress workload that I use is to chain corda-large [1], 5x loop [2], and corda-large again, at a cache size of 512 entries and 6M requests. This shifts from a strongly LRU-biased pattern to an MRU one, and then back again. My solution was to use hill climbing, sampling the hit rate to adaptively size the admission window (aka your FIFO) and thereby reconfigure the cache region sizes. You already have similar code in your CACHEUS implementation, which built on that idea by applying it to a multi-agent policy.
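The hill-climbing idea described above can be sketched in a few lines of Java. This is a deliberately simplified illustration, not Caffeine's actual climber (which adds step-size decay, restart thresholds, and other tuning); the class and field names are invented for this sketch. The core loop is: sample the hit rate over a period, keep moving the window size in the same direction while the hit rate improves, and reverse direction when it degrades.

```java
// Rough sketch of hit-rate hill climbing for sizing an admission window.
// Illustrative only; names invented, real implementations are more refined.
final class WindowClimber {
    double windowSize;       // fraction of the cache given to the window
    double step;             // signed step applied per adjustment
    double previousHitRate;  // hit rate observed in the last sample period

    WindowClimber(double initialSize, double step) {
        this.windowSize = initialSize;
        this.step = step;
    }

    // Called at the end of each sample period with the observed hit rate.
    double adjust(double hitRate) {
        if (hitRate < previousHitRate) {
            step = -step; // the last move hurt the hit rate: reverse course
        }
        previousHitRate = hitRate;
        // Clamp so the window never consumes the whole cache.
        windowSize = Math.min(0.8, Math.max(0.0, windowSize + step));
        return windowSize;
    }
}
```

The appeal of this approach is that it makes no assumptions about the workload: whether the trace is LRU-biased or MRU-biased, the climber simply follows the hit-rate gradient as the pattern shifts.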
Caffeine adjusts the frequency comparison for admission slightly to allow ~1% of losing warm candidates to enter the main region. This is to protect against a hash flooding attack (HashDoS) [3]. It isn't intended to improve or correct the policy's decision making, so it should be unrelated to your observations, but it is an important change for real-world usage.
I believe LIRS2 [4] adaptively sizes its LIR region, but I do not recall the details, as it is a complex algorithm. It did very well across different workloads when I tried it out, and the authors were able to make a few performance fixes based on my feedback. Unfortunately, I find LIRS algorithms too difficult to maintain in an industry setting because, while excellent, the implementation logic is not intuitive, which makes it frustrating to debug.
[1] https://github.com/ben-manes/caffeine/blob/master/simulator/...
-
Guava 32.0 (released today) and the @Beta annotation
A lot of Guava's most popular libraries graduated to the JDK. Also, Caffeine is the evolution of our c.g.common.cache library. So you need Guava less than you used to. Hooray!
- Monitoring Guava Cache Statistics
-
Apache Baremaps: online maps toolkit
Unfortunately, I don't gather statistics on the demonstration server. I believe that the in-memory caffeine cache (https://github.com/ben-manes/caffeine) saved me.
-
Similar probabilistic algorithms like Hyperloglog?
Caffeine is a Java cache that uses a 4-bit count-min sketch to estimate the popularity of an entry over a sample period. This is used by an admission filter (TinyLFU) to determine whether the new arrival is more valuable than the LRU victim. This is combined with hill climbing to optimize how much space is allocated for frequency vs recency. That results in an adaptive eviction policy that is space and time efficient, and achieves very high hit rates.
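The sketch-based admission described above can be illustrated with a toy count-min sketch using 4-bit saturating counters. This is an assumption-laden teaching sketch, not Caffeine's actual `FrequencySketch` (which packs counters into longs and is heavily optimized); the class name, hashing constants, and table sizes here are all invented for the example.

```java
// Toy count-min sketch with 4-bit saturating counters (illustrative only).
final class TinySketch {
    static final int MAX = 15;  // counters saturate at 4 bits
    final byte[][] tables;      // depth rows of width counters
    final int[] seeds;          // per-row hash seeds
    final int width;

    TinySketch(int width, int depth) {
        this.width = width;
        this.tables = new byte[depth][width];
        this.seeds = new int[depth];
        for (int i = 0; i < depth; i++) seeds[i] = 0x9E3779B9 * (i + 1);
    }

    int index(int row, int hash) {
        int h = (hash ^ seeds[row]) * 0x85EBCA6B;
        return (h >>> 16) % width; // non-negative after unsigned shift
    }

    void increment(int hash) {
        for (int row = 0; row < tables.length; row++) {
            int i = index(row, hash);
            if (tables[row][i] < MAX) tables[row][i]++;
        }
    }

    // Estimated frequency: the minimum counter across rows.
    int frequency(int hash) {
        int min = MAX;
        for (int row = 0; row < tables.length; row++) {
            min = Math.min(min, tables[row][index(row, hash)]);
        }
        return min;
    }

    // TinyLFU-style admission: keep the candidate only if it is
    // estimated to be more popular than the eviction victim.
    boolean admits(int candidateHash, int victimHash) {
        return frequency(candidateHash) > frequency(victimHash);
    }

    // Aging: halve every counter so estimates decay over sample periods.
    void reset() {
        for (byte[] row : tables)
            for (int i = 0; i < row.length; i++) row[i] >>= 1;
    }
}
```

The periodic halving in `reset()` is what turns raw counts into a frequency estimate over a sliding sample period, and the 4-bit saturation is what keeps the whole structure tiny relative to the cache it guards.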
What are some alternatives?
AspNetCoreDiagnosticScenarios - This repository has examples of broken patterns in ASP.NET Core applications
Ehcache - Ehcache 3.x line
Hazelcast - Hazelcast is a unified real-time data platform combining stream processing with a fast data store, allowing customers to act instantly on data-in-motion for real-time insights.
RVS_Generic_Swift_Toolbox - A Collection Of Various Swift Tools, Like Extensions and Utilities
cache2k - Lightweight, high performance Java caching
multiversion-concurrency-control - Implementation of multiversion concurrency control, Raft, Left Right concurrency Hashmaps and a multi consumer multi producer Ringbuffer, concurrent and parallel load-balanced loops, parallel actors implementation in Main.java, Actor2.java and a parallel interpreter
Apache Geode - Apache Geode
CPython - The Python programming language
Guava - Google core libraries for Java
pyroscope - Continuous Profiling Platform. Debug performance issues down to a single line of code [Moved to: https://github.com/grafana/pyroscope]
scaffeine - Thin Scala wrapper for Caffeine (https://github.com/ben-manes/caffeine)