-
LruClockCache
A low-latency LRU-approximation cache in C++ using the CLOCK second-chance algorithm. Also supports multi-level caching. Up to 2.5 billion lookups per second.
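To make the CLOCK second-chance idea concrete, here is a minimal single-threaded sketch of such a cache. The class name, the `onMiss` callback, and the integer key/value types are illustrative assumptions, not the library's actual API: on a hit the slot gets its reference bit set; on a miss the clock hand sweeps forward, stripping reference bits until it finds an unreferenced victim to evict.

```cpp
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

// Minimal sketch of a CLOCK (second-chance) cache with a user-supplied
// miss handler. Names and types are illustrative, not the library's API.
class ClockCache {
public:
    ClockCache(std::size_t capacity, std::function<int(int)> onMiss)
        : slots_(capacity), hand_(0), onMiss_(std::move(onMiss)) {}

    int get(int key) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            slots_[it->second].referenced = true;  // grant a second chance on hit
            return slots_[it->second].value;
        }
        int value = onMiss_(key);                  // fetch from the backing store
        // Advance the clock hand until a non-referenced victim is found.
        while (slots_[hand_].valid && slots_[hand_].referenced) {
            slots_[hand_].referenced = false;      // strip the second chance
            hand_ = (hand_ + 1) % slots_.size();
        }
        if (slots_[hand_].valid)
            index_.erase(slots_[hand_].key);       // evict the victim
        slots_[hand_] = {key, value, true, true};
        index_[key] = hand_;
        hand_ = (hand_ + 1) % slots_.size();
        return value;
    }

private:
    struct Slot { int key; int value; bool referenced; bool valid; };
    std::vector<Slot> slots_;
    std::size_t hand_;
    std::function<int(int)> onMiss_;
    std::unordered_map<int, std::size_t> index_;   // key -> slot position
};
```

Because eviction only approximates LRU (a single reference bit instead of an exact recency order), lookups avoid the list-splicing cost of a classic LRU, which is where the low latency comes from.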
Hi, I implemented a multi-level LRU + direct-mapped cache (https://github.com/tugrul512bit/LruClockCache/wiki/How-To-Do-Multithreading-With-a-Read-Only-Multi-Level-Cache) that works as a single-threaded read-write cache or as a multi-threaded read-only cache. Now I'm going to add cache coherence to it (making it read-write multithreaded) by using smart pointers as the "value" cells: the get method will return a shared_ptr, so I can change the data by dereferencing it, and the change is instantly visible to the L1 caches in the other threads. But there are some problems.
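The shared_ptr-as-value idea can be sketched as follows. Here every per-thread L1 view stores a shared_ptr aliasing the same payload cell, so a write through one thread's copy is observed by all others. The payload is an `std::atomic<int>` in this sketch because a plain dereference-and-write would still be a data race under concurrent readers; the `L1View` class and `getOrInsert` name are hypothetical, not the library's API.

```cpp
#include <atomic>
#include <memory>
#include <unordered_map>

// A "value" cell shared between all per-thread L1 views of the same key.
using Cell = std::shared_ptr<std::atomic<int>>;

// Hypothetical per-thread L1 view: it caches shared_ptr cells, so filling
// two views with the same key makes them alias one payload.
class L1View {
public:
    // Mimics a cache fill from a lower level: reuse the shared cell if the
    // key is already cached, otherwise adopt the one handed down.
    Cell getOrInsert(int key, const Cell& fromLowerLevel) {
        auto it = map_.find(key);
        if (it != map_.end()) return it->second;
        map_[key] = fromLowerLevel;   // both views now alias the same cell
        return fromLowerLevel;
    }

private:
    std::unordered_map<int, Cell> map_;
};
```

Note that this only makes individual cell updates visible across threads; it does not by itself solve eviction-versus-writer ordering, which is presumably where the "some problems" mentioned above come in.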
Related posts
-
Is 180 million lookups per second performance ok for an asynchronous cache written in C++ running on FX8150? (has cache-coherence and runs only 1 consumer thread as back-end)
-
Is Python Interpreter optimized enough for low-latency caching algorithm?
-
2D Direct Mapped Cache Is Much Better Than Normal Direct Mapped Cache In 2D Access Patterns
-
What is the absolute fastest way of using mmap for read-only random-access pattern?
-
Multi-Level Cache (Direct Mapped L1 + LRU approx L2 + guard_locked LRU LLC) does up to 400 million lookups per second in Gaussian Blur operation on FX8150 CPU.