LruClockCache vs cppdataloader

| | LruClockCache | cppdataloader |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 59 | 5 |
| Growth | - | - |
| Activity | 5.3 | 10.0 |
| Latest Commit | 4 months ago | over 1 year ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LruClockCache
- Is 180 million lookups per second OK for an asynchronous cache written in C++ running on an FX8150? (it has cache coherence and runs only one consumer thread as the back-end; a sketch of that single-consumer shape follows this list)
- Is the Python interpreter optimized enough for a low-latency caching algorithm?
  Is it feasible to write a fast caching library for Python in pure Python code, or does its function-call overhead limit the performance of cache access? What about linking a C++ caching function into the Python environment? Does that give better or worse latency than the pure-Python version? (I'm considering porting my C++ caching tool to Python: https://github.com/tugrul512bit/LruClockCache, which reaches between 50M and 2B lookups per second depending on the use case.) A binding sketch follows this list.
- 2D Direct Mapped Cache Is Much Better Than Normal Direct Mapped Cache In 2D Access Patterns (a sketch of the 2D indexing idea follows this list)
- What is the absolute fastest way of using mmap for a read-only random-access pattern? (an madvise/MAP_POPULATE sketch follows this list)
- Does C++ have a feature like optionally producing the same pointer value from an allocation with the help of an integer key?
  Hi, I implemented a multi-level LRU + direct-mapped cache (https://github.com/tugrul512bit/LruClockCache/wiki/How-To-Do-Multithreading-With-a-Read-Only-Multi-Level-Cache) that works as a single-threaded read-write cache or a multi-threaded read-only cache. Now I'm going to add cache coherence to it (so it will be read-write and multithreaded) by using smart pointers as "value" cells: the get method will return a shared_ptr, so I can change the data by dereferencing it, and the change is instantly visible in the L1 caches of other threads. But there are some problems. (A sketch of the shared_ptr-cell idea follows this list.)
- Multi-level cache (direct-mapped L1 + approximate-LRU L2 + guard_locked LRU LLC) does up to 400 million lookups per second in a Gaussian blur operation on an FX8150 CPU.
- Is 20 million lookups per second OK for a single-threaded LRU cache written in C++? (the CPU is an FX8150 at 3.6 GHz)
  Implementation: https://github.com/tugrul512bit/LruClockCache/blob/main/LruClockCache.h (a sketch of the CLOCK eviction scheme the project is named after follows this list)
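
Below are a few hedged sketches of the ideas mentioned in the posts above, in post order. First, the asynchronous cache: a minimal sketch of the "only one consumer thread as back-end" shape, using a promise/future hand-off. All names are illustrative; this is not the repository's API.

```cpp
// Many threads enqueue requests, one thread owns the cache and answers them.
#include <condition_variable>
#include <cstdio>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <unordered_map>

struct Request { int key; std::promise<int> result; };

int main() {
    std::queue<Request> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // The single consumer owns the cache, so the cache itself needs no locks.
    std::thread consumer([&] {
        std::unordered_map<int, int> cache;          // stand-in for the real cache
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return done || !q.empty(); });
            if (q.empty()) break;                    // done and fully drained
            Request r = std::move(q.front()); q.pop();
            lk.unlock();
            auto it = cache.find(r.key);
            if (it == cache.end())
                it = cache.emplace(r.key, r.key * 2).first; // fake miss path
            r.result.set_value(it->second);
        }
    });

    // A producer: enqueue a key, then block on the future for the answer.
    std::promise<int> p; auto f = p.get_future();
    { std::lock_guard<std::mutex> lk(m); q.push({42, std::move(p)}); }
    cv.notify_one();
    printf("%d\n", f.get());                         // 84 with the fake miss path

    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
    consumer.join();
    return 0;
}
```

Each request pays queue and wake-up latency, so designs like this favor throughput over single-request latency; the upside is that the consumer never contends on the cache itself.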
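For the Python question, one common way to expose a C++ cache to Python is a pybind11 binding. This is a minimal sketch: the module name `fastcache` and the `SimpleCache` class are illustrative stand-ins, not the actual LruClockCache API.

```cpp
// binding.cpp: a hypothetical pybind11 module named "fastcache".
#include <pybind11/pybind11.h>
#include <unordered_map>

namespace py = pybind11;

class SimpleCache {
public:
    explicit SimpleCache(size_t capacity) : capacity_(capacity) {}
    int get(int key) const {
        auto it = map_.find(key);
        return it == map_.end() ? -1 : it->second;      // -1 signals a miss here
    }
    void set(int key, int value) {
        if (map_.size() >= capacity_)
            map_.erase(map_.begin());                   // naive eviction policy
        map_[key] = value;
    }
private:
    size_t capacity_;
    std::unordered_map<int, int> map_;
};

PYBIND11_MODULE(fastcache, m) {
    py::class_<SimpleCache>(m, "SimpleCache")
        .def(py::init<size_t>())
        .def("get", &SimpleCache::get)
        .def("set", &SimpleCache::set);
}
```

A quick Python-side check would be `from fastcache import SimpleCache; c = SimpleCache(1024); c.set(1, 2); print(c.get(1))`. Every such call crosses the interpreter/extension boundary, so per-lookup latency from Python tends to be dominated by call overhead rather than by the cache itself; batching many keys per call is the usual mitigation.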
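For the 2D direct-mapped cache post, a minimal sketch of the indexing idea, assuming non-negative coordinates and C++17; all names are illustrative:

```cpp
// 2D direct-mapped indexing: one slot per (x mod W, y mod H) position,
// tagged with the full coordinates so conflicts are detected.
#include <cstddef>
#include <vector>

template <typename V>
class DirectMapped2D {
public:
    DirectMapped2D(int w, int h) : w_(w), h_(h), slots_(size_t(w) * size_t(h)) {}

    // readMiss(x, y) fetches the value from the backing store on a miss.
    template <typename ReadMiss>
    V get(int x, int y, ReadMiss readMiss) {
        Slot& s = slots_[size_t(y % h_) * w_ + size_t(x % w_)];
        if (!s.valid || s.tagX != x || s.tagY != y) {   // cold or conflict miss
            s = Slot{readMiss(x, y), x, y, true};
        }
        return s.value;
    }

private:
    struct Slot { V value{}; int tagX = 0; int tagY = 0; bool valid = false; };
    int w_, h_;
    std::vector<Slot> slots_;
};
```

With a W×H slot grid, a k×k stencil (for example a blur kernel) maps every neighbor to a distinct slot whenever k ≤ min(W, H). A 1D direct-mapped cache over flattened indices instead aliases vertically adjacent pixels whenever the row pitch is a multiple of the cache size, which is the conflict-miss pattern the post's title is about.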
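For the mmap question, the usual levers are madvise(MADV_RANDOM), which turns off sequential readahead, or prefaulting the whole mapping so lookups never take a page fault. A Linux-flavored sketch; `data.bin` is a placeholder file name:

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    struct stat st{};
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    // Tell the kernel not to do sequential readahead for this mapping.
    madvise(p, st.st_size, MADV_RANDOM);
    // Alternative: prefault every page up front so lookups never fault:
    //   mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE | MAP_POPULATE, fd, 0);

    const uint8_t* bytes = static_cast<const uint8_t*>(p);
    uint64_t sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)    // touch one byte per page
        sum += bytes[i];                            // (stand-in for random lookups)
    printf("checksum: %llu\n", (unsigned long long)sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```

Huge pages (transparent huge pages, or MAP_HUGETLB where available) are the other common lever, since they cut TLB misses on large random-access mappings.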
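For the cache-coherence post, a minimal sketch of the "smart pointer as value cell" idea, assuming the pointee is a std::atomic so dereferenced writes are race-free; class and function names are hypothetical:

```cpp
// Every cache level stores a pointer to one shared cell, so a write through
// any level is immediately seen by all levels holding the same cell.
#include <atomic>
#include <functional>
#include <memory>
#include <unordered_map>

template <typename K>
class SharedCellCache {
public:
    using Cell = std::shared_ptr<std::atomic<int>>;

    explicit SharedCellCache(std::function<Cell(K)> onMiss)
        : onMiss_(std::move(onMiss)) {}

    Cell get(const K& key) {
        auto it = map_.find(key);
        if (it != map_.end()) return it->second;    // L1 hit: no lock needed
        Cell cell = onMiss_(key);                   // e.g. fetch from a locked L2
        map_.emplace(key, cell);
        return cell;
    }

private:
    std::function<Cell(K)> onMiss_;
    std::unordered_map<K, Cell> map_;               // one instance per thread
};
```

If two thread-local caches obtain their cells from one shared, locked lower level, then `cacheA.get(42)->store(7)` is immediately observable via `cacheB.get(42)->load()`. The problems the post alludes to show up right here: an eviction while another thread still holds the shared_ptr keeps the cell alive but detached from the cache, and ordering between writes to different keys still needs a protocol.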
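Finally, the project's name refers to the CLOCK (second-chance) approximation of LRU, which is what keeps hits cheap: a hit only sets a flag instead of splicing a linked list, which is relevant to the 20M-lookups-per-second question. A compact single-threaded sketch of just the eviction idea (C++17, default-constructible K and V, capacity at least 1; this is not the actual LruClockCache.h, which has more features):

```cpp
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

template <typename K, typename V>
class ClockCache {
public:
    ClockCache(size_t capacity, std::function<V(K)> readMiss)
        : slots_(capacity), readMiss_(std::move(readMiss)) {}

    V get(const K& key) {
        auto it = index_.find(key);
        if (it != index_.end()) {                    // hit: just set the flag
            slots_[it->second].referenced = true;
            return slots_[it->second].value;
        }
        size_t victim = evict();                     // miss: find a slot to reuse
        if (slots_[victim].used) index_.erase(slots_[victim].key);
        slots_[victim] = {key, readMiss_(key), true, true};
        index_[key] = victim;
        return slots_[victim].value;
    }

private:
    struct Slot { K key{}; V value{}; bool referenced = false; bool used = false; };

    // Sweep the clock hand, clearing reference bits until a victim is found.
    size_t evict() {
        for (;;) {
            Slot& s = slots_[hand_];
            size_t current = hand_;
            hand_ = (hand_ + 1) % slots_.size();
            if (!s.used || !s.referenced) return current;
            s.referenced = false;                    // give it a second chance
        }
    }

    std::vector<Slot> slots_;
    std::unordered_map<K, size_t> index_;
    size_t hand_ = 0;
    std::function<V(K)> readMiss_;
};
```

Exact LRU pays a pointer-chasing list update on every hit; CLOCK defers all bookkeeping to the eviction sweep, which is why the approximation tends to be faster per lookup.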
cppdataloader
What are some alternatives?
fmtlog - fmtlog is a performant fmtlib-style logging library with latency in nanoseconds.
CacheLib - Pluggable in-process caching engine to build and scale high performance services
Olric - Distributed in-memory object store. It can be used as an embedded Go library and a language-independent service.
ScaleStore - This is the source code for our (Tobias Ziegler, Carsten Binnig and Viktor Leis) published paper at SIGMOD’22: ScaleStore: A Fast and Cost-Efficient Storage Engine using DRAM, NVMe, and RDMA.
srs - SRS is a simple, high-efficiency, real-time video server supporting RTMP, WebRTC, HLS, HTTP-FLV, SRT, MPEG-DASH, and GB28181.
ephemera - An In-Memory, Write-Only, Key-Value Cache
srt - Secure, Reliable, Transport
ccache - ccache – a fast compiler cache
leaf - Lightweight Error Augmentation Framework
ustore - Multi-Modal Database replacing MongoDB, Neo4J, and Elastic with 1 faster ACID solution, with NetworkX and Pandas interfaces, and bindings for C 99, C++ 17, Python 3, Java, GoLang 🗄️
quill - Asynchronous Low Latency C++ Logging Library