Is it feasible to write a fast caching library for Python in pure Python, or does Python's function-call overhead limit cache-access performance? What about binding a C++ caching function into the Python environment and calling it from Python? Does crossing that boundary give worse or better latency than the pure-Python version? (I'm considering porting my C++ caching tool to Python: https://github.com/tugrul512bit/LruClockCache which achieves between 50M and 2B lookups per second depending on the use case.)
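For reference, here is a minimal sketch of what a CLOCK (second-chance) cache looks like in pure Python. This is only an illustration of the algorithm, not the LruClockCache API; the class name, the `load` miss-handler parameter, and the fixed-capacity slot layout are all assumptions for the sketch. It should make the overhead question concrete: every `get` is a Python-level call plus a dict probe, which is typically hundreds of nanoseconds in CPython, far from billions of lookups per second.

```python
class ClockCache:
    """Read-only CLOCK (second-chance) cache sketch in pure Python.

    `capacity` is the fixed number of slots; `load` is called on a
    miss to fetch the value for a key (e.g. from a slower store).
    """

    def __init__(self, capacity, load):
        self.capacity = capacity
        self.load = load
        self.keys = [None] * capacity    # circular buffer of cached keys
        self.values = [None] * capacity  # values parallel to `keys`
        self.ref = [False] * capacity    # second-chance reference bits
        self.index = {}                  # key -> slot number
        self.hand = 0                    # clock hand position

    def get(self, key):
        slot = self.index.get(key)
        if slot is not None:
            # Hit: mark the slot as recently referenced.
            self.ref[slot] = True
            return self.values[slot]
        # Miss: advance the hand, clearing reference bits, until a slot
        # whose bit is already False is found -- the eviction victim.
        while self.ref[self.hand]:
            self.ref[self.hand] = False
            self.hand = (self.hand + 1) % self.capacity
        victim = self.hand
        old_key = self.keys[victim]
        if old_key is not None:
            del self.index[old_key]
        value = self.load(key)
        self.keys[victim] = key
        self.values[victim] = value
        self.ref[victim] = True
        self.index[key] = victim
        self.hand = (victim + 1) % self.capacity
        return value


# Hypothetical usage: cache squares with a capacity of 2 slots.
cache = ClockCache(2, lambda k: k * k)
print(cache.get(3))  # miss, computes 9
print(cache.get(3))  # hit, returns cached 9
```

Note that CPython's own `functools.lru_cache` implements its hot path in C, which is one data point suggesting the interpreter's per-call overhead is the bottleneck a pure-Python version would face.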
Related posts
- Is 180 million lookups per second performance ok for an asynchronous cache written in C++ running on FX8150? (has cache-coherence and runs only 1 consumer thread as back-end)
- 2D Direct Mapped Cache Is Much Better Than Normal Direct Mapped Cache In 2D Access Patterns
- What is the absolute fastest way of using mmap for read-only random-access pattern?
- Does C++ have a feature like optionally producing same pointer value from allocation with help of an integer key?
- Multi-Level Cache (Direct Mapped L1 + LRU approx L2 + guard_locked LRU LLC) does up to 400 million lookups per second in Gaussian Blur operation on FX8150 CPU.