Is the Python interpreter optimized enough for a low-latency caching algorithm?

This page summarizes the projects mentioned and recommended in the original post on /r/Python

  • LruClockCache

    A low-latency LRU-approximation cache in C++ using the CLOCK second-chance algorithm, with multi-level caching as well. Up to 2.5 billion lookups per second. (A pure-Python sketch of the CLOCK algorithm follows this list.)

  • Is it feasible to write a fast caching library for Python in pure Python, or does the interpreter's function-call overhead limit cache-access performance? What about linking a C++ caching function into the Python environment so it can be called from Python code? Would that give better or worse latency than the pure-Python version? (I'm considering porting my C++ caching tool to Python: https://github.com/tugrul512bit/LruClockCache, which achieves between 50M and 2B lookups per second depending on the use case.) See the micro-benchmark sketch after this list.
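To make the eviction policy concrete, here is a minimal pure-Python sketch of a CLOCK (second-chance) cache. The ClockCache class, its method names, and the load_from_backing_store callback are illustrative assumptions for this sketch, not the LruClockCache API: each slot carries a reference bit, a hit sets the bit, and on a miss the clock hand sweeps forward, clearing bits until it finds a slot whose bit is already cleared, which it evicts.

```python
# Minimal CLOCK (second-chance) cache sketch in pure Python.
# ClockCache and load_from_backing_store are hypothetical names,
# not the LruClockCache API.

class ClockCache:
    def __init__(self, capacity, load_from_backing_store):
        self.capacity = capacity
        self.load = load_from_backing_store   # called on a cache miss
        self.keys = [None] * capacity         # circular buffer of slots
        self.values = [None] * capacity
        self.referenced = [False] * capacity  # the "second chance" bits
        self.index = {}                       # key -> slot, O(1) lookup
        self.hand = 0                         # current clock-hand position

    def get(self, key):
        slot = self.index.get(key)
        if slot is not None:                  # hit: grant a second chance
            self.referenced[slot] = True
            return self.values[slot]
        # Miss: sweep the hand forward, clearing reference bits, until a
        # slot whose bit is already cleared is found; that slot is evicted.
        while self.referenced[self.hand]:
            self.referenced[self.hand] = False
            self.hand = (self.hand + 1) % self.capacity
        victim = self.hand
        if self.keys[victim] is not None:
            del self.index[self.keys[victim]]
        value = self.load(key)
        self.keys[victim] = key
        self.values[victim] = value
        self.referenced[victim] = True
        self.index[key] = victim
        self.hand = (victim + 1) % self.capacity
        return value

# Usage: the lambda stands in for a slow backing store.
cache = ClockCache(4, lambda k: k * k)
print(cache.get(3))   # miss -> loads and caches 9
print(cache.get(3))   # hit  -> returns cached 9
```

CLOCK approximates LRU without moving entries on every hit, which is why it can be cheaper than a linked-list LRU under read-heavy traffic.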
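On the pure-Python versus C++ question, a rough micro-benchmark can put numbers on the overheads involved. This is a sketch assuming CPython, where functools.lru_cache has a C-accelerated implementation and so stands in for the "C function linked into Python" case; absolute figures vary by machine.

```python
# Rough micro-benchmark sketch (CPython assumed; numbers are machine-dependent).
import timeit
from functools import lru_cache

@lru_cache(maxsize=1024)
def c_cached_square(x):        # C-backed cache wrapping a Python function
    return x * x

store = {}
def py_cached_square(x):       # pure-Python cache: one extra call frame per hit
    v = store.get(x)
    if v is None:
        v = store[x] = x * x
    return v

d = {i: i * i for i in range(1024)}

print("raw dict lookup:   ", timeit.timeit(lambda: d[512], number=1_000_000))
print("pure-Python cache: ", timeit.timeit(lambda: py_cached_square(512), number=1_000_000))
print("C-backed lru_cache:", timeit.timeit(lambda: c_cached_square(512), number=1_000_000))
```

Each Python-level call costs on the order of tens of nanoseconds, so even an ideal pure-Python cache is bounded at roughly tens of millions of lookups per second per core, well under the C++ figures quoted above; a C++ extension avoids interpreter work inside the cache but still pays the Python-to-C call boundary on every lookup unless lookups are batched.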


