Does C++ have a feature like optionally producing the same pointer value from an allocation with the help of an integer key?

This page summarizes the projects mentioned and recommended in the original post on /r/cpp_questions.
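
The question, as I read it, is whether an allocation can optionally hand back the same pointer value whenever the same integer key is supplied. Standard C++ has no such allocator built in, but the behaviour can be emulated with a map from integer key to allocation. The sketch below is only an illustration of that reading; KeyedAllocator and getOrAllocate are hypothetical names, not part of any project listed here.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <unordered_map>

// Hypothetical keyed allocator (not from the thread or the repository):
// the same integer key always yields the same pointer value, allocating
// only on the first request for that key.
template <typename T>
class KeyedAllocator {
public:
    std::shared_ptr<T> getOrAllocate(std::uint64_t key) {
        auto it = slots_.find(key);
        if (it != slots_.end())
            return it->second;            // reuse the earlier allocation
        auto ptr = std::make_shared<T>(); // first request for this key
        slots_.emplace(key, ptr);
        return ptr;
    }

private:
    std::unordered_map<std::uint64_t, std::shared_ptr<T>> slots_;
};

int main() {
    KeyedAllocator<int> alloc;
    auto a = alloc.getOrAllocate(42);
    auto b = alloc.getOrAllocate(42);
    assert(a.get() == b.get()); // same key -> same pointer value
    return 0;
}
```

Repeated calls with the same key then return the identical pointer, while new keys trigger a fresh allocation.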

  • LruClockCache

    A low-latency LRU-approximation cache in C++ using the CLOCK second-chance algorithm, with multi-level cache support. Up to 2.5 billion lookups per second.

  • Hi, I implemented a multi-level LRU + direct-mapped cache (https://github.com/tugrul512bit/LruClockCache/wiki/How-To-Do-Multithreading-With-a-Read-Only-Multi-Level-Cache), and it works as a single-threaded read-write cache or a multi-threaded read-only cache. Now I'm going to add cache coherence to it (so it will be a read-write, multithreaded cache), but using smart pointers as the "value" cells. So a get method will return a shared_ptr, and I can change its data by dereferencing it, with the change instantly visible in the L1 caches of other threads (see the sketch after this list). But there are some problems.

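The shared_ptr "value cell" idea quoted above can be pictured with a minimal sketch. This is not the LruClockCache API; the per-thread maps, the key, and the atomic payload below are assumptions chosen only to show why a write through one thread's L1 cell is immediately visible through another thread's L1 cell.

```cpp
#include <atomic>
#include <iostream>
#include <memory>
#include <thread>
#include <unordered_map>

// Simplified picture of shared_ptr "value cells" (not the LruClockCache API):
// two per-thread L1 maps store the same shared_ptr, so a write made through
// one map is observed through the other without copying data back. The
// payload is std::atomic<int> to keep the cross-thread access well defined.
using Cell = std::shared_ptr<std::atomic<int>>;

int main() {
    Cell shared = std::make_shared<std::atomic<int>>(0);

    // Both "L1 caches" map key 42 to the same value cell.
    std::unordered_map<int, Cell> l1OfThreadA{{42, shared}};
    std::unordered_map<int, Cell> l1OfThreadB{{42, shared}};

    std::thread writer([&] { l1OfThreadA[42]->store(7); });
    writer.join();

    // The update is immediately visible through the other thread's L1 map.
    std::cout << l1OfThreadB[42]->load() << '\n'; // prints 7
    return 0;
}
```

Because both maps hold the same shared_ptr, nothing has to be copied back to a lower cache level for the second thread to observe the update; coordinating evictions and unsynchronized writes across threads is presumably where the "problems" mentioned above come in.
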
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • Is 180 million lookups per second performance ok for an asynchronous cache written in C++ running on FX8150? (has cache-coherence and runs only 1 consumer thread as back-end)

    1 project | /r/programming | 14 Feb 2022
  • Is Python Interpreter optimized enough for low-latency caching algorithm?

    1 project | /r/Python | 8 Feb 2022
  • 2D Direct Mapped Cache Is Much Better Than Normal Direct Mapped Cache In 2D Access Patterns

    1 project | /r/cpp | 24 Oct 2021
  • What is the absolute fastest way of using mmap for read-only random-access pattern?

    1 project | /r/cpp_questions | 14 Oct 2021
  • Multi-Level Cache (Direct Mapped L1 + LRU approx L2 + guard_locked LRU LLC) does up to 400 million lookups per second in Gaussian Blur operation on FX8150 CPU.

    1 project | /r/cpp | 8 Oct 2021