LruClockCache vs quill

| | LruClockCache | quill |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 59 | 1,056 |
| Growth | - | - |
| Activity | 5.3 | 8.5 |
| Latest commit | 4 months ago | 18 days ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LruClockCache
- Is 180 million lookups per second performance ok for an asynchronous cache written in C++ running on FX8150? (has cache-coherence and runs only 1 consumer thread as back-end)
- Is the Python interpreter optimized enough for a low-latency caching algorithm?
Is it feasible to write a fast caching library for Python in pure Python code, or does its function-call overhead limit the performance of cache access? What about linking a C++ caching function into the Python environment to be called? Does that give worse or better latency than the pure-Python version? (I'm considering converting my C++ caching tool to Python: https://github.com/tugrul512bit/LruClockCache which has performance between 50M and 2B lookups per second depending on use-cases)
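The CLOCK eviction scheme the library's name refers to can be sketched in a few dozen lines. The class below is an illustration of the second-chance idea only, not the repository's actual API: a rotating hand sweeps the slots, clears reference bits as it passes, and evicts the first slot whose bit is already clear.

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Minimal CLOCK (second-chance) cache sketch -- NOT the LruClockCache API,
// just the eviction idea: hits set a reference bit; on eviction a rotating
// hand clears set bits and evicts the first slot whose bit is already clear.
template <typename Key, typename Value>
class ClockCache {
public:
    explicit ClockCache(std::size_t capacity)
        : keys_(capacity), values_(capacity), refBits_(capacity, false),
          used_(0), hand_(0) {}

    // Look up a key; on a miss, compute the value with onMiss and insert it.
    template <typename MissFn>
    Value get(const Key& k, MissFn onMiss) {
        auto it = index_.find(k);
        if (it != index_.end()) {
            refBits_[it->second] = true;   // second chance on the next sweep
            return values_[it->second];
        }
        Value v = onMiss(k);
        insert(k, v);
        return v;
    }

private:
    void insert(const Key& k, const Value& v) {
        std::size_t slot;
        if (used_ < keys_.size()) {
            slot = used_++;                // cache not yet full
        } else {
            // Advance the clock hand until an unreferenced slot is found.
            while (refBits_[hand_]) {
                refBits_[hand_] = false;
                hand_ = (hand_ + 1) % keys_.size();
            }
            slot = hand_;
            hand_ = (hand_ + 1) % keys_.size();
            index_.erase(keys_[slot]);     // evict the old resident
        }
        keys_[slot] = k;
        values_[slot] = v;
        refBits_[slot] = true;
        index_[k] = slot;
    }

    std::vector<Key> keys_;
    std::vector<Value> values_;
    std::vector<bool> refBits_;
    std::size_t used_, hand_;
    std::unordered_map<Key, std::size_t> index_;
};
```

CLOCK approximates LRU while avoiding the linked-list pointer churn of a true LRU, which is one reason it maps well onto a high-throughput cache.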
- 2D Direct Mapped Cache Is Much Better Than Normal Direct Mapped Cache In 2D Access Patterns
- What is the absolute fastest way of using mmap for read-only random-access pattern?
- Does C++ have a feature like optionally producing the same pointer value from an allocation with the help of an integer key?
Hi, I implemented a multi-level LRU + direct-mapped cache (https://github.com/tugrul512bit/LruClockCache/wiki/How-To-Do-Multithreading-With-a-Read-Only-Multi-Level-Cache) and it works as a single-threaded read-write cache or a multi-threaded read-only cache. Now I'm going to add cache coherence to it (so it will be read-write multithreaded), but by using smart pointers as "value" cells. So a get method will return a shared_ptr, and I can change its data by dereferencing it, and the change is instantly visible in the other L1 caches in other threads. But there are some problems.
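The "shared_ptr as value cell" idea described above can be sketched as follows. The class and names here are hypothetical, not the repository's actual code: the backing store hands out a `shared_ptr`, so every thread-local L1 that caches that pointer observes writes made through it; a `std::atomic` stands in for whatever synchronization the real design would use.

```cpp
#include <atomic>
#include <cassert>
#include <memory>
#include <mutex>
#include <unordered_map>

// Hypothetical sketch of shared_ptr "value cells": all caches that hold
// the same shared_ptr see writes made through it, because they share the
// underlying cell rather than holding private copies of the value.
class SharedCellStore {
public:
    std::shared_ptr<std::atomic<int>> get(int key) {
        std::lock_guard<std::mutex> lock(m_);
        auto it = cells_.find(key);
        if (it == cells_.end())
            it = cells_.emplace(key, std::make_shared<std::atomic<int>>(0)).first;
        return it->second;   // every caller shares this cell
    }

private:
    std::mutex m_;
    std::unordered_map<int, std::shared_ptr<std::atomic<int>>> cells_;
};
```

The trade-off this sketch makes visible: per-cell atomics give coherence without invalidation messages between L1s, but each lookup now pays for shared_ptr reference counting and an extra indirection.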
- Multi-Level Cache (Direct Mapped L1 + LRU approx L2 + guard_locked LRU LLC) does up to 400 million lookups per second in Gaussian Blur operation on FX8150 CPU.
- Is 20 million lookups per second performance ok for a single-threaded LRU cache written in C++? (CPU is FX8150 3.6GHz)
Implementation: https://github.com/tugrul512bit/LruClockCache/blob/main/LruClockCache.h
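Throughput claims like "20 million lookups per second" only mean something relative to a baseline on the same machine, key distribution, and value size. A rough way to get such a baseline is to time raw `std::unordered_map` lookups single-threaded; this is a generic measurement sketch, not the library's benchmark.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Measure single-threaded lookup throughput of a plain std::unordered_map
// as a baseline (not LruClockCache itself). The checksum accumulates the
// looked-up values so the compiler cannot remove the loop as dead code.
double lookupsPerSecond(std::size_t tableSize, std::size_t iterations) {
    std::unordered_map<std::uint32_t, std::uint32_t> table;
    for (std::uint32_t i = 0; i < tableSize; ++i) table[i] = i * 2;

    std::uint64_t checksum = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < iterations; ++i)
        checksum += table.find(static_cast<std::uint32_t>(i % tableSize))->second;
    auto t1 = std::chrono::steady_clock::now();

    std::chrono::duration<double> dt = t1 - t0;
    if (checksum == 0) return 0.0;   // keep checksum observable
    return static_cast<double>(iterations) / dt.count();
}
```

Hash-map lookups on a desktop CPU typically land in the tens of millions per second for small hot tables, so a cache layered on top that still sustains 20M lookups/s is paying only a modest overhead.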
quill
- Easy logging: a logging system for C++20
For high-performance logging, I'd add quill to that list.
- quill v2.7.0 released - Asynchronous Low Latency C++ Logging Library
What are some alternatives?
fmtlog - fmtlog is a performant fmtlib-style logging library with latency in nanoseconds.
spdlog - Fast C++ logging library.
Olric - Distributed in-memory object store. It can be used as an embedded Go library and a language-independent service.
srs - SRS is a simple, high-efficiency, real-time video server supporting RTMP, WebRTC, HLS, HTTP-FLV, SRT, MPEG-DASH, and GB28181.
easyloggingpp - C++ logging library. It is extremely powerful, extensible, lightweight, fast, and thread- and type-safe, with many built-in features. It lets you write logs in your own customized format and also provides support for logging your classes, third-party libraries, STL and third-party containers, etc.
srt - Secure, Reliable, Transport
glog - C++ implementation of the Google logging module
leaf - Lightweight Error Augmentation Framework
Boost.Log - Boost Logging library
cppdataloader - cppdataloader is a batching and caching library for C++17
G3log - G3log is an asynchronous, "crash safe", logger that is easy to use with default logging sinks or you can add your own. G3log is made with plain C++14 (C++11 support up to release 1.3.2) with no external libraries (except gtest used for unit tests). G3log is made to be cross-platform, currently running on OSX, Windows and several Linux distros. See Readme below for details of usage.