cachegrand
amzn-drivers
|  | cachegrand | amzn-drivers |
|---|---|---|
| Mentions | 24 | 4 |
| Stars | 962 | 440 |
| Growth | - | 0.9% |
| Activity | 8.0 | 9.2 |
| Latest commit | 6 months ago | about 1 month ago |
| Language | C | C |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cachegrand
- C++ caching library with tiering (RAM + disc)
Closest that comes to my mind is CacheGrand. It doesn’t have some of the features yet, but I believe @daniele_dll is working on it!
- [PC][Switzerland] Cheap Rackspace
I use this HW for benchmarking and testing my open source project cachegrand ( https://github.com/danielealbano/cachegrand)
- cachegrand
- Cachegrand, a fast, Redis compatible, KV store – hashtable documentation
https://github.com/danielealbano/cachegrand/blob/main/docs/a...
When tested with memtier_benchmark using the Redis protocol, cachegrand, on the benchmarking hardware and thanks to the implemented hashtable, can reach up to 5 million GET op/s and up to 4.5 million UPSERT op/s without batching, and with batching up to 60 million GET op/s and up to 26 million UPSERT op/s!
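For reference, a run like the one quoted above might be launched as follows. This is a hedged sketch, not the project's actual benchmark configuration: the host, port, thread/client counts, and key range are placeholders, and `--pipeline` is the knob that corresponds to the "with batching" numbers.

```shell
# Hypothetical memtier_benchmark invocation against cachegrand's
# Redis-compatible endpoint (all values are illustrative placeholders).
memtier_benchmark -s 127.0.0.1 -p 6379 -P redis \
    --threads=16 --clients=8 \
    --ratio=1:1 --key-maximum=1000000 \
    --pipeline=1   # raise (e.g. to 32) to measure the batched figures
```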
- cachegrand - a blazing fast, Redis compatible, Key-Value store built for today's hardware - hashtable documentation - capable of delivering up to 112 GET mop/s and 85 UPSERT mop/s on an EPYC 7502P
- Show HN: Cachegrand – a fast OSS Key-Value store built for modern hardware
- Cachegrand – a modern OSS Key-Value store built for today's hardware
amzn-drivers
- Looking for programmer volunteers who want to contribute/learn about low level C++, Linux, Networking, high frequency trading.
Amazon (AWS) cloud EC2 instance-specific role (kernel- and user-space networking, Linux OS related). Amazon has its own network card with its own Linux driver (open source); for user space they use DPDK (open source). https://github.com/amzn/amzn-drivers I've measured the time between calling TCP send in software and the packet leaving the NIC (network card): it is around 50 microseconds of latency, and AWS also stated in a paper that it is around that number. Goals:
- Figure out how to build the driver from source and load it into the kernel.
- Reduce latency.
- FreeBSD optimizations used by Netflix to serve video at 800Gb/s [pdf]
It means, for example, writing a FreeBSD kernel driver for the Elastic Network Adapter (ENA). Both the Linux and FreeBSD kernel drivers are available at https://github.com/amzn/amzn-drivers
- Dragonflydb – A modern replacement for Redis and Memcached
Of course, there are.
I was mostly running on AWS. In terms of hardware, for small-packet load tests most systems are constrained on throughput, i.e. the number of packets per second. Some systems saturate on interrupts, reaching 100% CPU on all cores, and some cannot even saturate the CPU: you will see the CPU at 60% but you cannot go beyond some limit. The best systems network-wise are the c6gn family types. They are also better than other cloud providers'. Btw, you mentioned hypervisors... About 8 months ago I opened a bug with the AWS Graviton team https://github.com/amzn/amzn-drivers/issues/195 - about a performance issue they had on their instances at high throughput. Recently they issued the fix. I suspect it was in their hypervisor.
In terms of my software, I found many performance bugs at those speeds. For example, using the default allocator is a big no; I use mimalloc for uncontended allocations. In general, you cannot use mutexes and spinlocks at those speeds - they will just cripple the system. Sometimes it can be very annoying, since you cannot rely on a 3rd-party library without carefully analyzing its design. For example, I could not use the OpenMetrics C++ library because it was not performant enough. Even implementing a simple counter, say to gather statistics for the INFO command, becomes an interesting engineering problem.
- Ask HN: Anybody enabled IOMMU on AWS metal servers?
https://doc.dpdk.org/guides/nics/ena.html
and:
https://github.com/amzn/amzn-drivers/tree/master/userspace/dpdk/enav2-vfio-patch
Enabling IOMMU on i3 or c5 metal instances is as easy as adding "iommu=1 intel_iommu=on" to /etc/default/grub followed by update-grub, reboot.
I can't get this to work. Every time I update GRUB and reboot, I cannot reconnect via SSH. The EC2 console also fails to report a healthy status.
My config:
Ubuntu 20.04 stock AWS AMI x86 64-bit
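For reference, the GRUB steps described above can be sketched as follows. This is a hedged sketch assuming the stock Debian/Ubuntu GRUB layout on that AMI; the `sed` pattern is illustrative, and on a system with `GRUB_CMDLINE_LINUX_DEFAULT` in use the flags may need to go there instead.

```shell
# Append the IOMMU flags to the kernel command line, regenerate grub.cfg,
# and reboot (illustrative sed; verify /etc/default/grub afterwards).
sudo sed -i \
  's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 iommu=1 intel_iommu=on"/' \
  /etc/default/grub
sudo update-grub
sudo reboot
```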
What are some alternatives?
dragonfly - A modern replacement for Redis and Memcached
neon - Neon: Serverless Postgres. We separated storage and compute to offer autoscaling, branching, and bottomless storage.
varnish-cache - Varnish Cache source code repository
examples - Example data structures and algorithms
helio - A modern framework for backend development based on io_uring Linux interface
midi-redis - A toy memory store with great performance
async-std - Async version of the Rust standard library
webdis - A Redis HTTP interface with JSON output
vitess - Vitess is a database clustering system for horizontal scaling of MySQL.