midi-redis vs amzn-drivers

| | midi-redis | amzn-drivers |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 27 | 441 |
| Growth | - | 0.7% |
| Activity | 0.0 | 9.1 |
| Last commit | almost 2 years ago | 16 days ago |
| Language | C++ | C |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
midi-redis
-
Dragonflydb – A modern replacement for Redis and Memcached
Yes, helio is the library that allows you to build C++ backends easily, similar to Seastar. Unlike Seastar, which is designed as a futures-and-continuations library, helio uses fibers, which I think are simpler to use and reason about. I wrote a few blog posts a while ago about fibers and Seastar; https://www.romange.com/2018/07/12/seastar-asynchronous-c-fr... is one of them. You will see there a typical Seastar flow with continuations. I just do not like this style, and I think C++ is not a good fit for it. Having said that, I do think Seastar is a 5-star framework and the team behind it are all superstars. I learned about shared-nothing architecture from Seastar.
Re helio: you will find an examples folder inside the project with sample backends: echo_server and pingpong_server. Both are similar, but the latter speaks RESP. I also implemented a toy project, midi-redis (https://github.com/romange/midi-redis), which is also based on helio.
In fact dragonfly evolved from it.
amzn-drivers
-
Looking for programmer volunteers who want to contribute to / learn about low-level C++, Linux, networking, and high-frequency trading.
An Amazon (AWS) EC2 instance-specific role (kernel and user-space networking, Linux OS related). Amazon has its own network card with its own Linux driver (open source); for user space they use DPDK (open source): https://github.com/amzn/amzn-drivers I've measured the time between calling TCP send in software and the packet leaving the NIC (network card); it is around ~50 microseconds of latency. AWS also stated in a paper that it is around that number. Goals:
- Figure out the way to build from source code and load the kernel driver.
- Reduce latency.
-
FreeBSD optimizations used by Netflix to serve video at 800Gb/s [pdf]
It means, for example, writing a FreeBSD kernel driver for the Elastic Network Adapter (ENA). Both the Linux kernel driver and the FreeBSD kernel driver are available at https://github.com/amzn/amzn-drivers
-
Dragonflydb – A modern replacement for Redis and Memcached
Of course, there are.
I was mostly running on AWS. In terms of hardware, for small-packet load tests most systems are constrained on throughput, i.e. the number of packets per second. Some systems saturate on interrupts, reaching 100% CPU on all cores, and some cannot even saturate the CPU: you will see CPU at 60% but you cannot go beyond some limit. The best systems network-wise are the c6gn family instance types. They are also better than other cloud providers'. Btw, you mentioned hypervisors... About 8 months ago I opened a bug with the AWS Graviton team (https://github.com/amzn/amzn-drivers/issues/195) about a performance issue they had on their instances at high throughput. Recently they issued a fix. I suspect it was in their hypervisor.
In terms of my software, I found many performance bugs at those speeds. For example, using the default allocator is a big no; I use mimalloc for uncontended allocations. In general, you cannot use mutexes and spinlocks at those speeds; they will just cripple the system. Sometimes it can be very annoying, since you cannot rely on a third-party library without carefully analyzing its design. For example, I could not use the OpenMetrics C++ library because it was not performant enough. Even implementing a simple counter, say to gather statistics for the INFO command, becomes an interesting engineering problem.
-
Ask HN: Anybody enabled IOMMU on AWS metal servers?
https://doc.dpdk.org/guides/nics/ena.html
and:
https://github.com/amzn/amzn-drivers/tree/master/userspace/dpdk/enav2-vfio-patch
Enabling IOMMU on i3 or c5 metal instances is as easy as adding "iommu=1 intel_iommu=on" to the kernel command line in /etc/default/grub, followed by update-grub and a reboot.
I can't get this to work. Every time I update grub and reboot, I cannot reconnect via SSH. The EC2 console also fails to get a good status.
My config:
Ubuntu 20.04 stock AWS AMI x86 64-bit
What are some alternatives?
dragonfly - A modern replacement for Redis and Memcached
cachegrand - a modern data ingestion, processing and serving platform built for today's hardware
neon - Neon: Serverless Postgres. We separated storage and compute to offer autoscaling, branching, and bottomless storage.
webdis - A Redis HTTP interface with JSON output
helio - A modern framework for backend development based on io_uring Linux interface
Aerospike - Aerospike Database Server – flash-optimized, in-memory, nosql database
Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.
vitess - Vitess is a database clustering system for horizontal scaling of MySQL.