| | picohttpparser | seastar |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 1,785 | 7 |
| Growth | 0.7% | - |
| Activity | 4.2 | 0.0 |
| Latest commit | 2 months ago | almost 2 years ago |
| Language | C | C++ |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
picohttpparser
- Ask HN: Resources for Building a Webserver in C?
- Linux Kernel vs. DPDK: HTTP Performance Showdown
Yeah, it is definitely a fake HTTP server, which I acknowledge in the article [1]. However, based on the size of the requests, and my observation that the number of packets per second was symmetrical at the network interface level, I wasn't concerned about doubled responses.
Skipping the parsing of the HTTP requests definitely gives a performance boost, but for this comparison both sides got the same boost, so I didn't mind being less strict. Seastar's HTTP parser was being finicky, so I chose the easy route and just removed it from the equation.
For reference though, in my previous post [2] libreactor was able to hit 1.2M req/s while fully parsing the HTTP requests using picohttpparser [3]. But that is still a very simple and highly optimized implementation. From what I recall, when I played with disabling HTTP parsing in libreactor, I got a performance boost of about 5%.
1. https://talawah.io/blog/linux-kernel-vs-dpdk-http-performanc...
2. https://talawah.io/blog/extreme-http-performance-tuning-one-...
3. https://github.com/h2o/picohttpparser
- JS faster than Rust?
Just-js is not a Node.js framework. It's a separate runtime, and most of the HTTP code is written in C/C++ (for example, the header-parsing logic is written in C and uses https://github.com/h2o/picohttpparser, which is a C library).
seastar
- Linux Kernel vs. DPDK: HTTP Performance Showdown
Hi talawahtech. Thanks for the exhaustive article.
I took a short look at the benchmark setup (https://github.com/talawahtech/seastar/blob/http-performance...), and wonder if some simplifications there lead to overinflated performance numbers. The server here executes a single read() on the connection - and as soon as it receives any data it sends back headers. A real world HTTP server needs to read data until all header and body data is consumed before responding.
Now given the benchmark probably sends tiny requests, the server might get everything in a single buffer. However, every time it does not, the server will send back two responses to the client - and at that point the client will already have a response for the follow-up request before actually sending it - which overinflates the numbers. Might be interesting to re-test with a proper HTTP implementation (at least read until the last 4 bytes received are \r\n\r\n, and assume the benchmark client will never send a body).
What are some alternatives?
ntex - framework for composable networking services
openonload - git import of openonload.org https://gist.github.com/majek/ae188ae72e63470652c9
just - the only javascript runtime to hit no.1 on techempower :fire:
liburing
onload - OpenOnload high performance user-level network stack
epoll-server - C code for multiplexing client socket connections across multiple threads (so it's X connections per thread) using epoll
libreactor - Extendable event driven high performance C-abstractions
nanos - A kernel designed to run one and only one application in a virtualized environment
websrv - A simple C web service and REST framework