picohttpparser VS seastar

Compare picohttpparser vs seastar and see what their differences are.

picohttpparser

tiny HTTP parser written in C (used in HTTP::Parser::XS et al.) (by h2o)

seastar

High performance server-side application framework (by talawahtech)
                  picohttpparser    seastar
Mentions          3                 1
Stars             1,785             7
Growth            0.7%              -
Activity          4.2               0.0
Last commit       2 months ago      almost 2 years ago
Language          C                 C++
License           -                 Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

picohttpparser

Posts with mentions or reviews of picohttpparser. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-18.
  • Ask HN: Resources for Building a Webserver in C?
    10 projects | news.ycombinator.com | 18 Dec 2022
  • Linux Kernel vs. DPDK: HTTP Performance Showdown
    5 projects | news.ycombinator.com | 4 Jul 2022
    Yea, it is definitely a fake HTTP server which I acknowledge in the article [1]. However based on the size of the requests, and my observation of the number of packets per second being symmetrical at the network interface level, I didn't have a concern about doubled responses.

    Skipping the parsing of the HTTP requests definitely gives a performance boost, but for this comparison both sides got the same boost, so I didn't mind being less strict. Seastar's HTTP parser was being finicky, so I chose the easy route and just removed it from the equation.

    For reference though, in my previous post[2] libreactor was able to hit 1.2M req/s while fully parsing the HTTP requests using picohttpparser[3]. But that is still a very simple and highly optimized implementation. From what I recall when I played with disabling HTTP parsing in libreactor I got a performance boost of about 5%.

    1. https://talawah.io/blog/linux-kernel-vs-dpdk-http-performanc...

    2. https://talawah.io/blog/extreme-http-performance-tuning-one-...

    3. https://github.com/h2o/picohttpparser
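    The comment above credits picohttpparser with full request parsing at 1.2M req/s. The library's key design point is its calling contract: the parser copies nothing (it returns pointers into the caller's buffer) and signals "incomplete, read more" rather than erroring on partial input. The toy sketch below mirrors that contract for just the request line - return bytes consumed on success, -2 when more data is needed, -1 on malformed input. It is an illustrative stand-in, not the library's actual implementation or API.

    ```c
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy request-line parser mirroring picohttpparser's return contract:
     * bytes consumed on success, -2 if the input is incomplete, -1 if it
     * is malformed. Like the real library, it returns pointers into the
     * caller's buffer rather than copying (zero-copy parsing). */
    static int parse_request_line(const char *buf, size_t len,
                                  const char **method, size_t *method_len,
                                  const char **path, size_t *path_len) {
        const char *end = memchr(buf, '\n', len);
        if (end == NULL) return -2;                 /* need more data */
        if (end == buf || end[-1] != '\r') return -1;
        const char *sp1 = memchr(buf, ' ', (size_t)(end - buf));
        if (sp1 == NULL) return -1;
        const char *sp2 = memchr(sp1 + 1, ' ', (size_t)(end - (sp1 + 1)));
        if (sp2 == NULL) return -1;
        *method = buf;        *method_len = (size_t)(sp1 - buf);
        *path   = sp1 + 1;    *path_len   = (size_t)(sp2 - (sp1 + 1));
        return (int)(end + 1 - buf);                /* bytes consumed */
    }

    int main(void) {
        const char *req = "GET /hello HTTP/1.1\r\nHost: x\r\n\r\n";
        const char *method, *path;
        size_t mlen, plen;
        int r = parse_request_line(req, strlen(req), &method, &mlen, &path, &plen);
        printf("consumed=%d method=%.*s path=%.*s\n",
               r, (int)mlen, method, (int)plen, path);
        return 0;
    }
    ```

    The "-2, call me again with more data" convention is what makes this style of parser cheap to drive from an event loop such as libreactor's: on a short read you simply keep the buffer and retry after the next readable event.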

  • JS faster than Rust?
    3 projects | /r/rust | 24 Feb 2021
    Just-js is not a Node.js framework. It's a separate runtime, and most of the HTTP code is written in C/C++ (for example, the header-parsing logic is written in C using https://github.com/h2o/picohttpparser, which is a C library).

seastar

Posts with mentions or reviews of seastar. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-04.
  • Linux Kernel vs. DPDK: HTTP Performance Showdown
    5 projects | news.ycombinator.com | 4 Jul 2022
    Hi talawahtech. Thanks for the exhaustive article.

    I took a short look at the benchmark setup (https://github.com/talawahtech/seastar/blob/http-performance...), and wonder if some simplifications there lead to overinflated performance numbers. The server here executes a single read() on the connection - and as soon as it receives any data it sends back headers. A real-world HTTP server needs to read data until all header and body data is consumed before responding.

    Now given that the benchmark probably sends tiny requests, the server might get everything in a single buffer. However, every time it does not, the server will send back two responses to the client - and at that point the client will already have a response for the follow-up request before actually sending it, which overinflates the numbers. Might be interesting to re-test with a proper HTTP implementation (at least read until the last 4 bytes received are \r\n\r\n, and assume the benchmark client will never send a body).
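
    The fix the commenter suggests - keep reading until the CRLFCRLF header terminator arrives before responding - is a few lines of C. The sketch below (an illustrative sketch, not code from the benchmark) scans the whole buffer rather than only the last 4 bytes, since with pipelined clients the terminator may not be the final bytes of a read:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Returns 1 once buf[0..len) contains the "\r\n\r\n" header terminator,
     * 0 if the server must read more before responding. A real server would
     * also honor Content-Length / chunked bodies; per the comment above, the
     * benchmark client is assumed to send requests with no body. */
    static int headers_complete(const char *buf, size_t len) {
        if (len < 4) return 0;
        for (size_t i = 0; i + 4 <= len; i++)
            if (memcmp(buf + i, "\r\n\r\n", 4) == 0)
                return 1;
        return 0;
    }

    int main(void) {
        /* A partial read: keep the buffer and wait for more data. */
        printf("%d\n", headers_complete("GET / HT", 8));
        /* A complete request: safe to send the response now. */
        printf("%d\n", headers_complete("GET / HTTP/1.1\r\nHost: a\r\n\r\n", 27));
        return 0;
    }
    ```

    In an event-driven server this check runs after every read(); only when it returns 1 is the response written, which avoids the doubled-response artifact described above.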

What are some alternatives?

When comparing picohttpparser and seastar you can also consider the following projects:

ntex - framework for composable networking services

openonload - git import of openonload.org https://gist.github.com/majek/ae188ae72e63470652c9

just - the only javascript runtime to hit no.1 on techempower :fire:

liburing

onload - OpenOnload high performance user-level network stack

epoll-server - C code for multithreaded multiplexing of client socket connections across multiple threads (so it's X connections per thread); uses epoll

libreactor - Extendable event driven high performance C-abstractions

nanos - A kernel designed to run one and only one application in a virtualized environment

websrv - A simple C web service and REST framework