No, parsing HTTP/1.x is a nightmare and definitely not simple. It wasn't even particularly well defined until 2014 when the original RFCs were modernized, and even now there are bugs reported in HTTP parsers all the time.
Node.js came out in 2009, a full ten years after HTTP/1.1 was specified (RFC 2616, 1999), and its original http-parser is full-on spaghetti code, doesn't conform to the RFCs for performance reasons, and is considered unmaintainable by the author of its replacement [0].
[0] https://github.com/nodejs/llhttp
I'm the author of the fastest open source HTTP server. Parsing HTTP 0.9, 1.0, and 1.1 is trivial. It's a walk in the park. It only takes about a hundred lines of code to create a proper O(n) parser. https://github.com/jart/cosmopolitan/blob/0b317523a0875d83d6...
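To give a sense of how little machinery the happy path needs, here's a minimal single-pass request-line parser. This is not jart's actual code, just a sketch of the O(n) idea: scan once, record slice boundaries, copy nothing. A real parser would also validate token characters, the version string, and then go on to headers.

```c
#include <string.h>
#include <stddef.h>

/* Slices into the caller's buffer; nothing is copied or allocated. */
struct RequestLine {
  const char *method;  size_t method_len;
  const char *target;  size_t target_len;
  const char *version; size_t version_len;
};

/* Parses "METHOD SP TARGET SP VERSION CRLF" in one left-to-right pass.
   Returns 1 on success, 0 on malformed input. */
static int ParseRequestLine(const char *p, size_t n, struct RequestLine *r) {
  const char *end = p + n, *sp1, *sp2, *crlf;
  sp1 = memchr(p, ' ', end - p);                /* end of method */
  if (!sp1) return 0;
  sp2 = memchr(sp1 + 1, ' ', end - (sp1 + 1));  /* end of target */
  if (!sp2) return 0;
  crlf = memchr(sp2 + 1, '\r', end - (sp2 + 1));
  if (!crlf || crlf + 1 >= end || crlf[1] != '\n') return 0;
  r->method = p;        r->method_len = sp1 - p;
  r->target = sp1 + 1;  r->target_len = sp2 - (sp1 + 1);
  r->version = sp2 + 1; r->version_len = crlf - (sp2 + 1);
  return 1;
}
```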
The Joyent HTTP parser is very good, but it's implemented in a way that makes the problem much more complicated than it needs to be. The biggest obstacle with high-performance HTTP message parsing is the case-insensitive string comparison of header field names.
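The usual fast path for that case-insensitive comparison exploits the fact that ASCII letters differ from their lowercase forms only in bit 0x20, so folding is a branch-light mask rather than a locale-aware `strcasecmp`. A hedged sketch (not jart's actual code; `HeaderNameEquals` is a hypothetical helper comparing against a lowercase reference string):

```c
#include <stddef.h>

/* Compares a header field name of known length against a NUL-terminated
   lowercase reference. Only A-Z are folded; '-' and digits pass through
   untouched, which is all HTTP token characters require. */
static int HeaderNameEquals(const char *name, size_t len, const char *lower) {
  for (size_t i = 0; i < len; ++i) {
    char c = name[i];
    if (c >= 'A' && c <= 'Z') c |= 0x20;  /* fold ASCII to lowercase */
    if (c != lower[i]) return 0;
  }
  return lower[len] == '\0';  /* lengths must match too */
}
```

Production parsers often go further (perfect hashing of known field names, SIMD folding), but the core trick is this one mask.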
For clients, a browser-compatible HTTP/1 implementation is a whole other bag of problems.
For example, Content-Length isn't just a single header with an integer, like the spec says. You need to handle responses with multiple Content-Length headers, and Content-Length carrying a comma-separated list of lengths. Getting this wrong will make your client hang, consume garbage, or allow response stuffing.
https://github.com/web-platform-tests/wpt/pull/10548/files
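One defensive approach (the behavior RFC 9110 suggests and the WPT tests above probe) is to accept duplicated Content-Length values only when they all agree, and reject everything else. A sketch, assuming the caller has already gathered the values from every Content-Length header and split any comma-separated lists into trimmed strings:

```c
#include <stdlib.h>
#include <errno.h>

/* Returns 1 and sets *out if all n values parse as the same non-negative
   integer; returns 0 (reject the message) on any conflict or garbage. */
static int ParseContentLength(const char *values[], size_t n, long long *out) {
  long long len = -1;
  for (size_t i = 0; i < n; ++i) {
    char *end;
    errno = 0;
    long long v = strtoll(values[i], &end, 10);
    if (end == values[i] || *end != '\0' ||  /* empty or trailing junk */
        v < 0 || errno == ERANGE)            /* negative or overflow */
      return 0;
    if (len != -1 && v != len) return 0;     /* conflicting lengths */
    len = v;
  }
  if (len == -1) return 0;                   /* no value at all */
  *out = len;
  return 1;
}
```

Note that `strtoll` is more permissive than HTTP's grammar (it tolerates leading whitespace and a `+` sign), so a strict client would pre-validate that each value is pure digits.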
This problem does not exist in HTTP/2.