One of my blog articles got on the front page of HN a while ago: https://news.ycombinator.com/item?id=29185971
In total, I saw roughly 27'000 views, which meant just short of 8 GB of data transferred and, in my case, over 500'000 files requested (given all of the CSS files, images, JavaScript etc.).
Now, the blog held up fine, because it was based on Grav, which means it ends up being a bunch of flat files: https://getgrav.org/
It didn't go down, which is especially interesting when you consider that I had capped the container it was running in to 512 MB of RAM at most and 0.75 CPU cores, just so it wouldn't slow down the entire node (I can't really afford a separate server for it).
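For reference, caps like that can be set right on the container at startup; a minimal sketch, assuming Docker (the container name and image here are placeholders, not my actual setup):

```shell
# Limit a container to 512 MB of RAM and 0.75 CPU cores,
# so a traffic spike can't starve the rest of the node.
docker run -d \
  --name blog \
  --memory=512m \
  --cpus=0.75 \
  my-grav-image
```

The same limits can be expressed in a compose file or systemd unit; the point is just that the kernel enforces them, so the worst case is the blog getting slow, not the whole node.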
So in essence, I think that static files can be served really well with limited resources, but once you throw in complicated PHP apps (think WordPress), insufficient caching, database access (especially with sub-optimally written code), and perhaps even something like mod_php instead of PHP-FPM, things can indeed go wrong.
I've seen enterprise projects struggle at 100 requests per minute due to exceedingly poorly written data fetching: N+1 query problems, and developers either not knowing how to avoid issues like that or outright not caring, because the system was an internal one and the infrastructure had resources to waste.
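To illustrate what I mean by an N+1 problem, here's a minimal Python sketch (sqlite3 and the table names are just made up for the sake of a runnable example; the real-world version is usually hidden behind an ORM's lazy loading):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO posts VALUES (1, 1, 'Hello'), (2, 1, 'World'), (3, 2, 'Hi');
""")

# N+1: one query for the authors, then one more query per author.
# With 1000 authors that's 1001 round trips to the database.
def posts_by_author_n_plus_one(conn):
    result = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        titles = [t for (t,) in conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,))]
        result[name] = titles
    return result

# The fix: fetch everything in a single JOIN and group in memory.
def posts_by_author_joined(conn):
    result = {}
    rows = conn.execute("""
        SELECT a.name, p.title
        FROM authors a JOIN posts p ON p.author_id = a.id
    """)
    for name, title in rows:
        result.setdefault(name, []).append(title)
    return result

assert posts_by_author_n_plus_one(conn) == posts_by_author_joined(conn)
```

Both return the same data; the difference only shows up as query count and latency once the tables grow, which is exactly why it tends to go unnoticed on internal systems with small datasets.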