hashfs
Implementation of io/fs.FS that appends SHA256 hashes to filenames to allow for aggressive HTTP caching.
xtemplate
An html/template-based hypertext preprocessor and rapid application development web server written in Go. (by infogulch)
An approach like https://github.com/benbjohnson/hashfs allows file names to be content-hashed at runtime, which removes the need for the extra "304 Not Modified" round trips from the client. Content-hash-based renaming is usually done in a build step that rewrites the file names. For applications where static file serving and HTTP request handling happen in the same process, it can instead be done in memory, with no build step for the renames.
I am using that approach in my project https://github.com/claceio/clace. It removes the need for a build step while still allowing aggressive caching of static files.
I've also been dissatisfied with HTTP caching not making enough use of content hashes. If you're using server-side templating, one issue is that it's not efficient to calculate the hash while executing the template; it needs to be precalculated to be cheap enough to use.
So I wrote https://github.com/infogulch/xtemplate to scan all assets at startup and precalculate the hashes for templates that use them. If a request comes in with a query parameter like ?hash=sha384-xyz and it matches, the response automatically gets a one-year immutable Cache-Control header. If a file x.ext has a matching x.ext.gz/x.ext.zst/x.ext.br file, then (after hashing the content to make sure it matches) clients that advertise support are sent the compressed version, streamed directly from disk with sendfile(2). I call this "Optimal asset serving" (a bit bold, perhaps).