compression-dictionary-transport vs zstd-wasm

 | compression-dictionary-transport | zstd-wasm |
---|---|---|
Mentions | 7 | 1 |
Stars | 90 | 99 |
Growth | - | - |
Activity | 5.2 | 5.7 |
Last commit | 2 months ago | 6 days ago |
Language | TypeScript | - |
License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
compression-dictionary-transport
Compression efficiency with shared dictionaries in Chrome
> Dictionary entries (or at least the metadata) should be cleared any time cookies are cleared.
So it seems it should not get you anything you cannot already do with cookies.
https://github.com/WICG/compression-dictionary-transport?tab...
Chrome feature: Compression dictionary transport with Shared Brotli
Talked about here:
https://github.com/WICG/compression-dictionary-transport
- Compression Dictionary Transport
Improving compression with a preset DEFLATE dictionary (2015)
There's a spec up for custom dictionary support across the web. https://github.com/WICG/compression-dictionary-transport
This was, iirc, one of the major blockers Mozilla raised against zstd compression support: they said zstd with a single standardly accepted dictionary would be too particular & they wanted more. With this spec, maybe Mozilla will accept zstd as a web compression standard.
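The preset-dictionary idea the article and comment describe can be sketched with Python's standard zlib module, which has supported DEFLATE preset dictionaries via the `zdict` parameter since Python 3.3. The JSON payloads here are made-up illustration data, not from the original article:

```python
import zlib

# A shared "dictionary": bytes likely to recur in the payloads we compress.
# Here the dictionary is simply a previous version of the file.
dictionary = b'{"name": "app", "version": "1.0.0", "dependencies": {"left-pad": "^1.3.0"}}'

# New payload: nearly identical to the dictionary, as incremental updates often are.
payload = b'{"name": "app", "version": "1.0.1", "dependencies": {"left-pad": "^1.3.0"}}'

# Compress without a dictionary...
plain = zlib.compress(payload, 9)

# ...and with the preset dictionary.
co = zlib.compressobj(level=9, zdict=dictionary)
with_dict = co.compress(payload) + co.flush()

# Decompression must supply the exact same dictionary bytes.
do = zlib.decompressobj(zdict=dictionary)
restored = do.decompress(with_dict)

assert restored == payload
# The dictionary-compressed output should be much smaller than the plain one.
print(len(plain), len(with_dict))
```

This is the same trick at a smaller scale: both sides agree on shared bytes out of band, so the compressed stream only has to encode what differs from them.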
JavaScript import maps are now supported cross-browser
Hear, hear. Today, bundlers may get you to first page load faster. But if a user comes back and you've shipped two small fixes, all those extra wins you get from compressing a bunch of files at once fly out the window & you're deep in the red. If you have users who return to your site, and your site is actively developed, bundling is probably a bad tradeoff.
We see similar fixedness in the field all over the place: people freaking love small Docker images & will spend forever shaving them down. But my gosh, the number of engineers I've seen fixate on the total download size of an image & ignore everything else is vast. Same story, but server side: my interest is in the download size of v1.0.1 of the Docker container once we already have v1.0.0 shipped. Once we consider the ongoing experience, rather than just the easy-to-judge first-time metric, the pictures all look very different.
Then there's the other thing. The performance reasons for bundling are being eaten away. Preload & Early Hints are both here today, both offer really good tools to greatly streamline asset loading & claw back a lot of turf, and work hand-in-glove with import maps. The remaining advantage everyone points out is that a large bundle compresses better (but again at the cost of making incremental updates bad). The spec is still in progress, but compression-dictionary-transport could potentially obliterate that advantage, either making it a non-factor or perhaps even turning it into a disadvantage for large bundles (since one could use a set of dictionaries & discover which of a handful of dictionaries best compresses the code). These dictionaries would again be a first-load hit, but could then be used again and again, to great effect for incremental changes. https://github.com/WICG/compression-dictionary-transport
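The incremental-update flow the comment is excited about looks roughly like this at the HTTP level. The headers and content encodings below follow the WICG draft as it stood at the time of writing (details may change), and the URL paths and hash value are hypothetical placeholders:

```
# Response for /js/app-1.0.0.js: register it as a dictionary for matching URLs
Use-As-Dictionary: match="/js/app-*.js"

# Later request for /js/app-1.0.1.js: the browser advertises the stored
# dictionary by its SHA-256 hash
Available-Dictionary: :pZGm1Av0IEBKARczz7exkNYsZb8LzaMrV7J32a2fFG4=:
Accept-Encoding: gzip, br, zstd, dcb, dcz

# Response: delta-compressed against that dictionary
# (dcb = dictionary-compressed Brotli, dcz = dictionary-compressed Zstandard)
Content-Encoding: dcb
```

The effect is that returning users only download roughly the diff between v1.0.0 and v1.0.1, while first-time visitors fall back to ordinary compression.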
Bundles are such an ugly stain on the web, such an awful hack that betrays the web's better resourceful nature. Thankfully we're finally making real strides against this opaque awful blob we've foisted upon the world. And we can start to undo not just the ugliness, but the terrible performance pains we've created by bundling so much together.
zstd-wasm
Compression efficiency with shared dictionaries in Chrome
That would be an excellent web standard!!
There are wasm modules that do something similar, but having it baked into the browser could allow for further optimization than what's possible with wasm. https://github.com/bokuweb/zstd-wasm
I have no idea if it's possible, but I wonder if a WebGPU port could be made? Alternatively, for your use case, maybe you could try applying something like Basis Universal, a fast compression system for textures, for which there seem to be some WebGPU loaders... Maybe that could be bent to encoding/decoding text?
What are some alternatives?
download-esm - Download ESM modules from npm and jsdelivr
sciter-js-sdk
import-maps - How to control the behavior of JavaScript imports
quickjspp
webappsec-subresource-integrity - WebAppSec Subresource Integrity
simpatico - Simpatico is an umbrella term for several data-structures and algorithms written in JavaScript
proposal-type-annotations - ECMAScript proposal for type syntax that is erased - Stage 1
lit - Lit is a simple library for building fast, lightweight web components.
esm.sh - A fast, smart, & global CDN for modern(es2015+) web development.
JSLint - JSLint, The JavaScript Code Quality and Coverage Tool
esbuild - An extremely fast bundler for the web