zlib-ng vs brotli

| | zlib-ng | brotli |
|---|---|---|
| Mentions | 19 | 33 |
| Stars | 1,759 | 14,058 |
| Growth | 1.3% | 0.9% |
| Activity | 9.1 | 7.1 |
| Latest commit | 11 days ago | 20 days ago |
| Language | C | TypeScript |
| License | zlib License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zlib-ng
-
Zlib-rs is faster than C
I'm not sure why people say this about certain languages (it is sometimes said about Haskell, as well).
The code has a C style to it, but that doesn't mean it wasn't actually written in Rust -- Rust deliberately has features to support writing this kind of code, in concert with safer, stricter code. This isn't bad, it's good. Imagine if we applied this standard to C code. "Zlib-NG is basically written in assembler, not C..." https://github.com/zlib-ng/zlib-ng/blob/50e9ca06e29867a9014e...
- zlib-ng: zlib replacement with optimizations for "next generation" systems
-
Fast-PNG: PNG image decoder and encoder
Looks like it depends on https://github.com/nodeca/pako for the zlib compression.
> Almost as fast in modern JS engines as C implementation (see benchmarks).
Impressive, although zlib itself is no longer the bar to beat; I think that title goes to https://github.com/zlib-ng/zlib-ng these days.
-
Discord Reduced WebSocket Traffic by 40%
For what it’s worth, the benchmark on the Zstandard homepage[1] shows none of the compressors tested breaking 1GB/s on compression, and only the fastest and sloppiest ones breaking 1GB/s on decompression. If you’re OK with its API limitations, libdeflate[2] is known to squeeze past 1GB/s decompressing normal Deflate compression levels. So asking for multiple GB/s is probably unfair.
Still, 10MB/s sounds like the absolute minimum reasonable speed, and they’re reporting nearly three orders of magnitude below that. A modern compressor does not run at bad dialup speeds; something in there is absolutely murdering the performance.
And it might just be the constant-time overhead, as far as I can see. The article mentions “a few hundred bytes” of payload, and the discussion of measurements implies 1.5KB uncompressed. Even though they don’t reinitialize the compressor on each message, that is still a very, very modest amount of data.
So it might be that general-purpose compressors are just a bad tool here from a performance standpoint. I’m not aware of a good tool for this kind of application, though. (A sketch of the long-lived streaming setup follows the footnotes.)
[1] https://facebook.github.io/zstd/#benchmarks
[2] https://github.com/zlib-ng/zlib-ng/issues/1486
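To make the fixed-overhead point concrete, here is a minimal C sketch of the long-lived streaming approach, assuming libzstd is available; the message contents and buffer sizes are made up for illustration. One compression context is created per connection and reused for every message, so setup cost is paid once and later messages can back-reference earlier ones:

```c
#include <stdio.h>
#include <string.h>
#include <zstd.h>

/* One long-lived context: created once per connection and reused for
 * every message, so the setup cost is amortized across the stream. */
static ZSTD_CCtx *cctx;

/* Compress one small payload; returns bytes written to dst, or 0 on error. */
static size_t compress_message(const char *msg, size_t msg_len,
                               char *dst, size_t dst_cap)
{
    ZSTD_inBuffer in = { msg, msg_len, 0 };
    ZSTD_outBuffer out = { dst, dst_cap, 0 };
    /* ZSTD_e_flush emits all buffered data for this message while keeping
     * the compression history, so later messages can reference earlier
     * ones -- the cross-message wins streaming compression relies on. */
    size_t rc = ZSTD_compressStream2(cctx, &out, &in, ZSTD_e_flush);
    return ZSTD_isError(rc) ? 0 : out.pos;
}

int main(void)
{
    cctx = ZSTD_createCCtx();
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 3);

    char dst[1024];
    const char *msg = "{\"op\":0,\"d\":{\"status\":\"online\"}}"; /* illustrative payload */
    size_t n = compress_message(msg, strlen(msg), dst, sizeof dst);
    printf("%zu -> %zu bytes\n", strlen(msg), n);

    ZSTD_freeCCtx(cctx);
    return 0;
}
```

Even in this setup, a payload this small leaves little for the compressor to work with, which is consistent with the overhead-dominated numbers discussed above.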
-
Show HN: Pzip- blazing fast concurrent zip archiver and extractor
Please note that allowing a ~2% larger resulting file can mean a huge speedup in these circumstances, even with the same compression routines; see these benchmarks of zlib and zlib-ng at different compression levels:
https://github.com/zlib-ng/zlib-ng/discussions/871
IMO, a fair comparison of the real speed improvement brought by a new program is only possible between nearly identical resulting compressed sizes.
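To see that knob in isolation, here is a small C sketch against the plain zlib API that compresses the same buffer at levels 1, 6, and 9 and prints the resulting sizes (the demo input is synthetic, so real corpora will show different gaps):

```c
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

/* Compress len bytes of src at the given zlib level and report the
 * resulting size; illustrates the size-vs-speed tradeoff. */
static void try_level(const unsigned char *src, uLong len, int level)
{
    uLongf dst_len = compressBound(len);
    unsigned char *dst = malloc(dst_len);
    if (dst && compress2(dst, &dst_len, src, len, level) == Z_OK)
        printf("level %d: %lu -> %lu bytes\n",
               level, (unsigned long)len, (unsigned long)dst_len);
    free(dst);
}

int main(void)
{
    /* Highly repetitive demo input; real data compresses less neatly. */
    static unsigned char buf[1 << 20];
    for (size_t i = 0; i < sizeof buf; i++)
        buf[i] = "abcabcabd"[i % 9];

    try_level(buf, sizeof buf, 1);  /* fastest, largest output  */
    try_level(buf, sizeof buf, 6);  /* zlib's default           */
    try_level(buf, sizeof buf, 9);  /* slowest, smallest output */
    return 0;
}
```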
- Intel QuickAssist Technology Zstandard Plugin for Zstandard
-
Introducing zune-inflate: The fastest Rust implementation of gzip/Zlib/DEFLATE
It is much faster than miniz_oxide and all other safe-Rust implementations, and consistently beats even zlib. Performance is roughly on par with zlib-ng: sometimes faster, sometimes slower. It is not (yet) as fast as the original libdeflate in C.
-
Zlib Critical Vulnerability
zlib-ng doesn't contain the same code, but it appears that its equivalent inflate(), when used with its inflateGetHeader() implementation, was affected by a similar problem: https://github.com/zlib-ng/zlib-ng/pull/1328
Also similarly, most client code will be unaffected, because `state->head` will be NULL: most client code won't have used inflateGetHeader() at all.
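For readers who haven't seen it, here is a minimal sketch of the zlib-style API usage in question; registering a gz_header via inflateGetHeader() is what makes `state->head` non-NULL and put callers on the affected path (buffer sizes here are arbitrary):

```c
#include <string.h>
#include <zlib.h>

int main(void)
{
    z_stream strm;
    memset(&strm, 0, sizeof strm);
    /* windowBits 15 + 16 selects gzip decoding, which is required
     * before inflateGetHeader() can be used. */
    if (inflateInit2(&strm, 15 + 16) != Z_OK)
        return 1;

    /* Registering a header struct is what makes state->head non-NULL;
     * callers that skip this step were unaffected. */
    gz_header head;
    unsigned char name[256], comment[256], extra[256];
    memset(&head, 0, sizeof head);
    head.name = name;       head.name_max  = sizeof name;
    head.comment = comment; head.comm_max  = sizeof comment;
    head.extra = extra;     head.extra_max = sizeof extra;
    inflateGetHeader(&strm, &head);

    /* ... feed gzip data via strm.next_in/avail_in and inflate() ... */

    inflateEnd(&strm);
    return 0;
}
```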
-
Git’s database internals II: commit history queries
I wonder if zlib-ng would make a difference, since it has a lot of optimizations for modern hardware.
https://github.com/zlib-ng/zlib-ng/discussions/871
-
Computing Adler32 Checksums at 41 GB/s
zlib-ng also has adler32 implementations optimized for various architectures: https://github.com/zlib-ng/zlib-ng
Might be interesting to benchmark their implementation too to see how it compares.
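For reference, this is the plain scalar Adler-32 loop that those optimized implementations accelerate; a sketch per RFC 1950, not zlib-ng's actual code:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define ADLER_MOD 65521u  /* largest prime below 2^16, per RFC 1950 */

/* Byte-at-a-time Adler-32; the SIMD versions keep the same two running
 * sums but process a wide vector of bytes per iteration and defer the
 * modulo reduction. */
static uint32_t adler32_scalar(const uint8_t *data, size_t len)
{
    uint32_t a = 1, b = 0;
    for (size_t i = 0; i < len; i++) {
        a = (a + data[i]) % ADLER_MOD;
        b = (b + a) % ADLER_MOD;
    }
    return (b << 16) | a;
}

int main(void)
{
    const char *s = "Wikipedia";
    /* Known test vector: Adler-32("Wikipedia") == 0x11E60398 */
    printf("%08X\n", adler32_scalar((const uint8_t *)s, strlen(s)));
    return 0;
}
```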
brotli
-
Dealing With Web Fonts
The modern web font format is WOFF (Web Open Font Format), with ~97% browser support. Version 2 uses Brotli compression and is ~20-50% more efficient.
-
A Career Ending Mistake
Projects like Brotli aren't built to maximize personal profit; they're driven by passion and a genuine love for software engineering.
It's clear that the industry is shifting from being geeky and nerdy to being more business and management focused.
[0] https://github.com/google/brotli
-
Building an Efficient Text Compression Algorithm Inspired by Silicon Valley’s Pied Piper
Brotli is a compression algorithm developed by Google, particularly effective for text and web compression. It uses a combination of LZ77 (Lempel-Ziv 77), Huffman coding, and 2nd order context modeling. In comparison to traditional algorithms like Gzip, Brotli can achieve smaller compressed sizes, especially for HTML and text-heavy content. This makes it a good candidate for our Pied Piper-inspired text compression implementation.
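As a concrete starting point, here is a minimal sketch using the official Brotli C library's one-shot encoder (assuming libbrotli is installed; the sample text and buffer size are illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <brotli/encode.h>

int main(void)
{
    const char *text = "Middle-out is fiction; Brotli is not. Brotli is not fiction.";
    size_t in_len = strlen(text);

    uint8_t out[1024];
    size_t out_len = sizeof out;

    /* Quality ranges 0-11; BROTLI_MODE_TEXT selects the text-oriented
     * modeling, and the built-in dictionary helps on web/text content. */
    if (BrotliEncoderCompress(BROTLI_DEFAULT_QUALITY, BROTLI_DEFAULT_WINDOW,
                              BROTLI_MODE_TEXT, in_len,
                              (const uint8_t *)text, &out_len, out)) {
        printf("%zu -> %zu bytes\n", in_len, out_len);
    }
    return 0;
}
```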
-
Compression Dictionary Transport
The one example I can think of with a pre-seeded dictionary (for web, no less) is Brotli.
https://datatracker.ietf.org/doc/html/rfc7932#appendix-A
You can more or less see what it looks like (per an older commit): https://github.com/google/brotli/blob/5692e422da6af1e991f918...
Certainly it performs better than gzip by itself.
Some historical discussion: https://news.ycombinator.com/item?id=19678985
-
WebP: The WebPage Compression Format
I believe the compression dictionary refers to [1], which is used to quickly match dictionary-compressible byte sequences. I don't know where 170 KB comes from, but that hash alone takes 128 KiB and might be significant if it can't be easily recomputed. But I'm sure it could be computed quickly at load time if the binary size is that important.
[1] https://github.com/google/brotli/blob/master/c/enc/dictionar...
-
Current problems and mistakes of web scraping in Python and tricks to solve them!
The answer lies in the Accept-Encoding header. In the example above, I just copied it from my browser, so it lists all the compression methods my browser supports: "gzip, deflate, br, zstd". The Wayfair backend supports compression with "br", which is Brotli, and uses it as the most efficient method.
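As an illustration, here is a minimal libcurl sketch (the URL is a placeholder) that advertises those same encodings; libcurl decompresses the response transparently for whichever encodings it was built with:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Placeholder URL for illustration. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* Sends the Accept-Encoding header and auto-decompresses the response,
     * provided libcurl was built with gzip/brotli/zstd support. */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "gzip, deflate, br, zstd");

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```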
-
LZW and GIF explained
...though with the slightly unexpected side effect (for Brotli, at least) that your executable may end up containing ~200KB (from memory) of very unexpected plain-text strings, which might (& has[0]) lead to questions from software end-users asking why your software contains "random"[1] text (including potentially "culturally sensitive" words/phrases related to religion such as "Holy Roman Emperor", "Muslims", "dollars", "emacs"[2] or similar).
(I encountered this aspect while investigating potential size optimization opportunities for the Godot game engine's web/WASM builds--though presumably the Brotli dictionary compresses well if the transfer encoding is... Brotli. :D )
[0] "This needs to be reviewed immediately #876": https://github.com/google/brotli/issues/876
[1] Which, regardless of meaning, certainly bears similarities to the type of "unexpected weird text" commonly/normally associated with spam, malware, LLMs and other entities of ill repute.
[2] The final example may not actually be factual. :)
-
Node.js vs Angular: Navigating the Modern Web Development Landscape
Using tools like Brotli, you can cut your application’s load time. You can use the ngUpgrade library to mix AngularJS and Angular components in hybrid applications, and pair this with techniques like ahead-of-time (AOT) compilation for faster browser rendering.
-
Jpegli: A New JPEG Coding Library
JPEGLI = A small JPEG
The suffix -li is used in Swiss German dialects. It forms a diminutive of the root word, by adding -li to the end of the root word to convey the smallness of the object and to convey a sense of intimacy or endearment.
This obviously comes out of Google Zürich.
Other notable Google projects using Swiss German:
https://github.com/google/gipfeli high-speed compression
Gipfeli = Croissant
https://github.com/google/guetzli perceptual JPEG encoder
Guetzli = Cookie
https://github.com/weggli-rs/weggli semantic search tool
Weggli = Bread roll
https://github.com/google/brotli lossless compression
Brötli = Small bread
-
Compression efficiency with shared dictionaries in Chrome
The brotli repo on github has a dictionary generator: https://github.com/google/brotli/blob/master/research/dictio...
I have a hosted version of it on https://use-as-dictionary.com/ to make it easier to experiment with.
What are some alternatives?
zlib - A massively spiffy yet delicately unobtrusive compression library.
zstd - Zstandard - Fast real-time compression algorithm
LZ4 - Extremely Fast Compression algorithm
libdeflate - Heavily optimized library for DEFLATE/zlib/gzip compression and decompression
LZMA - (Unofficial) Git mirror of LZMA SDK releases