|  | brotli | LZ4 |
|---|---|---|
| Mentions | 32 | 24 |
| Stars | 13,639 | 10,487 |
| Growth | 0.6% | 1.2% |
| Activity | 7.4 | 9.3 |
| Last commit | 4 days ago | 4 days ago |
| Language | TypeScript | C |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
brotli
-
A Career Ending Mistake
Projects like Brotli aren't built to maximize personal profit; they're driven by passion and a genuine love for software engineering.
It's clear that the industry is shifting from being geeky and nerdy to being more business and management focused.
[0] https://github.com/google/brotli
-
Building an Efficient Text Compression Algorithm Inspired by Silicon Valley’s Pied Piper
Brotli is a compression algorithm developed by Google, particularly effective for text and web compression. It uses a combination of LZ77 (Lempel-Ziv 77), Huffman coding, and 2nd order context modeling. In comparison to traditional algorithms like Gzip, Brotli can achieve smaller compressed sizes, especially for HTML and text-heavy content. This makes it a good candidate for our Pied Piper-inspired text compression implementation.
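As a rough illustration of that size difference, here is a minimal sketch comparing Brotli and gzip output on repetitive HTML. It assumes the `brotli` PyPI bindings (`pip install brotli`); gzip ships with the standard library, and the sample markup is made up for the demo.

```python
# Comparing Brotli and gzip output sizes on text-heavy input.
import brotli
import gzip

html = (b"<html><head><title>Pied Piper</title></head>"
        b"<body><p>Middle-out compression!</p></body></html>") * 100

br = brotli.compress(html, quality=11)    # max quality, slowest setting
gz = gzip.compress(html, compresslevel=9)

print(f"original: {len(html):6d} bytes")
print(f"brotli:   {len(br):6d} bytes")
print(f"gzip:     {len(gz):6d} bytes")

assert brotli.decompress(br) == html      # round-trip sanity check
```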
-
Compression Dictionary Transport
The one example I can think of with a pre-seeded dictionary (for web, no less) is Brotli.
https://datatracker.ietf.org/doc/html/rfc7932#appendix-A
You can more or less see what it looks like (per an older commit): https://github.com/google/brotli/blob/5692e422da6af1e991f918...
Certainly it performs better than gzip by itself.
Some historical discussion: https://news.ycombinator.com/item?id=19678985
-
WebP: The WebPage Compression Format
I believe the compression dictionary refers to [1], which is used to quickly match dictionary-compressible byte sequences. I don't know where 170 KB comes from, but that hash alone takes 128 KiB and might be significant if it can't be easily recomputed. But I'm sure it could be computed quickly at load time if binary size matters that much.
[1] https://github.com/google/brotli/blob/master/c/enc/dictionar...
-
Current problems and mistakes of web scraping in Python and tricks to solve them!
The answer lies in the Accept-Encoding header. In the example above, I just copied it from my browser, so it lists all the compression methods my browser supports: "gzip, deflate, br, zstd". The Wayfair backend supports compression with "br", which is Brotli, and uses it as the most efficient method.
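For illustration, a small sketch of what that looks like with `requests`. Note the URL here is a placeholder, not the site from the example, and `requests`/urllib3 only decode a "br" body transparently when the `brotli` (or `brotlicffi`) package is installed.

```python
# Advertising Brotli support when scraping; the server picks one of the
# encodings listed in Accept-Encoding and reports it in Content-Encoding.
import requests

resp = requests.get(
    "https://www.example.com/",
    headers={"Accept-Encoding": "gzip, deflate, br, zstd"},
)

print(resp.headers.get("Content-Encoding"))   # e.g. "br" if Brotli was chosen
print(len(resp.content), "bytes after transparent decoding")
```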
-
LZW and GIF explained
...though with the slightly unexpected side effect (for Brotli, at least) that your executable may end up containing ~200KB (from memory) of very unexpected plain-text strings, which might (and has[0]) lead to questions from software end-users asking why your software contains "random"[1] text (including potentially "culturally sensitive" words/phrases related to religion such as "Holy Roman Emperor", "Muslims", "dollars", "emacs"[2] or similar).
(I encountered this aspect while investigating potential size optimization opportunities for the Godot game engine's web/WASM builds--though presumably the Brotli dictionary compresses well if the transfer encoding is... Brotli. :D )
[0] "This needs to be reviewed immediately #876": https://github.com/google/brotli/issues/876
[1] Which, regardless of meaning, certainly bears similarities to the type of "unexpected weird text" commonly/normally associated with spam, malware, LLMs and other entities of ill repute.
[2] The final example may not actually be factual. :)
-
Node.js vs Angular: Navigating the Modern Web Development Landscape
Using tools like Brotli to compress the assets you serve, you can cut transfer sizes and boost your application's load time. You can also use the ngUpgrade library to mix AngularJS and Angular components in a hybrid application, and pair that with techniques like ahead-of-time (AOT) compilation for faster browser rendering.
-
Jpegli: A New JPEG Coding Library
JPEGLI = A small JPEG
The suffix -li is used in Swiss German dialects. It forms a diminutive of the root word, by adding -li to the end of the root word to convey the smallness of the object and to convey a sense of intimacy or endearment.
This obviously comes out of Google Zürich.
Other notable Google projects using Swiss German:
https://github.com/google/gipfeli high-speed compression
Gipfeli = Croissant
https://github.com/google/guetzli perceptual JPEG encoder
Guetzli = Cookie
https://github.com/weggli-rs/weggli semantic search tool
Weggli = Bread roll
https://github.com/google/brotli lossless compression
Brötli = Small bread
-
Compression efficiency with shared dictionaries in Chrome
The brotli repo on github has a dictionary generator: https://github.com/google/brotli/blob/master/research/dictio...
I have a hosted version of it on https://use-as-dictionary.com/ to make it easier to experiment with.
-
The Full-Stack development experience
An additional element that we can finally remove from our stack is the minification of JavaScript and CSS files. Thanks to algorithms like brotli (with a very Swiss flavour) we no longer need to minify and compress our files before distributing them. Cloudflare, Nginx, or Apache will take care of everything for us.
LZ4
- LZ4 v1.10.0 – Multicores Edition
-
Number sizes for LZ77 compression
LZ4 is a bit more complicated, but seems faster: https://github.com/lz4/lz4/blob/dev/doc/lz4_Block_format.md
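To give a feel for the format, here is a toy decoder for a raw LZ4 block that follows the sequence layout in the linked doc (token nibbles, 255-valued length-extension bytes, 2-byte little-endian offset). It is a sketch for illustration only and skips the bounds and offset validation a real decoder needs.

```python
# Toy decoder for a raw LZ4 block (illustration only: no bounds checks,
# no offset validation). Follows the sequence layout from the format doc.
def lz4_block_decompress(src: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(src):
        token = src[i]; i += 1

        # High nibble: literal length; 15 means "read 255-extension bytes".
        lit_len = token >> 4
        if lit_len == 15:
            while True:
                b = src[i]; i += 1
                lit_len += b
                if b != 255:
                    break
        out += src[i:i + lit_len]
        i += lit_len

        if i >= len(src):      # the last sequence is literals-only
            break

        # 2-byte little-endian offset back into the already-decoded output.
        offset = src[i] | (src[i + 1] << 8)
        i += 2

        # Low nibble: match length minus the 4-byte minimum match.
        match_len = (token & 0x0F) + 4
        if match_len == 15 + 4:
            while True:
                b = src[i]; i += 1
                match_len += b
                if b != 255:
                    break

        for _ in range(match_len):   # byte-by-byte so overlapping matches work
            out.append(out[-offset])
    return bytes(out)
```

As a sanity check, `lz4.block.compress(data, store_size=False)` from the python-lz4 package produces a raw block you can feed to it.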
-
Rsyncing 20TB locally
According to the benchmark values at https://github.com/lz4/lz4 you need around ten (10) quite modern cores in parallel to accomplish around 8GB/s: the repo's benchmark table puts single-core compression on the order of 0.8 GB/s.
-
An Intro to Data Compression
The popular NoSQL database Cassandra utilizes a compression algorithm called LZ4 to reduce the footprint of data at rest. LZ4 is characterized by very fast compression speed at the cost of a lower compression ratio. This is a design choice that allows Cassandra to maintain high write throughput while still benefiting from compression in some capacity.
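A small sketch of that trade-off, using the python-lz4 bindings (`pip install lz4`) with stdlib zlib standing in as a slower, tighter codec; the payload is made up and timings will vary by machine.

```python
# Sketch of LZ4's speed-for-ratio trade-off against zlib.
import time
import zlib
import lz4.frame

data = b"some moderately repetitive payload for the demo " * 20000

for name, compress in [
    ("lz4",  lz4.frame.compress),
    ("zlib", lambda d: zlib.compress(d, 6)),
]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(out)
    print(f"{name:4s}: {len(out):8d} bytes, "
          f"ratio {ratio:5.1f}x, {elapsed * 1000:6.1f} ms")
```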
-
Micron Unveils 24GB and 48GB DDR5 Memory Modules | AMD EXPO and Intel XMP 3.0 compatible
Yeah, sure, when you have monster core counts. On regular systems, not so much; here's from their own GitHub page: it achieves, eh, 5 GB/s on memory-to-memory transfers, i.e. the best-case scenario. So, uh, no? I'm not even sure it's any better than the CPU decompressor Nvidia used.
- Cerbios Xbox Bios V2.2.0 BETA Released (1.0 - 1.6)
-
zstd
> The downside of lz4 is that it can’t be configured to run at higher & slower compression ratios.
lz4 has some level of configurability? https://github.com/lz4/lz4/blob/v1.9.4/lib/lz4frame.h#L194
There's also LZ4_HC.
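A quick sketch of that configurability via the python-lz4 bindings (`pip install lz4`), which expose a frame-level compression level (higher levels switch to the HC match finder) as well as LZ4_HC directly through the block API; the payload is made up for the demo.

```python
# Frame-level compression levels and LZ4_HC via python-lz4.
import lz4.frame
import lz4.block

data = b"configurability demo, repeated text " * 5000

fast = lz4.frame.compress(data, compression_level=0)   # default fast mode
hc = lz4.frame.compress(data,
                        compression_level=lz4.frame.COMPRESSIONLEVEL_MAX)
print(len(fast), len(hc))   # HC trades compression speed for smaller output

# The block API exposes LZ4_HC directly:
blk = lz4.block.compress(data, mode="high_compression", compression=9)
print(len(blk))
```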
-
Best archival/compression format for whole hard drives
Since nobody mentioned it, I'll add lz4 (https://github.com/lz4/lz4).
What are some alternatives?
Snappy - A fast compressor/decompressor
zstd - Zstandard - Fast real-time compression algorithm
LZMA - (Unofficial) Git mirror of LZMA SDK releases
ZLib - A massively spiffy yet delicately unobtrusive compression library.
zlib-ng - zlib replacement with optimizations for "next generation" systems.
LZFSE - LZFSE compression library and command line tool
haproxy - HAProxy Load Balancer's development branch (mirror of git.haproxy.org)
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard