TurboBench vs pixz

| | TurboBench | pixz |
|---|---|---|
| Mentions | 10 | 8 |
| Stars | 312 | 684 |
| Growth | - | - |
| Activity | 8.9 | 4.8 |
| Latest commit | 9 months ago | about 2 months ago |
| Language | C | C |
| License | - | BSD 2-Clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TurboBench
-
Zstd Content-Encoding planned to ship with Chrome 123
I'm still unconvinced about this addition. And I don't even dislike Zstandard.
The main motivation seems to be that while Zstandard is worse than Brotli at the highest level, it's substantially faster than Brotli when data has to be compressed on the fly with a limited computation budget. That may be true, but I have yet to see any concrete or even anecdotal evidence, even in the issue tracker [1], while there are benchmarks showing that both Zstandard and Brotli are fast enough for web usage even at lower levels [2].
According to their FAQ [3], Meta and Akamai have successfully used Zstandard on their internal networks, but my gut feeling is that they never actually tried to optimize Brotli instead. In fact, Meta employs the main author of Zstandard, so it would have been easier for them to tune Zstandard than Brotli. While Brotli does differ from Zstandard in some fundamental ways (in particular, Brotli doesn't use arithmetic-equivalent coding), no one has concretely demonstrated that those differences would prevent Brotli from being fast enough for dynamic content, in my opinion.
[1] https://issues.chromium.org/issues/40196713
[2] https://github.com/powturbo/TurboBench/issues/43
[3] https://docs.google.com/document/d/14dbzMpsYPfkefAJos124uPrl...
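To make the speed/ratio trade-off discussed above concrete, here is a minimal sketch (mine, not from the thread; the input data, levels, and build flags are illustrative assumptions) that one-shot compresses the same buffer with libzstd and libbrotli and reports size and time. A real comparison should use TurboBench on representative web content.

```c
/* Minimal sketch, assuming libzstd and libbrotli are installed.
 * Typical build: cc zb.c -lzstd -lbrotlienc -lbrotlicommon */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zstd.h>
#include <brotli/encode.h>

int main(void) {
    /* Hypothetical input: 1 MiB of repetitive, text-like data. */
    size_t in_size = 1 << 20;
    uint8_t *in = malloc(in_size);
    for (size_t i = 0; i < in_size; i++)
        in[i] = "the quick brown fox "[i % 20];

    /* Zstandard at level 3 (its default). */
    size_t zcap = ZSTD_compressBound(in_size);
    void *zout = malloc(zcap);
    clock_t t0 = clock();
    size_t zsize = ZSTD_compress(zout, zcap, in, in_size, 3);
    double zms = 1000.0 * (clock() - t0) / CLOCKS_PER_SEC;
    if (ZSTD_isError(zsize)) return 1;

    /* Brotli at quality 5, a level often suggested for dynamic content. */
    size_t bsize = BrotliEncoderMaxCompressedSize(in_size);
    uint8_t *bout = malloc(bsize);
    t0 = clock();
    BROTLI_BOOL ok = BrotliEncoderCompress(5, BROTLI_DEFAULT_WINDOW,
                                           BROTLI_MODE_TEXT,
                                           in_size, in, &bsize, bout);
    double bms = 1000.0 * (clock() - t0) / CLOCKS_PER_SEC;
    if (!ok) return 1;

    printf("zstd -3    : %zu bytes, %.1f ms\n", zsize, zms);
    printf("brotli -q5 : %zu bytes, %.1f ms\n", bsize, bms);
    free(in); free(zout); free(bout);
    return 0;
}
```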
- TurboBench: Dynamic/Static web content compression benchmark
-
Ebiggers/libdeflate: Heavily optimized DEFLATE/zlib/gzip library
libdeflate compresses better and decompresses faster than igzip.
See the Silesia single-core in-memory benchmark here [1], comparing zlib, libdeflate, igzip, and others.
[1] https://github.com/powturbo/TurboBench/issues/4
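As a quick look at what using libdeflate involves, here is a minimal round-trip sketch against its whole-buffer gzip API (mine, not from the linked issue; level 6 and the sample input are arbitrary assumptions):

```c
/* Minimal sketch, assuming libdeflate is installed.
 * Typical build: cc demo.c -ldeflate */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libdeflate.h>

int main(void) {
    const char *msg = "hello hello hello hello hello hello hello hello";
    size_t in_size = strlen(msg);

    /* Compress at level 6 (libdeflate supports levels 1-12). */
    struct libdeflate_compressor *c = libdeflate_alloc_compressor(6);
    size_t cap = libdeflate_gzip_compress_bound(c, in_size);
    void *comp = malloc(cap);
    size_t csize = libdeflate_gzip_compress(c, msg, in_size, comp, cap);
    if (csize == 0) return 1;   /* 0 means the output didn't fit */

    /* Decompress and verify the round trip. */
    struct libdeflate_decompressor *d = libdeflate_alloc_decompressor();
    char *back = malloc(in_size);
    size_t actual;
    if (libdeflate_gzip_decompress(d, comp, csize, back, in_size,
                                   &actual) != LIBDEFLATE_SUCCESS)
        return 1;
    printf("%zu -> %zu bytes, round-trip %s\n", in_size, csize,
           memcmp(msg, back, in_size) == 0 ? "ok" : "MISMATCH");

    libdeflate_free_compressor(c);
    libdeflate_free_decompressor(d);
    free(comp); free(back);
    return 0;
}
```

Note that libdeflate deliberately has no streaming interface; the whole input and output live in memory, which is part of why it can be so fast.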
-
Intel QuickAssist Technology Zstandard Plugin for Zstandard
- https://github.com/powturbo/TurboBench/issues/43
[1] https://github.com/powturbo/TurboBench
-
Variation on RLE to Achieve Lossless Compression for Tabular Data
Compressing your sample file, we get 823 bytes with brotli.
Download TurboBench [1] and run your own tests:
[1] https://github.com/powturbo/TurboBench
-
Data Compression Drives the Internet. Here’s How It Works
- igzip [1][2] is best for very fast networks (>10 MB/s)
- brotli brings little value at decompression for users
https://github.com/powturbo/TurboBench
[1] https://sites.google.com/site/powturbo/home/web-compression
[2] https://encode.su/threads/2333-TurboBench-Back-to-the-future...
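For context on what igzip usage looks like, here is a minimal one-shot compression sketch against ISA-L's igzip API (an assumption-laden example of mine, not from the article; the header path, level-buffer sizing, and build flags may differ by installation):

```c
/* Minimal sketch, assuming Intel ISA-L is installed.
 * Typical build: cc ig.c -lisal */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <isa-l/igzip_lib.h>

int main(void) {
    static uint8_t in[1 << 16], out[1 << 16];
    memset(in, 'a', sizeof in);      /* hypothetical, highly compressible input */

    struct isal_zstream s;
    isal_deflate_init(&s);
    s.level = 1;                     /* igzip favors speed; only levels 0-3 exist */
    s.level_buf_size = ISAL_DEF_LVL1_DEFAULT;
    s.level_buf = malloc(s.level_buf_size);
    s.gzip_flag = IGZIP_GZIP;        /* emit a gzip header and trailer */
    s.next_in = in;
    s.avail_in = sizeof in;
    s.end_of_stream = 1;             /* this buffer is the entire input */
    s.flush = NO_FLUSH;
    s.next_out = out;
    s.avail_out = sizeof out;

    if (isal_deflate(&s) != COMP_OK) return 1;
    printf("igzip level 1: %u -> %u bytes\n", s.total_in, s.total_out);
    free(s.level_buf);
    return 0;
}
```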
-
Pigz: Parallel gzip for modern multi-processor, multi-core machines
Build or download TurboBench [1] executables for Linux and Windows from the releases page [2] and run your own tests comparing Oodle, zstd, and other compressors.
[1] https://github.com/powturbo/TurboBench
[2] https://github.com/powturbo/TurboBench/releases
pixz
- pixz: Parallel, Indexed xz Compressor
-
Pigz: Parallel gzip for modern multi-processor, multi-core machines
That's really confusing, since `pixz` exists and its "pixie" pronunciation actually works.
https://github.com/vasi/pixz
-
Xz format considered inadequate for long-term archiving
pixz (https://github.com/vasi/pixz) is a nice parallel xz that additionally creates an index of tar files so you can decompress individual files. I wonder if dpkg could be extended to do something similar.
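What pixz parallelizes is, at bottom, liblzma's block-based .xz encoding; independently compressed blocks are also what make per-file random access possible once an index exists. Here is a minimal sketch of liblzma's multi-threaded encoder (my illustrative buffers and settings, not pixz's actual code):

```c
/* Minimal sketch, assuming liblzma >= 5.2 is installed.
 * Typical build: cc mtxz.c -llzma */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <lzma.h>

int main(void) {
    static char in[1 << 20];            /* pretend this is tar data */
    static uint8_t out[1 << 21];
    memset(in, 'x', sizeof in);

    lzma_mt mt = {
        .threads = 4,                   /* encode blocks in parallel, like pixz */
        .preset = 6,
        .check = LZMA_CHECK_CRC64,
        .block_size = 0,                /* 0 = let liblzma choose */
    };
    lzma_stream strm = LZMA_STREAM_INIT;
    if (lzma_stream_encoder_mt(&strm, &mt) != LZMA_OK) return 1;

    strm.next_in = (const uint8_t *)in;
    strm.avail_in = sizeof in;
    strm.next_out = out;
    strm.avail_out = sizeof out;
    lzma_ret r;
    while ((r = lzma_code(&strm, LZMA_FINISH)) == LZMA_OK)
        ;                               /* single buffer: run until stream end */
    if (r != LZMA_STREAM_END) { lzma_end(&strm); return 1; }

    printf("compressed %zu -> %zu bytes\n",
           sizeof in, sizeof out - strm.avail_out);
    lzma_end(&strm);
    return 0;
}
```

On top of this, pixz writes its own index of tar member offsets, which is what lets it list and extract individual files later.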
-
The best datahoarding hint that changed my life: use RAR archives (or any other archive format, really)
There's pixz, which indexes the tarball, allowing listing/extracting individual paths without decompressing the whole thing.
-
Hop: 25x faster than unzip and 10x faster than tar at reading individual files
Also relevant is pixz [1] which can do parallel LZMA/XZ decompression as well as tar file indexing.
[1] https://github.com/vasi/pixz
-
7-Zip 21.0 alpha introduces native Linux support
Yes, it's as easy as installing pixz with symlinks pointing to xz (I think Debian even does this automatically as part of its post-installation scripts).
-
C Deep
pixz - Parallel, indexed xz compressor. BSD-2-Clause
-
PeaZip 7.7.1 released!
Not quite what you're asking, but if you're a 7-Zip fan and on Linux, you might be interested in pixz.
What are some alternatives?
QAT-ZSTD-Plugin - Intel QuickAssist Technology Zstandard Plugin for Zstandard
p7zip - A new p7zip fork with additional codecs and improvements (forked from https://sourceforge.net/projects/sevenzip/ and https://sourceforge.net/projects/p7zip/).
rapidgzip - Gzip Decompression and Random Access for Modern Multi-Core Machines
notepadqq - A simple, general-purpose editor for Linux
libdeflate - Heavily optimized library for DEFLATE/zlib/gzip compression and decompression
ratarmount - Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives
pigz - A parallel implementation of gzip for modern multi-processor, multi-core machines.
asar - Simple extensive tar-like archive format with indexing
lib842
libarchive - Multi-format archive and compression library
DirectStorage - DirectStorage for Windows is an API that allows game developers to unlock the full potential of high speed NVMe drives for loading game assets.
precomp-cpp - Precomp, C++ version - further compress already compressed files