How Much Faster Is Making a Tar Archive Without Gzip?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  1. zstd

    Zstandard - Fast real-time compression algorithm

    For anyone who wants to try this, zstd -T0 uses all your threads to compress, and https://github.com/facebook/zstd has much more detail. Brotli (https://github.com/google/brotli) is another modern format with some good features: strong high-compression levels and Content-Encoding support in web browsers. You might also want to play with the compression level (zstd accepts -1 through -19, up to -22 with --ultra, and --fast=N for even faster modes; brotli's -q ranges from 0 to 11).

    One reason these modern compressors do better is not any particular mistake made defining DEFLATE in the 90s, but that the new algorithms use a few MB of recently seen data as context instead of 32KB, and do other things that were impractical in the 90s but are reasonable on modern hardware. The new algorithms also contain lots of smart ideas and have fine-tuned implementations, but that core difference seems important to note.
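
    To tie this back to the title question (tarring without gzip): below is a minimal sketch in Python using the third-party zstandard package; the archive and directory names are placeholders. On the command line, recent GNU tar can do the equivalent with --zstd or -I 'zstd -T0'.

        import tarfile
        import zstandard  # third-party binding to libzstd: pip install zstandard

        # level=3 is zstd's default; threads=-1 asks for one worker per
        # logical CPU, the library analogue of `zstd -T0`.
        cctx = zstandard.ZstdCompressor(level=3, threads=-1)

        with open("archive.tar.zst", "wb") as raw:  # placeholder name
            with cctx.stream_writer(raw) as compressed:
                # Mode "w|" streams the tar data sequentially, which is what
                # a non-seekable compression stream requires.
                with tarfile.open(fileobj=compressed, mode="w|") as tar:
                    tar.add("some_directory")  # placeholder path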

  2. brotli

    Brotli compression format

  3. isa-l

    Intelligent Storage Acceleration Library

    igzip (https://github.com/intel/isa-l) is much faster than gzip or pigz at decompression, roughly 2-3x in my experience. There is also a Python module (isal) that provides a GzipFile-like wrapper class, for an easy speed-up of Python scripts that read gzipped files.

    However, it only supports compression levels up to 3, so it can't be used as a drop-in replacement for gzip. Also make sure to use the latest version if you work with bioinformatics data, since older versions choke on the concatenated gzip files common in that field. A usage sketch follows below.
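
    A quick sketch of the Python side, assuming the python-isal package, whose igzip submodule mirrors the standard library's gzip module (the filenames here are placeholders):

        from isal import igzip  # pip install isal

        # Reading: same interface as gzip.open, but using ISA-L's faster inflate.
        with igzip.open("data.txt.gz", "rt") as f:
            for line in f:
                pass  # process each line here

        # Writing: ISA-L only implements levels 0-3, so the compresslevel
        # range is narrower than stdlib gzip's 0-9.
        with igzip.open("out.txt.gz", "wt", compresslevel=2) as f:
            f.write("hello\n")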

  4. rapidgzip

    Gzip Decompression and Random Access for Modern Multi-Core Machines. (rapidgzip was formerly named pragzip, the name that appears in the comments further down.)

  5. libslz

    Stateless, zlib-compatible, and very fast compression library -- http://libslz.org

  6. indexed_gzip

    Fast random access of gzip files in Python

    Pragzip actually decompresses in parallel and also supports random access. I did a Show HN here: https://news.ycombinator.com/item?id=32366959

    indexed_gzip (https://github.com/pauldmccarthy/indexed_gzip) can also do random access but is not parallel.

    Both have to do a linear scan first, though. The implementations can do that scan on demand, i.e., they scan only as far as needed.

    bzip2 works very well with this approach. xz only works with it when the file was compressed with multiple blocks, and the same is true for zstd.

    For zstd, there also exists a seekable variant, which stores a block index at the end as metadata to avoid the linear scan. indexed_zstd offers random access to those files: https://github.com/martinellimarco/indexed_zstd

    I wrote pragzip and also combined all of the other random-access compression backends in ratarmount to offer random access to TAR files that is orders of magnitude faster than archivemount: https://github.com/mxmlnkn/ratarmount
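
    To make the on-demand indexing concrete, here is a small sketch with indexed_gzip; the class and method names are taken from that project's README, and the filename and offsets are placeholders:

        import indexed_gzip  # pip install indexed_gzip

        # No index is built up front; seek points are created on demand as the
        # file is scanned, so the first seek may trigger a partial scan.
        with indexed_gzip.IndexedGzipFile("big.gz") as f:
            f.seek(100 * 1024 * 1024)  # jump ~100 MiB into the uncompressed data
            chunk = f.read(4096)

            # Save the index so later runs can skip the scan entirely.
            f.export_index("big.gzidx")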

  7. indexed_zstd

    A bridge for libzstd-seek to python. Based on mxmlnkn/indexed_bzip2

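    A corresponding sketch for the seekable-zstd side, assuming indexed_zstd's IndexedZstdFile class (modelled on indexed_bzip2; the filename is a placeholder):

        from indexed_zstd import IndexedZstdFile  # pip install indexed-zstd

        # On a multi-block or seekable zstd file, seeking can start from the
        # nearest block instead of decompressing everything before it.
        f = IndexedZstdFile("data.zst")
        f.seek(10 * 1024 * 1024)  # jump ~10 MiB into the uncompressed data
        chunk = f.read(4096)
        f.close()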

  8. ratarmount

    Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives

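    For a feel of the workflow, a hedged sketch driving the ratarmount CLI from Python; the archive and mount point are placeholders, and the mount/unmount invocations follow the project's README:

        import subprocess

        # Mount the archive as a read-only filesystem. The first mount builds
        # an index so later mounts and random reads are fast.
        subprocess.run(["ratarmount", "archive.tar.gz", "mnt"], check=True)

        # Members can now be read with ordinary file APIs, with no extraction.
        with open("mnt/some/file.txt") as f:
            print(f.read())

        subprocess.run(["ratarmount", "-u", "mnt"], check=True)  # unmount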

Related posts

  • Ratarmount: Random Access Tar Mount

    1 project | news.ycombinator.com | 14 May 2023
  • Show HN: Ratarmount 1.0.0 – Rapid access to large archives via a FUSE filesystem

    2 projects | news.ycombinator.com | 1 Nov 2024
  • Ask HN: A better Criu Alternative for decompression software / Erlang?

    1 project | news.ycombinator.com | 15 Sep 2024
  • Ratarmount: Access large archives as a filesystem efficiently

    1 project | news.ycombinator.com | 10 Apr 2024
  • Ratarmount – Fast transparent access to archives through FUSE

    2 projects | news.ycombinator.com | 10 Mar 2022
