indexed_gzip VS rapidgzip

Compare indexed_gzip and rapidgzip and see how they differ.

                 indexed_gzip                               rapidgzip
Mentions         2                                          14
Stars            93                                         320
Growth           -                                          -
Activity         8.3                                        9.5
Last commit      6 months ago                               10 days ago
Language         C                                          C++
License          GNU General Public License v3.0 or later   Apache License 2.0
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 means the project is among the top 10% of the most actively developed projects we track.

indexed_gzip

Posts with mentions or reviews of indexed_gzip. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-10.
  • How Much Faster Is Making a Tar Archive Without Gzip?
    8 projects | news.ycombinator.com | 10 Oct 2022
    Pragzip actually decompresses in parallel and also offers random access. I did a Show HN here: https://news.ycombinator.com/item?id=32366959

    indexed_gzip https://github.com/pauldmccarthy/indexed_gzip can also do random access but is not parallel.

    Both have to do a linear scan first, though. The implementations can, however, do the linear scan on demand, i.e., they scan only as far as needed (see the usage sketch at the end of this post).

    bzip2 works very well with this approach. xz only works with this approach when compressed with multiple blocks. The same is true for zstd.

    For zstd, there also exists a seekable variant, which stores the block index at the end as metadata to avoid the linear scan. indexed_zstd offers random access to those files https://github.com/martinellimarco/indexed_zstd

    I wrote pragzip and also combined all of the other random-access compression backends in ratarmount to offer random access to TAR files that is orders of magnitude faster than archivemount: https://github.com/mxmlnkn/ratarmount
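
    A minimal sketch of this kind of on-demand indexing and random access, assuming indexed_gzip's Python API as shown in its README (file names and offsets are placeholders):

        import indexed_gzip as igzip

        # First open: no index exists yet.
        with igzip.IndexedGzipFile("big_file.gz") as f:
            # Seeking builds seek points on demand, i.e. only the compressed
            # data up to the requested offset has to be scanned.
            f.seek(1_000_000_000)
            chunk = f.read(4096)
            # Save the seek-point index so the linear scan can be skipped later.
            f.export_index("big_file.gzidx")

        # Later opens: import the index and random access is cheap immediately.
        with igzip.IndexedGzipFile("big_file.gz") as f:
            f.import_index("big_file.gzidx")
            f.seek(2_000_000_000)
            more = f.read(4096)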

  • Is there any windows archival software (free or paid) that can browse tar.gz files without extracting the whole tarball?
    2 projects | /r/DataHoarder | 10 Dec 2021
    The pieces are there. https://github.com/devsnd/tarindexer/blob/master/tarindexer.py is a prototype of indexing and seeking a tar file in Python. https://github.com/pauldmccarthy/indexed_gzip allows indexing and seeking a gzip file. If those pieces of code were combined, it could give you efficient targeted file extraction, but you'd need to find a coder with enough time and motivation to fuss with it (a rough sketch of the combination follows below).
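
    A rough sketch of that combination, assuming indexed_gzip's Python API plus the standard tarfile module; archive name, member name, and offsets are placeholders, and a tarindexer-style side index would supply the real offsets:

        import tarfile
        import indexed_gzip as igzip

        with igzip.IndexedGzipFile("archive.tar.gz") as gz:
            # indexed_gzip exposes the decompressed tar as a seekable file
            # object, so tarfile can pull out one member without unpacking
            # the whole archive to disk.
            with tarfile.open(fileobj=gz, mode="r:") as tar:
                member = tar.getmember("docs/readme.txt")
                data = tar.extractfile(member).read()

            # With a tarindexer-style index mapping member names to offsets
            # and sizes in the decompressed stream, the header walk above can
            # be skipped entirely.
            data_offset, data_size = 1536, 4096  # placeholder values
            gz.seek(data_offset)
            payload = gz.read(data_size)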

rapidgzip

Posts with mentions or reviews of rapidgzip. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-04.
  • Show HN: Rapidgzip – Parallel Gzip Decompressing with 10 GB/S
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Ebiggers/libdeflate: Heavily optimized DEFLATE/zlib/gzip library
    5 projects | news.ycombinator.com | 26 Aug 2023
    I also did benchmarks with zlib and libarchive via their library interfaces here [0]. It has been a while since I ran them, so I have forgotten the details. Unfortunately, I did not add libdeflate.

    [0] https://github.com/mxmlnkn/rapidgzip/blob/master/src/benchma...

  • Rapidgzip – Parallel Decompression and Seeking in Gzip (Knespel, Brunst – 2023) [pdf]
    3 projects | news.ycombinator.com | 21 Aug 2023
    Hi, author here.

    You are right that the index is the easy mode. Over the years there have been lots of implementations trying to add an index like that to the gzip metadata itself or as a sidecar file, with bgzip probably being the best-known one. None of them really stuck, hence the need for a generic multi-threaded decompressor. A probably incomplete list of such implementations can be found in this issue: https://github.com/mxmlnkn/rapidgzip/issues/8

    The index makes it so easy that I can simply delegate decompression to zlib. And since the paper's publication I've actually improved upon this by delegating to ISA-L / igzip instead, which is twice as fast. This is already in the 0.8.0 release (a sketch of this index reuse follows at the end of this post).

    As derived from table 1, the false positive rate for deflate blocks with dynamic Huffman codes is one per 1 Tbit / 202 ≈ 5 Gbit, i.e. roughly one per 625 MB. For non-compressed blocks, the false positive rate is roughly one per 500 KB; however, non-compressed blocks can basically be memcpied or skipped over, and then the next deflate header can be checked without much latency. For dynamic blocks, on the other hand, the whole block needs to be decompressed before the next one can be found. So the much higher false positive rate for non-compressed blocks doesn't introduce that much overhead.

    I have some profiling built into rapidgzip, which is printed with -v, e.g., rapidgzip -v -d -o /dev/null 20xsilesia.tar.gz:

        Time spent in block finder              : 0.227751 s
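
    A hedged sketch of that index reuse from the Python side, assuming the rapidgzip bindings expose open(), export_index(), and import_index() as described in the project README (file names and offsets are placeholders):

        import os
        import rapidgzip

        archive = "20xsilesia.tar.gz"
        index_path = archive + ".index"

        # First pass: parallel decompression; the block index is built as a
        # side effect and saved for reuse.
        with rapidgzip.open(archive, parallelization=os.cpu_count()) as f:
            while f.read(4 * 1024 * 1024):
                pass
            f.export_index(index_path)

        # Later passes: with the imported index, block finding is skipped and
        # per-block decompression can be delegated to zlib / ISA-L.
        with rapidgzip.open(archive, parallelization=os.cpu_count()) as f:
            f.import_index(index_path)
            f.seek(1_000_000_000)
            data = f.read(4096)
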
  • Intel QuickAssist Technology Zstandard Plugin for Zstandard
    10 projects | news.ycombinator.com | 16 Aug 2023
  • Tool and Library for Parallel Gzip Decompression and Random Access
    1 project | news.ycombinator.com | 12 May 2023
  • Pigz: Parallel gzip for modern multi-processor, multi-core machines
    15 projects | news.ycombinator.com | 12 May 2023
    I have not only implemented parallel decompression but also random access to offsets in the stream with https://github.com/mxmlnkn/pragzip. I did some benchmarks on some really beefy machines with 128 cores and was able to reach almost 20 GB/s decompression bandwidth. The single-core decoder still has lots of potential for optimization, though, because I had to write it from scratch.
  • Parquet: More than just “Turbo CSV”
    7 projects | news.ycombinator.com | 3 Apr 2023
    Decompression of arbitrary gzip files can be parallelized with pragzip: https://github.com/mxmlnkn/pragzip
  • The Cost of Exception Handling
    1 project | news.ycombinator.com | 13 Nov 2022
    At the very least, you are duplicating logic without the exception. The EOF check has to be done implicitly inside read anyway, because read has to refill the bit buffer with data from the byte buffer, or the byte buffer with data from the file. And if both refills fail, then we already know the result of eof, so there is no need to duplicate the EOF check in the outer loop that calls read (see the sketch at the end of this post).

    Here is the full commit with ad-hoc benchmark results in the commit message:

    https://github.com/mxmlnkn/pragzip/commit/0b1af498377838c30f...

    and here the benchmarks I ran at that time:

    https://github.com/mxmlnkn/pragzip/blob/0b1af498377838c30fea...

    As you can see, it's part of my random-seekable multi-threaded gzip and bzip2 parallel decompression libraries.

    What you can also see in the commit message is that it wasn't a 50% time reduction but a 50% bandwidth increase, which translates to roughly a 33% time reduction. It seems I remembered that partly wrong, but it still was a significant optimization for me.
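
    A toy sketch of that argument (not the actual pragzip code): the refill path inside read already detects end-of-input, so the calling loop needs no separate eof() test and can rely on the exception instead:

        import io

        class ToyBitReader:
            def __init__(self, fileobj, chunk_size=64 * 1024):
                self.fileobj = fileobj
                self.buffer = b""
                self.offset = 0
                self.chunk_size = chunk_size

            def read_byte(self):
                # The refill has to notice EOF anyway; raising here means the
                # caller does not need a duplicated eof() check.
                if self.offset >= len(self.buffer):
                    self.buffer = self.fileobj.read(self.chunk_size)
                    self.offset = 0
                    if not self.buffer:
                        raise EOFError
                byte = self.buffer[self.offset]
                self.offset += 1
                return byte

        reader = ToyBitReader(io.BytesIO(b"example data"))
        consumed = 0
        try:
            while True:  # no explicit "while not reader.eof()" needed
                reader.read_byte()
                consumed += 1
        except EOFError:
            pass
        print(consumed)  # 12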

  • How Much Faster Is Making a Tar Archive Without Gzip?
    8 projects | news.ycombinator.com | 10 Oct 2022
  • Show HN: Thread-Parallel Decompression and Random Access to Gzip Files (Pragzip)
    1 project | news.ycombinator.com | 6 Aug 2022

What are some alternatives?

When comparing indexed_gzip and rapidgzip you can also consider the following projects:

tarindexer - python module for indexing tar files for fast access

pigz - A parallel implementation of gzip for modern multi-processor, multi-core machines.

zstd - Zstandard - Fast real-time compression algorithm

DirectStorage - DirectStorage for Windows is an API that allows game developers to unlock the full potential of high speed NVMe drives for loading game assets.

isa-l - Intelligent Storage Acceleration Library

QATzip - Compression Library accelerated by Intel® QuickAssist Technology

libslz - Stateless, zlib-compatible, and very fast compression library -- http://libslz.org

parquet-format - Apache Parquet

indexed_zstd - A bridge for libzstd-seek to python. Based on mxmlnkn/indexed_bzip2

nvcomp - Repository for nvCOMP docs and examples. nvCOMP is a library for fast lossless compression/decompression on the GPU that can be downloaded from https://developer.nvidia.com/nvcomp.

ratarmount - Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives

pixz - Parallel, indexed xz compressor