rapidgzip VS ratarmount

Compare rapidgzip vs ratarmount and see what their differences are.

                 rapidgzip             ratarmount
Mentions         14                    10
Stars            320                   637
Growth           -                     -
Activity         9.5                   9.1
Latest commit    11 days ago           10 days ago
Language         C++                   Python
License          Apache License 2.0    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

rapidgzip

Posts with mentions or reviews of rapidgzip. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-04.
  • Show HN: Rapidgzip – Parallel Gzip Decompressing with 10 GB/S
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Ebiggers/libdeflate: Heavily optimized DEFLATE/zlib/gzip library
    5 projects | news.ycombinator.com | 26 Aug 2023
    I also did benchmarks with zlib and libarchive via their library interfaces here [0]. It has been a while since I ran them, so I had forgotten the details. Unfortunately, I did not add libdeflate.

    [0] https://github.com/mxmlnkn/rapidgzip/blob/master/src/benchma...

  • Rapidgzip – Parallel Decompression and Seeking in Gzip (Knespel, Brunst – 2023) [pdf]
    3 projects | news.ycombinator.com | 21 Aug 2023
    Hi, author here.

    You are right that the index is the easy mode. Over the years, there have been lots of implementations trying to add an index like that to the gzip metadata itself or as a sidecar file, with bgzip probably being the best known. None of them really stuck, hence the need for a generic multi-threaded decompressor. A probably incomplete list of such implementations can be found in this issue: https://github.com/mxmlnkn/rapidgzip/issues/8

    The index makes it so easy that I can simply delegate decompression to zlib. And since paper publication I've actually improved upon this by delegating to ISA-l / igzip instead, which is twice as fast. This is already in the 0.8.0 release.
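    The bgzip-style sidecar-index idea can be sketched with nothing but the Python standard library (the helper names below are made up for illustration; this is not rapidgzip's actual code): each chunk is written as an independent gzip member, so the result is still a valid multi-member gzip file, while a sidecar index of offsets allows decompressing any member in isolation.

```python
import gzip
import io


def write_indexed_gzip(chunks):
    """bgzip-style idea: compress each chunk as an independent gzip member
    and record a sidecar index of (compressed offset, decompressed offset)
    pairs. The concatenation is still a valid gzip stream."""
    out = io.BytesIO()
    index = []
    decompressed = 0
    for chunk in chunks:
        index.append((out.tell(), decompressed))
        out.write(gzip.compress(chunk))
        decompressed += len(chunk)
    return out.getvalue(), index


def read_member(blob, index, i):
    """Decompress only the i-th member; earlier members are never touched."""
    start = index[i][0]
    end = index[i + 1][0] if i + 1 < len(index) else len(blob)
    return gzip.decompress(blob[start:end])
```

    Because the members are independent, each one can also be handed to a separate thread, which is exactly what makes the indexed case the "easy mode".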

    As derived from Table 1, the false positive rate for deflate blocks with dynamic Huffman codes is 202 per Tbit, i.e., one false positive per roughly 5 Gbit (625 MB). For non-compressed blocks, the false positive rate is roughly one per 500 KB; however, non-compressed blocks can basically be memcpied or skipped over, and then the next deflate header can be checked without much latency. For dynamic blocks, on the other hand, the whole block needs to be decompressed first before the next one can be found. So the much higher false positive rate for non-compressed blocks doesn't introduce that much overhead.
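    The unit conversion behind those numbers is simple enough to check directly (only the ~202 false positives per Tbit is taken from the paper's Table 1; everything else is arithmetic, with the quoted 625 MB coming from rounding up to 5 Gbit first):

```python
# ~202 dynamic-Huffman false positives per Tbit, from Table 1 of the paper.
FALSE_POSITIVES_PER_TBIT = 202

bits_between_false_positives = 1e12 / FALSE_POSITIVES_PER_TBIT  # ~4.95 Gbit
mb_between_false_positives = bits_between_false_positives / 8 / 1e6

print(round(bits_between_false_positives / 1e9, 2), "Gbit")  # ~4.95
print(round(mb_between_false_positives), "MB")               # ~619
```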

    I have some profiling built into rapidgzip, which is printed with -v, e.g., rapidgzip -v -d -o /dev/null 20xsilesia.tar.gz :

        Time spent in block finder              : 0.227751 s
  • Intel QuickAssist Technology Zstandard Plugin for Zstandard
    10 projects | news.ycombinator.com | 16 Aug 2023
  • Tool and Library for Parallel Gzip Decompression and Random Access
    1 project | news.ycombinator.com | 12 May 2023
  • Pigz: Parallel gzip for modern multi-processor, multi-core machines
    15 projects | news.ycombinator.com | 12 May 2023
    I have implemented not only parallel decompression but also random access to arbitrary offsets in the stream with https://github.com/mxmlnkn/pragzip I did some benchmarks on some really beefy machines with 128 cores and was able to reach almost 20 GB/s decompression bandwidth. The single-core decoder still has lots of potential for optimization, though, because I had to write it from scratch.
  • Parquet: More than just “Turbo CSV”
    7 projects | news.ycombinator.com | 3 Apr 2023
    Decompression of arbitrary gzip files can be parallelized with pragzip: https://github.com/mxmlnkn/pragzip
  • The Cost of Exception Handling
    1 project | news.ycombinator.com | 13 Nov 2022
    At the very least, you are duplicating logic without the exception. The check for EOF has to be done implicitly inside read anyway, because read has to fill the bit buffer with data from the byte buffer, or the byte buffer with data from the file. And if both fail, then we already know the result of eof, so there is no need to duplicate the EOF check in the outer loop that calls read.

    Here is the full commit with ad-hoc benchmark results in the commit message:

    https://github.com/mxmlnkn/pragzip/commit/0b1af498377838c30f...

    and here the benchmarks I ran at that time:

    https://github.com/mxmlnkn/pragzip/blob/0b1af498377838c30fea...

    As you can see, it's part of my random-seekable multi-threaded gzip and bzip2 parallel decompression libraries.

    What you can also see in the commit message is that it wasn't a 50% time reduction but a 50% bandwidth increase, which translates to roughly a 33% time reduction. It seems I remembered that partly wrong, but it still was a significant optimization for me.
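    The pattern being argued for can be sketched in Python (illustrative names only; the actual commit is C++): the EOF condition is detected exactly once, inside read while refilling the buffer, and the outer loop relies on try/except instead of a duplicated eof() test before every call.

```python
class BitReader:
    """Sketch of the pattern discussed above (assumed names, not the
    actual pragzip code): read() must already detect exhaustion while
    refilling its buffer, so it signals EOF with an exception instead
    of making the caller re-check eof() before every call."""

    def __init__(self, data, chunk=4):
        self._data, self._pos, self._chunk = data, 0, chunk
        self._buffer = b""

    def read(self, n):
        while len(self._buffer) < n:
            refill = self._data[self._pos:self._pos + self._chunk]
            if not refill:       # the one and only EOF check
                raise EOFError
            self._pos += len(refill)
            self._buffer += refill
        result, self._buffer = self._buffer[:n], self._buffer[n:]
        return result


def consume_all(reader, n=3):
    """Outer loop: no duplicated EOF test, just try/except.
    Note this sketch drops a trailing partial block."""
    blocks = []
    try:
        while True:
            blocks.append(reader.read(n))
    except EOFError:
        pass
    return blocks
```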

  • How Much Faster Is Making a Tar Archive Without Gzip?
    8 projects | news.ycombinator.com | 10 Oct 2022
  • Show HN: Thread-Parallel Decompression and Random Access to Gzip Files (Pragzip)
    1 project | news.ycombinator.com | 6 Aug 2022

ratarmount

Posts with mentions or reviews of ratarmount. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-04.
  • Ratarmount: Access large archives as a filesystem efficiently
    1 project | news.ycombinator.com | 10 Apr 2024
  • Show HN: Rapidgzip – Parallel Gzip Decompressing with 10 GB/S
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Ratarmount: Random Access Tar Mount
    1 project | news.ycombinator.com | 14 May 2023
  • Ask HN: Most interesting tech you built for just yourself?
    149 projects | news.ycombinator.com | 27 Apr 2023
    This is basically the same reason why I started ratarmount (https://github.com/mxmlnkn/ratarmount), but the focus was more on runtime performance and random access, and, as the name suggests, it started out with access to recursive TAR archives. The current version should also work for your use case with recursive ZIPs.
  • Looking for advice uploading data while at uni. I need to split the data i need to upload to carry it with me
    2 projects | /r/DataHoarder | 11 Oct 2022
    As an added complication, this would need to work under Windows (I need OneNote and that's Windows-only :/ ); this alone makes the majority of solutions that I came up with impossible. One option could have been splitting the data into various TAR files and then mounting those with ratarmount, but... Linux-only :(
  • How Much Faster Is Making a Tar Archive Without Gzip?
    8 projects | news.ycombinator.com | 10 Oct 2022
    Pragzip actually decompresses in parallel and also offers random access. I did a Show HN here: https://news.ycombinator.com/item?id=32366959

    indexed_gzip https://github.com/pauldmccarthy/indexed_gzip can also do random access but is not parallel.

    Both have to do a linear scan first, though. The implementations can, however, do the linear scan on demand, i.e., they scan only as far as needed.

    bzip2 works very well with this approach. xz only works with it when compressed with multiple blocks, and similar is true for zstd.

    For zstd, there also exists a seekable variant, which stores the block index at the end as metadata to avoid the linear scan. indexed_zstd offers random access to those files https://github.com/martinellimarco/indexed_zstd

    I wrote pragzip and also combined all of the other random-access compression backends in ratarmount to offer random access to TAR files that is orders of magnitude faster than archivemount: https://github.com/mxmlnkn/ratarmount

  • Ratarmount – Fast transparent access to archives through FUSE
    2 projects | news.ycombinator.com | 10 Mar 2022
    Or via the experimental AppImage I created this week:

        wget -O ratarmount 'https://github.com/mxmlnkn/ratarmount/releases/download/v0.10.0/ratarmount-manylinux2014_x86_64.AppImage'
  • Hop: 25x faster than unzip and 10x faster than tar at reading individual files
    10 projects | news.ycombinator.com | 10 Nov 2021
    I've recently been looking into this same issue because I analyse a lot of data like sosreports or other tar/compressed data from customer systems. Currently, I untar these onto my ZFS filesystem, which works out OK because it has zstd compression enabled, but I end up decompressing and recompressing, which is quite expensive as the files are often GBs or more compressed.

    But I've started using a tool called "ratarmount" (https://github.com/mxmlnkn/ratarmount) which creates an index once (something I could automate our upload system to generate in advance, but you can also just process it locally) and then lets you FUSE-mount the file. This works pretty well, with the only exception that I can't create scratch files inside the directory layout, which in the past I'd wanted to do.

    I was surprised how hard a problem it is to get a bundle file format that is indexable and compressed with a good and fast compression algorithm, which mostly boils down to zstd at this point.

    While it works quite well, especially with gzip and bzip2, sadly zstd and xz (and some other compression formats) don't allow decompressing only parts of a file by default; even though it's possible, the default tools just aren't doing it. The nitty-gritty details are summarised here:

  • Is there a way to accelerate extracting .tar contents?
    1 project | /r/linuxquestions | 29 Jun 2021
    Well, you could try to skip extraction and access the tar archive using ratarmount, and stack overlayfs on top to allow writing, but that will have an impact on compilation time.

What are some alternatives?

When comparing rapidgzip and ratarmount you can also consider the following projects:

pigz - A parallel implementation of gzip for modern multi-processor, multi-core machines.

tarindexer - python module for indexing tar files for fast access

DirectStorage - DirectStorage for Windows is an API that allows game developers to unlock the full potential of high speed NVMe drives for loading game assets.

asar - Simple extensive tar-like archive format with indexing

QATzip - Compression Library accelerated by Intel® QuickAssist Technology

PyFilesystem2 - Python's Filesystem abstraction layer

parquet-format - Apache Parquet

pixz - Parallel, indexed xz compressor

nvcomp - Repository for nvCOMP docs and examples. nvCOMP is a library for fast lossless compression/decompression on the GPU that can be downloaded from https://developer.nvidia.com/nvcomp.

InstaPy - 📷 Instagram Bot - Tool for automated Instagram interactions

icoextract - Extract icons from Windows PE files (.exe/.dll)