python-zstandard VS ratarmount

Compare python-zstandard vs ratarmount and see what their differences are.

python-zstandard

Python bindings to the Zstandard (zstd) compression library (by indygreg)

ratarmount

Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives (by mxmlnkn)
                  python-zstandard                          ratarmount
Mentions          1                                         10
Stars             464                                       634
Growth            -                                         -
Activity          7.3                                       9.1
Latest commit     30 days ago                               5 days ago
Language          C                                         Python
License           BSD 3-clause "New" or "Revised" License   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

python-zstandard

Posts with mentions or reviews of python-zstandard. We have used some of these posts to build our list of alternatives and similar projects.
  • I'm trying to compress data with Python and Zstd library but can't figure out what I'm doing wrong. Any Help?
    1 project | /r/learnpython | 31 Jan 2022
    I'm working with TIFF files, which are on average a gigabyte in size. An intermediate step is to compress those files, so I looked it up, found this library, and tried to use it. But unfortunately, the compressed file is barely 100 MB smaller than the original, and it takes about 1.5 minutes to compress. I'm reading the entire file into memory instead of streaming it, but I don't think that should affect the compressed size. I'm also using a high compression level (level 20), but it still won't give me good results.
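
For context, a minimal streaming-compression sketch with python-zstandard might look like the following; the file names are hypothetical, and a moderate level with multi-threading is usually a better starting point than level 20 on an in-memory blob. Note also that if the TIFF data is already internally compressed (e.g., LZW or JPEG), no general-purpose compressor will shrink it much.

    # Sketch: streaming compression with python-zstandard.
    # "image.tif" / "image.tif.zst" are hypothetical file names.
    import zstandard as zstd

    # A moderate level with all cores enabled is typically far faster
    # than level 20 and often compresses nearly as well.
    cctx = zstd.ZstdCompressor(level=10, threads=-1)  # threads=-1: all logical CPUs

    with open("image.tif", "rb") as ifh, open("image.tif.zst", "wb") as ofh:
        # copy_stream() compresses in chunks, so memory use stays flat
        # even for multi-gigabyte inputs.
        cctx.copy_stream(ifh, ofh, read_size=1 << 20, write_size=1 << 20)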

ratarmount

Posts with mentions or reviews of ratarmount. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-04.
  • Ratarmount: Access large archives as a filesystem efficiently
    1 project | news.ycombinator.com | 10 Apr 2024
  • Show HN: Rapidgzip – Parallel Gzip Decompressing with 10 GB/S
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Ratarmount: Random Access Tar Mount
    1 project | news.ycombinator.com | 14 May 2023
  • Ask HN: Most interesting tech you built for just yourself?
    149 projects | news.ycombinator.com | 27 Apr 2023
    This is basically the same reason why I started with ratarmount (https://github.com/mxmlnkn/ratarmount), but the focus was more on runtime performance and random access, and, as the name suggests, it started out with access to recursive tar archives. The current version should also work for your use case with recursive zips.
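
As an aside, ratarmount's backend can also be used directly from Python. A minimal sketch, assuming the ratarmountcore package that backs ratarmount; the archive name and member path are hypothetical:

    # Sketch: programmatic archive access via ratarmountcore.
    # "archive.tar" and the member path are hypothetical.
    import ratarmountcore as rmc

    # recursive=True also exposes archives nested inside the archive.
    archive = rmc.open("archive.tar", recursive=True)
    print(archive.listDir("/"))  # top-level entries

    info = archive.getFileInfo("/nested.tar/some/file")  # hypothetical path
    if info:
        with archive.open(info) as f:
            print(f.read(100))  # random access, no full extraction
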
  • Looking for advice uploading data while at uni. I need to split the data i need to upload to carry it with me
    2 projects | /r/DataHoarder | 11 Oct 2022
    As an added complication, this would need to work under Windows (I need OneNote and that's Windows-only :/ ); this alone makes the majority of solutions I came up with impossible. One way could have been splitting the data into several tar files and then mounting those with ratarmount, but... Linux-only :( .
  • How Much Faster Is Making a Tar Archive Without Gzip?
    8 projects | news.ycombinator.com | 10 Oct 2022
    Pragzip actually decompresses in parallel and also supports random access. I did a Show HN here: https://news.ycombinator.com/item?id=32366959

    indexed_gzip https://github.com/pauldmccarthy/indexed_gzip can also do random access but is not parallel.

    Both have to do a linear scan first, though. The implementations, however, can do the scan on demand, i.e., they scan only as far as needed.

    bzip2 works very well with this approach. xz only works with it when compressed with multiple blocks; the same is true for zstd.

    For zstd, there also exists a seekable variant, which stores the block index at the end as metadata to avoid the linear scan. indexed_zstd offers random access to those files: https://github.com/martinellimarco/indexed_zstd

    I wrote pragzip and also combined all of the other random-access compression backends in ratarmount to offer random access to TAR files that is orders of magnitude faster than archivemount: https://github.com/mxmlnkn/ratarmount
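
To illustrate the on-demand linear scan described above, here is a minimal sketch with indexed_gzip; the file name and offsets are hypothetical:

    # Sketch: random access into a gzip file with indexed_gzip.
    # "data.gz", "data.gzidx", and the offset are hypothetical.
    import indexed_gzip as igzip

    with igzip.IndexedGzipFile("data.gz") as f:
        f.seek(500_000_000)           # builds seek points only up to this offset
        chunk = f.read(4096)          # decompress 4 KiB from the middle of the file
        f.export_index("data.gzidx")  # persist the index to skip the scan next time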

  • Ratarmount – Fast transparent access to archives through FUSE
    2 projects | news.ycombinator.com | 10 Mar 2022
    Or via the experimental AppImage I created this week:

        wget -O ratarmount 'https://github.com/mxmlnkn/ratarmount/releases/download/v0.10.0/ratarmount-manylinux2014_x86_64.AppImage'
  • Hop: 25x faster than unzip and 10x faster than tar at reading individual files
    10 projects | news.ycombinator.com | 10 Nov 2021
    I've recently been looking into this same issue because I analyse a lot of data like sosreports or other tar/compressed data from customer systems. Currently I untar these onto my ZFS filesystem, which works out OK because it has zstd compression enabled, but I end up decompressing and recompressing, which is quite expensive since the files are often GBs or more even compressed.

    But I've started using a tool called "ratarmount" (https://github.com/mxmlnkn/ratarmount) which creates an index once (something I could automate our upload system to generate in advance, but you can also just build it locally) and then lets you FUSE-mount the file. This works pretty great, with the only exception that I can't create scratch files inside the directory layout, which in the past I'd wanted to do.

    I was surprised how hard a problem it is to get a bundle file format that is indexable and compressed with a good and fast compression algorithm, which mostly boils down to zstd at this point.

    While it works quite well, especially with gzip and bzip2, sadly zstd and xz (and some other compression formats) don't allow decompressing only parts of a file by default; even where it's possible, the default tools aren't doing it. The nitty-gritty details are summarised here:
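
For the zstd case specifically, a minimal random-access sketch using the indexed_zstd bindings mentioned above; the file name and offset are hypothetical, and sub-file random access only works if the archive was written with multiple frames/blocks, e.g., by the seekable zstd variant:

    # Sketch: random access into a multi-frame zstd file with indexed_zstd.
    # "data.zst" and the offset are hypothetical.
    from indexed_zstd import IndexedZstdFile

    f = IndexedZstdFile("data.zst")
    f.seek(123_456_789)   # jump into the decompressed stream
    chunk = f.read(4096)  # only the containing frame(s) get decompressed
    f.close()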

  • Is there a way to accelerate extracting .tar contents?
    1 project | /r/linuxquestions | 29 Jun 2021
    Well, you could try to skip extraction and access the tar archive using ratarmount, and stack overlayfs on top to allow writing, but that will have an impact on compilation time.

What are some alternatives?

When comparing python-zstandard and ratarmount you can also consider the following projects:

lizard - Lizard (formerly LZ5) is an efficient compressor with very fast decompression. It achieves a compression ratio comparable to zip/zlib and zstd/brotli (at low and medium compression levels) at decompression speeds of 1000 MB/s and faster.

tarindexer - python module for indexing tar files for fast access

7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard

asar - Simple extensive tar-like archive format with indexing

PyFilesystem2 - Python's Filesystem abstraction layer

pixz - Parallel, indexed xz compressor

InstaPy - 📷 Instagram Bot - Tool for automated Instagram interactions

icoextract - Extract icons from Windows PE files (.exe/.dll)

ghidra - Ghidra is a software reverse engineering (SRE) framework

exhibitor - Snappy and delightful React component workshop

ouch - Painless compression and decompression in the terminal

rapidgzip - Gzip Decompression and Random Access for Modern Multi-Core Machines