| | ratarmount | tarindexer |
|---|---|---|
| Mentions | 10 | 3 |
| Stars | 637 | 69 |
| Growth | - | - |
| Activity | 9.1 | 10.0 |
| Last commit | 10 days ago | almost 9 years ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ratarmount
- Ratarmount: Access large archives as a filesystem efficiently
- Show HN: Rapidgzip – Parallel Gzip Decompressing with 10 GB/S
- Ratarmount: Random Access Tar Mount
-
Ask HN: Most interesting tech you built for just yourself?
This is basically the same reason I started ratarmount (https://github.com/mxmlnkn/ratarmount), but the focus was more on runtime performance and random access; as the name suggests, it started out with access to recursive tar archives. The current version should also work for your use case with recursive zips.
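For illustration, the recursive access described above can be emulated with Python's standard tarfile module alone: open the outer archive, obtain the inner tar as a file object, and read a member without extracting anything to disk. This is only a sketch of the concept that ratarmount's index-backed FUSE mount automates and accelerates; the archive and file names here are made up for the demo:

```python
import io
import tarfile

def build_nested_tar() -> bytes:
    """Create outer.tar containing inner.tar, which contains hello.txt."""
    inner_buf = io.BytesIO()
    with tarfile.open(fileobj=inner_buf, mode="w") as inner:
        data = b"hello from the inner archive\n"
        info = tarfile.TarInfo(name="hello.txt")
        info.size = len(data)
        inner.addfile(info, io.BytesIO(data))

    outer_buf = io.BytesIO()
    with tarfile.open(fileobj=outer_buf, mode="w") as outer:
        inner_bytes = inner_buf.getvalue()
        info = tarfile.TarInfo(name="inner.tar")
        info.size = len(inner_bytes)
        outer.addfile(info, io.BytesIO(inner_bytes))
    return outer_buf.getvalue()

def read_nested_member(outer_bytes: bytes, inner_name: str, member: str) -> bytes:
    """Read one file from a tar nested inside another tar, all in memory."""
    with tarfile.open(fileobj=io.BytesIO(outer_bytes)) as outer:
        # extractfile() returns a seekable file-like view; nothing hits the disk.
        inner_fileobj = outer.extractfile(inner_name)
        with tarfile.open(fileobj=inner_fileobj) as inner:
            return inner.extractfile(member).read()

print(read_nested_member(build_nested_tar(), "inner.tar", "hello.txt"))
```

The stdlib version re-parses headers on every open; ratarmount's point is to do that parse once, persist it as an index, and expose the result as a mounted filesystem.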
-
Looking for advice uploading data while at uni. I need to split the data I need to upload to carry it with me
As an added complication, this would need to work under Windows (I need OneNote and that's Windows-only :/ ); this alone makes the majority of solutions that I came up with impossible. One way could've been splitting the data into various tar files and then mounting those with ratarmount, but... Linux only :( .
-
How Much Faster Is Making a Tar Archive Without Gzip?
Pragzip actually decompresses in parallel and also supports random access. I did a Show HN here: https://news.ycombinator.com/item?id=32366959
indexed_gzip https://github.com/pauldmccarthy/indexed_gzip can also do random access but is not parallel.
Both have to do a linear scan first, though. The implementations can, however, do the linear scan on demand, i.e., they scan only as far as needed.
bzip2 works very well with this approach. xz only works with this approach when compressed with multiple blocks. The same is true for zstd.
For zstd, there also exists a seekable variant, which stores the block index at the end as metadata to avoid the linear scan. indexed_zstd offers random access to those files https://github.com/martinellimarco/indexed_zstd
I wrote pragzip and also combined all of the other random-access compression backends in ratarmount to offer random access to TAR files that is orders of magnitude faster than archivemount: https://github.com/mxmlnkn/ratarmount
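The "scan only as far as needed" behaviour can be observed even with Python's stdlib: gzip.GzipFile implements seek() by decompressing forward from its current position (and rewinding to the start for backward seeks), so a read at offset N costs one decompression pass up to N. Tools like pragzip and indexed_gzip add the persisted index so later seeks avoid the rescan. A small stdlib sketch, with made-up data:

```python
import gzip
import io

# Compress a stream whose content encodes its own offsets: 8-byte records
# "00000000", "00000001", ... so we can verify where a seek landed.
plain = b"".join(b"%08d" % i for i in range(10_000))  # 80 kB uncompressed
compressed = gzip.compress(plain)

f = gzip.GzipFile(fileobj=io.BytesIO(compressed))
# Forward seek: plain gzip has no random access, so GzipFile decompresses
# (and discards) everything up to the target offset on demand.
f.seek(40_000)
chunk = f.read(8)
print(chunk)  # b'00005000' -> byte 40000 is the start of record 5000
```

An index-based tool would remember checkpoints from this first pass, making the next seek to the same region a cheap jump instead of a rescan.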
-
Ratarmount – Fast transparent access to archives through FUSE
Or via the experimental AppImage I created this week:
```
wget -O ratarmount 'https://github.com/mxmlnkn/ratarmount/releases/download/v0.10.0/ratarmount-manylinux2014_x86_64.AppImage'
```
-
Hop: 25x faster than unzip and 10x faster than tar at reading individual files
I've recently been looking into this same issue because I analyse a lot of data like sosreports or other tar/compressed data from customer systems. Currently I untar these onto my ZFS filesystem, which works out OK because it has zstd compression enabled, but I end up decompressing and recompressing, which is quite expensive as the files are often GBs or more compressed.
But I've started using a tool called "ratarmount" (https://github.com/mxmlnkn/ratarmount) which creates an index once (something I could automate our upload system to generate in advance, but you can also just create it locally) and then lets you FUSE-mount the file. This works pretty great, with the only exception that I can't create scratch files inside the directory layout, which in the past I'd wanted to do.
I was surprised at how hard a problem it is to get a bundle file format that is indexable and compressed with a good, fast compression algorithm, which mostly boils down to zstd at this point.
While it works quite well, especially with gzip and bzip2, sadly zstd and xz (and some other compression formats) don't allow decompressing only parts of a file by default; even though it is possible, the default tools aren't doing it. The nitty-gritty details are summarised here:
-
Is there a way to accelerate extracting .tar contents?
Well, you could try to skip extraction and access the tar archive using ratarmount, and stack overlayfs on top to allow writing, but that will have an impact on compilation time.
tarindexer
-
Zip: How not to design a file format
The bioinformatics community uses block-based gzip compression (bgzip) [0]. The gzip standard allows a file to consist of multiple blocks, so, with an additional index file, you can seek to arbitrary locations and uncompress just the relevant block.
gzip compression is perhaps not optimal nowadays, and the block segmentation reduces its efficiency even further.
Though not very standard, there is also a tar indexer program [1] that allows you to create an index on tar files to do the same.
My information is at least a couple years old so things may have changed.
[0] http://www.htslib.org/doc/bgzip.html
[1] https://github.com/devsnd/tarindexer
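The bgzip idea can be sketched with the stdlib alone: compress each block as an independent gzip member, record (compressed offset, uncompressed offset) pairs in an index, then serve a random read by decompressing only the members that cover the requested range. This is a simplified illustration, not the real BGZF format, which stores each block's size in a gzip extra field and uses fixed ~64 KiB blocks:

```python
import bisect
import gzip

BLOCK = 1024  # uncompressed bytes per gzip member (real bgzip uses ~64 KiB)

def compress_blocked(data: bytes):
    """Concatenated independent gzip members plus an offset index."""
    blob, index, comp_off = bytearray(), [], 0
    for uncomp_off in range(0, len(data), BLOCK):
        member = gzip.compress(data[uncomp_off:uncomp_off + BLOCK])
        index.append((comp_off, uncomp_off))  # (compressed, uncompressed) offsets
        blob += member
        comp_off += len(member)
    return bytes(blob), index

def read_at(blob: bytes, index, offset: int, size: int) -> bytes:
    """Serve a random read by decompressing only the members that cover it."""
    uncomp_offs = [u for _, u in index]
    i = bisect.bisect_right(uncomp_offs, offset) - 1  # member containing `offset`
    start_uncomp = index[i][1]
    out = b""
    while i < len(index) and start_uncomp + len(out) < offset + size:
        comp_start = index[i][0]
        comp_end = index[i + 1][0] if i + 1 < len(index) else len(blob)
        out += gzip.decompress(blob[comp_start:comp_end])
        i += 1
    skip = offset - start_uncomp
    return out[skip:skip + size]

data = bytes(range(256)) * 40  # 10 KiB of sample data
blob, index = compress_blocked(data)
print(read_at(blob, index, 5000, 16) == data[5000:5016])  # True
```

This shows the trade-off mentioned above: each member restarts the compressor, so smaller blocks mean cheaper seeks but a worse compression ratio.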
-
Is there any windows archival software (free or paid) that can browse tar.gz files without extracting the whole tarball?
The pieces are there. https://github.com/devsnd/tarindexer/blob/master/tarindexer.py is a prototype of indexing and seeking a tar file in python. https://github.com/pauldmccarthy/indexed_gzip allows indexing and seeking a gzip file. If those pieces of code were combined it could give you efficient targeted file extraction, but you'd need to find a coder with enough time and motivation to fuss with it.
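The combination the comment describes boils down to: build a one-time index mapping member paths to (offset, size) inside the tar, then answer each lookup with a single seek and read. Here is a minimal stdlib sketch on an uncompressed, in-memory tar (a seekable decompression layer such as indexed_gzip would provide the same seek/read interface on top of a .tar.gz); the file names are hypothetical:

```python
import io
import tarfile

def build_index(tar_bytes: bytes) -> dict:
    """One-time pass over the tar: record each member's data offset and size."""
    index = {}
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        for member in tar:
            if member.isfile():
                index[member.name] = (member.offset_data, member.size)
    return index

def extract_one(tar_bytes: bytes, index: dict, name: str) -> bytes:
    """Targeted extraction: seek straight to the member, no full scan."""
    offset, size = index[name]
    f = io.BytesIO(tar_bytes)
    f.seek(offset)
    return f.read(size)

# Demo archive built in memory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("a.txt", b"alpha"), ("b.txt", b"bravo")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

index = build_index(buf.getvalue())
print(extract_one(buf.getvalue(), index, "b.txt"))  # b'bravo'
```

This is essentially what tarindexer's index text file records, and what ratarmount persists in its SQLite index.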
-
Hop: 25x faster than unzip and 10x faster than tar at reading individual files
There exists a utility called tarindexer [0] that can be used for random access to tar files. An index text file is created (one time) that is used to record the position of the files in the tar archive. Random reads are done by loading the index file and then seeking to the location of the file in question.
For random access to gzip'd files, bgzip [1] can be used. bgzip also uses an index file (one time creation) that is used to record key points for random access.
[0] https://github.com/devsnd/tarindexer
[1] http://www.htslib.org/doc/bgzip.html
What are some alternatives?
asar - Simple extensive tar-like archive format with indexing
hop - Hop Orchestration Platform
PyFilesystem2 - Python's Filesystem abstraction layer
pixz - Parallel, indexed xz compressor
indexed_gzip - Fast random access of gzip files in Python