| | zip.js | rapidgzip |
| --- | --- | --- |
| Mentions | 5 | 14 |
| Stars | 3,280 | 317 |
| Growth | - | - |
| Activity | 9.1 | 9.5 |
| Latest commit | 4 days ago | 11 days ago |
| Language | JavaScript | C++ |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zip.js
- Pigz: Parallel gzip for modern multi-processor, multi-core machines
Similarly, if people are interested, I have implemented the ability to compress zip files on several cores in zip.js [1]. The approach is simpler, as it consists of compressing the entries in parallel. It still offers a significant performance gain when compressing multiple files into a zip file, which is the common case.
[1] https://github.com/gildas-lormeau/zip.js
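For illustration, here is a minimal sketch of that approach with the zip.js v2 API; the worker count and file contents are made up, and zip.js is assumed to spread the deflate work across its web-worker pool:

```js
import { configure, ZipWriter, BlobWriter, TextReader } from "@zip.js/zip.js";

// Let zip.js spread deflate work over a pool of web workers
// (4 is an arbitrary example value).
configure({ useWebWorkers: true, maxWorkers: 4 });

const zipWriter = new ZipWriter(new BlobWriter("application/zip"));

// Hypothetical inputs; any Reader (Blob, stream, ...) works the same way.
const files = [
  ["a.txt", "first entry"],
  ["b.txt", "second entry"],
];

// Issuing all add() calls before awaiting lets the entries be
// deflated concurrently, roughly one worker per entry.
await Promise.all(
  files.map(([name, text]) => zipWriter.add(name, new TextReader(text)))
);

const blob = await zipWriter.close();
```

Because each zip entry is an independent deflate stream, per-entry parallelism needs no coordination between workers, which is why the approach is simpler than parallelizing a single stream.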
- Is there an online reader for books from Libgen?
This shouldn't be an issue. There are JS libraries that can decompress zip (e.g. https://gildas-lormeau.github.io/zip.js/). Nowadays even huge C/C++ codebases can be compiled into JS via Emscripten.
- [HELP] Create password protected ZIP with JavaScript Library
- isoworker - universal multithreading with main-thread dependencies, 6kB
Well, you can build zip.js with fflate if you want to; see https://github.com/gildas-lormeau/zip.js/blob/master/rollup-fflate.config.js. I wasn't saying that zip.js is faster than fflate or any other library, just that it can compress files in parallel.
- Zip.js v2
rapidgzip
- Show HN: Rapidgzip – Parallel Gzip Decompressing with 10 GB/s
- Ebiggers/libdeflate: Heavily optimized DEFLATE/zlib/gzip library
I also did benchmarks with zlib and libarchivemount via their library interfaces here [0]. It has been a while since I ran them, so I forget the details. Unfortunately, I did not add libdeflate.
[0] https://github.com/mxmlnkn/rapidgzip/blob/master/src/benchma...
- Rapidgzip – Parallel Decompression and Seeking in Gzip (Knespel, Brunst – 2023) [pdf]
Hi, author here.
You are right that the index is the easy mode. Over the years, there have been many implementations trying to add such an index to the gzip metadata itself or as a sidecar file, bgzip probably being the best known. None of them really stuck, hence the need for a generic multi-threaded decompressor. A probably incomplete list of such implementations can be found in this issue: https://github.com/mxmlnkn/rapidgzip/issues/8
The index makes it so easy that I can simply delegate decompression to zlib. And since the paper was published, I've actually improved upon this by delegating to ISA-L / igzip instead, which is twice as fast. This is already in the 0.8.0 release.
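To illustrate why the index reduces the problem to plain zlib calls: each index entry stores a compressed offset at a deflate block boundary plus the last 32 KiB of decompressed data before it, which a raw inflater can take as a preset dictionary. A sketch with Node's built-in zlib bindings; the index entry is hypothetical, and a byte-aligned block boundary is assumed for simplicity (real indexes store bit offsets):

```js
import { inflateRawSync, constants } from "node:zlib";

// Hypothetical index entry: a byte-aligned deflate block boundary and
// the 32 KiB of plaintext preceding it (the LZ77 back-reference window).
const entry = {
  compressedOffset: 123456,
  window: Buffer.alloc(32 * 1024),
};

function decompressFrom(deflateStream, entry) {
  // The preset dictionary stands in for the data that back-references
  // would otherwise point into, so decoding can start mid-stream.
  // Z_SYNC_FLUSH returns whatever was decoded instead of throwing when
  // the input is cut short of a proper end-of-stream marker.
  return inflateRawSync(deflateStream.subarray(entry.compressedOffset), {
    dictionary: entry.window,
    finishFlush: constants.Z_SYNC_FLUSH,
  });
}
```

Since every chunk starts from a known offset and window, chunks can be handed to independent threads, which is what makes the indexed case "easy mode".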
As derived from Table 1, the false positive rate for deflate blocks with dynamic Huffman codes is one per 1 Tbit / 202 ≈ 5 Gbit (625 MB). For non-compressed blocks, it is roughly one per 500 KB; however, non-compressed blocks can basically be memcpy'd or skipped over, and the next deflate header checked without much latency. For dynamic blocks, on the other hand, the whole block must be decompressed first to find the next one. So the much higher false positive rate for non-compressed blocks doesn't introduce that much overhead.
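As a concrete example of why stored-block candidates are cheap to reject: a non-compressed block header can be validated with a couple of reads, since BTYPE must be 00 and the byte-aligned NLEN field must be the one's complement of LEN. A simplified sketch (rapidgzip tests every bit offset; this only shows the byte-aligned case, and the function name is illustrative):

```js
// Cheap plausibility test for a stored (BTYPE=00) deflate block whose
// header starts at bit 0 of byteOffset. After the 3 header bits and
// padding to the next byte boundary, LEN and NLEN follow; NLEN must
// equal ~LEN, so a mismatch rejects the candidate immediately.
function looksLikeStoredBlock(buf, byteOffset) {
  if (byteOffset + 5 > buf.length) return false;
  const header = buf[byteOffset];
  if ((header & 0b110) !== 0) return false; // BTYPE bits must be 00
  const len = buf.readUInt16LE(byteOffset + 1);
  const nlen = buf.readUInt16LE(byteOffset + 3);
  return (len ^ nlen) === 0xffff;
}
```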
I have some profiling built into rapidgzip, which is printed with -v, e.g., rapidgzip -v -d -o /dev/null 20xsilesia.tar.gz:
Time spent in block finder: 0.227751 s
- Intel QuickAssist Technology Zstandard Plugin for Zstandard
- Tool and Library for Parallel Gzip Decompression and Random Access
- Pigz: Parallel gzip for modern multi-processor, multi-core machines
I have not only implemented parallel decompression but also random access to offsets in the stream with https://github.com/mxmlnkn/pragzip. I did some benchmarks on some really beefy machines with 128 cores and was able to reach almost 20 GB/s decompression bandwidth. The single-core decoder still has lots of potential for optimization, though, because I had to write it from scratch.
- Parquet: More than just “Turbo CSV”
Decompression of arbitrary gzip files can be parallelized with pragzip: https://github.com/mxmlnkn/pragzip
- The Cost of Exception Handling
At the very least, you are duplicating logic without the exception. The check for EOF has to be done implicitly inside read anyway, because it has to fill the bit buffer with data from the byte buffer, or the byte buffer with data from the file. And if both fail, then we already know the result of eof, so there is no need to duplicate the EOF check in the outer loop that calls read.
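The pattern, sketched here in JavaScript (the linked commit is C++, and all names are illustrative): the refill inside read() is the only place that can detect end-of-data, so it reports it by throwing rather than making the caller re-test a condition read() has already computed.

```js
class EndOfFile extends Error {}

class BitReader {
  constructor(bytes) {
    this.bytes = bytes;
    this.position = 0;
  }

  // The refill check here is the one and only EOF test: when no more
  // data can be pulled in, read() throws instead of returning a
  // sentinel that the caller would have to test on every call.
  read() {
    if (this.position >= this.bytes.length) throw new EndOfFile();
    return this.bytes[this.position++];
  }
}

// Caller: no duplicated eof() test per iteration; the exception ends
// the loop exactly when read() runs out of data.
function consume(reader, process) {
  try {
    for (;;) process(reader.read());
  } catch (error) {
    if (!(error instanceof EndOfFile)) throw error;
  }
}
```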
Here is the full commit with ad-hoc benchmark results in the commit message:
https://github.com/mxmlnkn/pragzip/commit/0b1af498377838c30f...
and here the benchmarks I ran at that time:
https://github.com/mxmlnkn/pragzip/blob/0b1af498377838c30fea...
As you can see, it's part of my randomly seekable, multi-threaded gzip and bzip2 decompression libraries.
What you can also see in the commit message is that it wasn't a 50% time reduction but a 50% bandwidth increase, which translates to roughly a 33% time reduction (time scales as 1/1.5 = 2/3). It seems I remembered that partly wrong. But it still was a significant optimization for me.
- How Much Faster Is Making a Tar Archive Without Gzip?
- Show HN: Thread-Parallel Decompression and Random Access to Gzip Files (Pragzip)
What are some alternatives?
JSZip - Create, read and edit .zip files with Javascript
pigz - A parallel implementation of gzip for modern multi-processor, multi-core machines.
fast-zlib - Shared context synchronous compression
DirectStorage - DirectStorage for Windows is an API that allows game developers to unlock the full potential of high speed NVMe drives for loading game assets.
yazl - yet another zip library for node
QATzip - Compression Library accelerated by Intel® QuickAssist Technology
text-generator - A naive text generator built in JavaScript using Markov chains.
parquet-format - Apache Parquet
tar-transform - extract, transform and re-pack tarball entries in form of stream with very simple api
nvcomp - Repository for nvCOMP docs and examples. nvCOMP is a library for fast lossless compression/decompression on the GPU that can be downloaded from https://developer.nvidia.com/nvcomp.
tar-stream - tar-stream is a streaming tar parser and generator.
pixz - Parallel, indexed xz compressor