LZMAT vs zstd
| | LZMAT | zstd |
|---|---|---|
| Mentions | 0 | 58 |
| Stars | 3 | 17,077 |
| Growth | - | 1.8% |
| Activity | 0.0 | 9.6 |
| Latest commit | over 6 years ago | 3 days ago |
| Language | C | C |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LZMAT
We haven't tracked posts mentioning LZMAT yet.
Tracking mentions began in Dec 2020.
zstd
- PSA: ZSTD compression is broken (very slow), here's how to fix it
Look at the pull request in that bug: https://github.com/facebook/zstd/pull/3165
- The Bizarre Case of Zstd's Slow Performance on Arch Linux
zstd normally uses timespec_get() for measuring wall-clock time. But that's available only since C11. For older standards, it uses clock(), but that returns CPU time, which is wrong.
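The difference is easy to demonstrate. Below is a minimal standalone C sketch (not zstd's actual timing code) comparing the two clocks; `sleep()` is POSIX, so run it on a Unix-like system:

```c
#include <stdio.h>
#include <time.h>    /* timespec_get (C11), clock */
#include <unistd.h>  /* sleep (POSIX) */

int main(void) {
    struct timespec t0, t1;
    clock_t c0 = clock();
    timespec_get(&t0, TIME_UTC);

    sleep(1);  /* no CPU work: wall time advances, CPU time barely moves */

    timespec_get(&t1, TIME_UTC);
    clock_t c1 = clock();

    double wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double cpu  = (double)(c1 - c0) / CLOCKS_PER_SEC;

    printf("wall clock: %.3f s\n", wall);  /* ~1.000 */
    printf("CPU clock:  %.3f s\n", cpu);   /* ~0.000 */
    return 0;
}
```

Note that clock() also aggregates CPU time across all threads of the process and ignores time spent waiting, so a benchmark timed with it can diverge from real elapsed time in either direction.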
- Lizard – efficient compression with fast decompression
The thing I don't get about zstd is that their own github page shows lz4 is faster at both compression and decompression at the cost of some compression ratio: https://github.com/facebook/zstd
But most people I work with will pick zstd every time even in cases where decompression speed matters the most.
- Zstandard Worked Example
https://github.com/facebook/zstd/tree/dev/doc/educational_de... is a self-contained zstd decoder. I get a 64 KB dynamically linked executable after running "make" in that directory.
$ size harness
> Yikes; half a meg of code!
It's plausible that the lib you checked is the output of the project's default build target (zstd), which "(...) includes dictionary builder, benchmark, and supports decompression of legacy zstd formats"
https://github.com/facebook/zstd/tree/dev/programs
The project also provides another build target, zstd-small, which is "CLI optimized for minimal size; no dictionary builder, no benchmark, and no support for legacy zstd formats"
Also, take a look at what exactly is bundled with the binary.
Thanks for the feedback! I've opened an issue to track this [0]
* Levels 1-19 are the "standard" compression levels.
* Levels 20-22 are the "ultra" levels which require --ultra to use on the CLI. They allocate a lot of memory and are very slow.
* Level 0 is the default compression level, which is 3.
* Levels < 0 are the "fast" compression levels. They achieve speed by turning off Huffman compression, and by "accelerating" compression by a factor. Level -1 has acceleration factor 1, -2 has acceleration factor 2, and so on. So the minimum supported negative compression level is -131072, since the maximum acceleration factor is our block size. But in practice, I wouldn't think a negative level lower than -10 or -20 would be all that useful.
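For illustration, here is a minimal, self-contained sketch (assuming libzstd is installed; compile with -lzstd, input string is made up) that runs several of the levels described above through the single-shot ZSTD_compress() API, where passing level 0 selects the default:

```c
#include <stdio.h>
#include <string.h>
#include <zstd.h>

int main(void) {
    /* Made-up, repetitive input so even the fast levels find matches. */
    const char src[] = "zstd zstd zstd zstd zstd zstd zstd zstd";
    size_t const srcSize = sizeof src - 1;
    char dst[ZSTD_COMPRESSBOUND(sizeof src)];  /* worst-case output size */

    /* Level 0 maps to the default (3); negative levels trade ratio for speed. */
    int const levels[] = { -5, -1, 0, 3, 19 };
    for (size_t i = 0; i < sizeof levels / sizeof levels[0]; i++) {
        size_t const n = ZSTD_compress(dst, sizeof dst, src, srcSize, levels[i]);
        if (ZSTD_isError(n)) {
            fprintf(stderr, "level %d failed: %s\n", levels[i], ZSTD_getErrorName(n));
            continue;
        }
        printf("level %3d: %zu -> %zu bytes\n", levels[i], srcSize, n);
    }
    /* On current releases this prints the -131072 floor mentioned above. */
    printf("supported range: %d..%d\n", ZSTD_minCLevel(), ZSTD_maxCLevel());
    return 0;
}
```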
The first 4 bytes are the magic number and the last 4 bytes are the checksum [1] which you could always just chop off if you wanted (it's legal to omit the checksum, see the spec). That would get the total overhead down to 5 bytes.
[1]: https://github.com/facebook/zstd/blob/dev/doc/zstd_compressi...
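For reference, zstd.h exposes the 4-byte frame magic as ZSTD_MAGICNUMBER (0xFD2FB528, stored little-endian, i.e. 28 B5 2F FD on disk). A small illustrative checker, not part of zstd itself:

```c
#include <stdio.h>
#include <stdint.h>
#include <zstd.h>  /* ZSTD_MAGICNUMBER == 0xFD2FB528 */

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s file.zst\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    uint8_t b[4];
    if (fread(b, 1, 4, f) != 4) { fprintf(stderr, "short read\n"); fclose(f); return 1; }
    fclose(f);
    /* Assemble the little-endian 32-bit value: 28 B5 2F FD -> 0xFD2FB528. */
    uint32_t magic = (uint32_t)b[0] | (uint32_t)b[1] << 8
                   | (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
    printf("%s (magic 0x%08X)\n",
           magic == ZSTD_MAGICNUMBER ? "zstd frame" : "not a zstd frame",
           (unsigned)magic);
    return 0;
}
```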
The official CLI Windows binaries are here: https://github.com/facebook/zstd/releases/
AFAIK, they don't need any dependencies.
What are some alternatives?
LZ4 - Extremely Fast Compression algorithm
Snappy - A fast compressor/decompressor
LZMA - (Unofficial) Git mirror of LZMA SDK releases
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
ZLib - A massively spiffy yet delicately unobtrusive compression library.
brotli - Brotli compression format
zfs - OpenZFS on Linux and FreeBSD
LZFSE - LZFSE compression library and command line tool
LZHAM - Lossless data compression codec with LZMA-like ratios but 1.5x-8x faster decompression speed, C/C++
zlib-ng - zlib replacement with optimizations for "next generation" systems.
zlib - Cloudflare fork of zlib with massive performance improvements
rsync - An open source utility that provides fast incremental file transfer. It also has useful features for backup and restore operations among many other use cases.