LZMA vs zstd
| | LZMA | zstd |
|---|---|---|
| Mentions | 2 | 53 |
| Stars | 30 | 16,880 |
| Growth | - | 1.3% |
| Activity | 0.0 | 9.6 |
| Latest commit | over 3 years ago | 2 days ago |
| Language | C++ | C |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LZMA
ELI5: How exactly does WinRAR profit without a major loss with their current business plan?
Their SDK is in the public domain though.
Searching Compressed Space
And that links to https://www.7-zip.org/sdk.html; in the download, see the DOC folder for documentation. I really can't figure out how that somehow 'sends you back to Python's lzma'...
zstd
Zstandard Worked Example
https://github.com/facebook/zstd/tree/dev/doc/educational_de... is a self-contained zstd decoder. I get a 64 KB dynamically linked executable after running "make" in that directory.
$ size harness
> Yikes; half a meg of code!
It's plausible that the lib you checked is the output from the project's default build target (zstd), which "(...) includes dictionary builder, benchmark, and supports decompression of legacy zstd formats".
https://github.com/facebook/zstd/tree/dev/programs
The project also provides another build target, zstd-small, which is "CLI optimized for minimal size; no dictionary builder, no benchmark, and no support for legacy zstd formats".
Also, take a look at what exactly is bundled with the binary.
Thanks for the feedback! I've opened an issue to track this [0]
* Levels 1-19 are the "standard" compression levels.
* Levels 20-22 are the "ultra" levels which require --ultra to use on the CLI. They allocate a lot of memory and are very slow.
* Level 0 is the default compression level, which is 3.
* Levels < 0 are the "fast" compression levels. They achieve speed by turning off Huffman compression, and by "accelerating" compression by a factor. Level -1 has acceleration factor 1, -2 has acceleration factor 2, and so on. So the minimum supported negative compression level is -131072, since the maximum acceleration factor is our block size. But in practice, I wouldn't think a negative level lower than -10 or -20 would be all that useful.
The first 4 bytes are the magic number and the last 4 bytes are the checksum [1] which you could always just chop off if you wanted (it's legal to omit the checksum, see the spec). That would get the total overhead down to 5 bytes.
[1]: https://github.com/facebook/zstd/blob/dev/doc/zstd_compressi...
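To make the level list and the frame layout above concrete, here is a minimal sketch against libzstd's advanced API; it is not code from the thread, and the input buffer, the choice of levels 3 and -5, and the decision to disable the checksum are all illustrative assumptions. It compresses the same data at one standard and one negative level, omits the optional trailing 4-byte checksum, and checks that each frame begins with the little-endian magic number 0xFD2FB528 described in the spec.

```c
// Sketch: compress one buffer at two levels with libzstd (link with -lzstd).
// Level 3 is the default "standard" level; -5 is one of the "fast" negative levels.
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

static size_t compress_at_level(const void *src, size_t srcSize,
                                void *dst, size_t dstCap, int level)
{
    ZSTD_CCtx *cctx = ZSTD_createCCtx();
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, level);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 0);  /* omit the trailing 4-byte checksum */
    size_t written = ZSTD_compress2(cctx, dst, dstCap, src, srcSize);
    ZSTD_freeCCtx(cctx);
    if (ZSTD_isError(written)) {
        fprintf(stderr, "level %d: %s\n", level, ZSTD_getErrorName(written));
        exit(1);
    }
    return written;
}

int main(void)
{
    char src[4096];
    for (size_t i = 0; i < sizeof src; i++) src[i] = (char)(i % 17);  /* toy, compressible input */

    size_t cap = ZSTD_compressBound(sizeof src);
    void *dst = malloc(cap);

    int levels[] = { 3, -5 };  /* illustrative: one standard level, one negative "fast" level */
    for (int i = 0; i < 2; i++) {
        size_t n = compress_at_level(src, sizeof src, dst, cap, levels[i]);
        const unsigned char *b = dst;
        /* Every frame starts with the little-endian magic number 0xFD2FB528. */
        int magic_ok = (b[0] == 0x28 && b[1] == 0xB5 && b[2] == 0x2F && b[3] == 0xFD);
        printf("level %3d: %zu -> %zu bytes, magic %s\n",
               levels[i], sizeof src, n, magic_ok ? "ok" : "unexpected");
    }
    free(dst);
    return 0;
}
```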
The official CLI Windows binaries are here: https://github.com/facebook/zstd/releases/
AFAIK, they don't need any dependencies.
Choose wisely
gzip certainly doesn't help. Switching to zstd should be an easy bit of low-hanging fruit, at least for save times.
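As a rough illustration of how small that switch can be (this is not code from the thread): both zlib and libzstd expose a one-call path for compressing a buffer, so swapping them for save data is largely a matter of changing one function. The buffer contents and level choices below are made up, and the zlib call emits the zlib wrapper rather than the gzip container, but the underlying DEFLATE machinery is the same.

```c
// Sketch: compress the same "save" buffer with zlib and with libzstd
// (link with -lz -lzstd). Input and levels are illustrative.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>
#include <zstd.h>

int main(void)
{
    unsigned char save[8192];
    memset(save, 'A', sizeof save);          /* stand-in for a game save blob */

    /* zlib path: one call, level 6 is zlib's default. */
    uLongf zlen = compressBound(sizeof save);
    unsigned char *zout = malloc(zlen);
    int zrc = compress2(zout, &zlen, save, sizeof save, 6);

    /* zstd path: also one call, level 3 is zstd's default. */
    size_t scap = ZSTD_compressBound(sizeof save);
    unsigned char *sout = malloc(scap);
    size_t slen = ZSTD_compress(sout, scap, save, sizeof save, 3);

    if (zrc != Z_OK || ZSTD_isError(slen)) {
        fprintf(stderr, "compression failed\n");
        return 1;
    }
    printf("zlib: %lu bytes, zstd: %zu bytes\n", (unsigned long)zlen, slen);
    free(zout);
    free(sout);
    return 0;
}
```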
Casync – A Content-Addressable Data Synchronization Tool
Really wish this was part of official zstd (https://github.com/facebook/zstd/issues/395#issuecomment-535...) and not a contrib / separate tool.
How do I open/decompress .zst files?
This.... https://github.com/facebook/zstd
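Beyond the CLI (`zstd -d file.zst`, or `unzstd file.zst`), a minimal single-shot decompression sketch with libzstd could look like the following; the file name is a placeholder, and the sketch assumes the frame header records the decompressed size (the streaming API covers the general case).

```c
// Sketch: single-shot decompression of a small .zst file with libzstd (link with -lzstd).
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "example.zst";  /* placeholder file name */

    /* Read the whole compressed file into memory. */
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }
    fseek(f, 0, SEEK_END);
    long csize = ftell(f);
    fseek(f, 0, SEEK_SET);
    void *cbuf = malloc((size_t)csize);
    if (fread(cbuf, 1, (size_t)csize, f) != (size_t)csize) {
        fprintf(stderr, "read failed\n");
        return 1;
    }
    fclose(f);

    /* The frame header may record the decompressed size; fall back to streaming otherwise. */
    unsigned long long rsize = ZSTD_getFrameContentSize(cbuf, (size_t)csize);
    if (rsize == ZSTD_CONTENTSIZE_ERROR || rsize == ZSTD_CONTENTSIZE_UNKNOWN) {
        fprintf(stderr, "not a single zstd frame with a known size; use the streaming API\n");
        return 1;
    }

    void *rbuf = malloc((size_t)rsize);
    size_t got = ZSTD_decompress(rbuf, (size_t)rsize, cbuf, (size_t)csize);
    if (ZSTD_isError(got)) {
        fprintf(stderr, "decompression failed: %s\n", ZSTD_getErrorName(got));
        return 1;
    }
    printf("decompressed %zu bytes\n", got);

    free(cbuf);
    free(rbuf);
    return 0;
}
```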
What are some alternatives?
LZ4 - Extremely Fast Compression algorithm
Snappy - A fast compressor/decompressor
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
ZLib - A massively spiffy yet delicately unobtrusive compression library.
brotli - Brotli compression format
zfs - OpenZFS on Linux and FreeBSD
LZHAM - Lossless data compression codec with LZMA-like ratios but 1.5x-8x faster decompression speed, C/C++
LZFSE - LZFSE compression library and command line tool