zstd
Zstandard - Fast real-time compression algorithm (by facebook)
LZ4
Extremely Fast Compression algorithm (by lz4)
| | zstd | LZ4 |
|---|---|---|
| Mentions | 120 | 24 |
| Stars | 25,071 | 10,989 |
| Growth | 1.2% | 1.2% |
| Activity | 9.8 | 9.0 |
| Last commit | 8 days ago | 9 days ago |
| Language | C | C |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zstd
Posts with mentions or reviews of zstd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2025-03-07.
- Why do I find Rust inadequate for text compression codecs?
If zstd gives you an error and you don't handle it, the next calls may cause UB, so it kind of does both things.
https://github.com/facebook/zstd/blob/b16d193512d3ded82fd584...
- Zstandard v1.5.7 brings performance enhancements
- Zstandard v1.5.7
- Lzbench Compression Benchmark
(https://github.com/facebook/zstd/releases/tag/v1.5.6)
In my opinion, it is better to check the original repository: https://github.com/inikep/lzbench
- DeepSeek releases Janus Pro, a text-to-image generator [pdf]
This. Even their lesser-known work is pretty solid[1] (used it the other day and was frankly kind of amazed at how well it performed under the circumstances). Facebook/Meta sucks like most social media does, but, not unlike Elon Musk, they are on the record as having made some contributions to society as a whole.
[1]https://github.com/facebook/zstd
- New standards for a faster and more private Internet
I don't think so? It's only seekable with an additional index [1], just like any other compression scheme.
[1] https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...
- Large Text Compression Benchmark
- latest zstd v1.5.6 (Mar 30, 2024, https://github.com/facebook/zstd/releases)
- Current problems and mistakes of web scraping in Python and tricks to solve them!
You may have also noticed that a new supported data compression format, zstd, appeared some time ago. I haven't seen any backends that use it yet, but httpx will support decompression for it in versions above 0.28.0. I already use it to compress server response dumps in my projects; it shows incredible efficiency in asynchronous solutions with aiofiles.
- MLow: Meta's low bitrate audio codec
Zstd is a personal project? Surely it's not in the Facebook GitHub organization by accident? And you need to sign an agreement on code.facebook.com before they'll consider merging any contributions. That seems like an odd claim, unless it used to be a personal project and Facebook took it over
(https://github.com/facebook/zstd/blob/dev/CONTRIBUTING.md#co...)
- My First Arch Linux Installation
Unmount root and remount the subvolumes and the boot partition. noatime is used for better performance, and zstd for file compression:
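A sketch of what those mount commands might look like on a Btrfs setup (the device paths and the subvolume names `@` and `@home` are assumptions for illustration, not taken from the post):

```
umount -R /mnt
mount -o noatime,compress=zstd,subvol=@ /dev/nvme0n1p2 /mnt
mount -o noatime,compress=zstd,subvol=@home /dev/nvme0n1p2 /mnt/home
mount /dev/nvme0n1p1 /mnt/boot
```

The `compress=zstd` option enables transparent zstd compression for newly written files; `noatime` avoids an inode write on every read.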
LZ4
Posts with mentions or reviews of LZ4. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
- LZ4 v1.10.0 – Multicores Edition
- Number sizes for LZ77 compression
LZ4 is a bit more complicated, but seems faster: https://github.com/lz4/lz4/blob/dev/doc/lz4_Block_format.md
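As a sketch of what that block format document describes, here is a hypothetical, minimal pure-Python decoder for a single raw LZ4 block (no frame header, and none of the spec's end-of-block restrictions enforced), just to illustrate the token / literals / offset layout:

```python
def lz4_block_decompress(block: bytes) -> bytes:
    """Toy decoder for a raw LZ4 block: token, literals, 2-byte offset, match."""
    out = bytearray()
    i = 0
    while i < len(block):
        token = block[i]
        i += 1

        # High nibble: literal length, extended by 255-byte chunks when 15.
        lit_len = token >> 4
        if lit_len == 15:
            while True:
                b = block[i]; i += 1
                lit_len += b
                if b != 255:
                    break
        out += block[i:i + lit_len]
        i += lit_len

        # The last sequence is literals only: no offset or match follows.
        if i >= len(block):
            break

        # 2-byte little-endian offset back into already-produced output.
        offset = block[i] | (block[i + 1] << 8)
        i += 2

        # Low nibble: match length minus the 4-byte minimum match.
        match_len = (token & 0x0F) + 4
        if (token & 0x0F) == 15:
            while True:
                b = block[i]; i += 1
                match_len += b
                if b != 255:
                    break

        # Byte-by-byte copy so overlapping matches (offset < match length)
        # replicate earlier output, e.g. run-length encoding with offset 1.
        pos = len(out) - offset
        for _ in range(match_len):
            out.append(out[pos])
            pos += 1
    return bytes(out)
```

Feeding it a hand-assembled block shows the layout at work: `lz4_block_decompress(b'\x35abc\x03\x00\x10!')` decodes the token `0x35` as 3 literals (`abc`) plus a 9-byte match at offset 3, then a final literal-only sequence, producing `b'abcabcabcabc!'`.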
- Rsyncing 20TB locally
According to these values (https://github.com/lz4/lz4), you need around ten quite modern cores working in parallel to reach around 8 GB/s.
- An Intro to Data Compression
The popular NoSQL database Cassandra utilizes a compression algorithm called LZ4 to reduce the footprint of data at rest. LZ4 is characterized by very fast compression speed at the cost of a lower compression ratio. This is a design choice that allows Cassandra to maintain high write throughput while also benefiting from compression in some capacity.
- Micron Unveils 24GB and 48GB DDR5 Memory Modules | AMD EXPO and Intel XMP 3.0 compatible
Yeah, sure, when you have monster core counts. On regular systems, not so much; here's from their own GitHub page: it achieves, eh, 5 GB/s on memory-to-memory transfers, i.e. the best-case scenario. So, uh, no? I'm not even sure it's any better than the CPU decompressor Nvidia used.
- Cerbios Xbox Bios V2.2.0 BETA Released (1.0 - 1.6)
- zstd
> The downside of lz4 is that it can’t be configured to run at higher & slower compression ratios.
lz4 has some level of configurability? https://github.com/lz4/lz4/blob/v1.9.4/lib/lz4frame.h#L194
There's also LZ4_HC.
- Best archival/compression format for whole hard drives
Since nobody mentioned it, I'll add lz4 (https://github.com/lz4/lz4).
What are some alternatives?
When comparing zstd and LZ4 you can also consider the following projects:
haproxy - HAProxy Load Balancer's development branch (mirror of git.haproxy.org)
brotli - Brotli compression format
ZLib - A massively spiffy yet delicately unobtrusive compression library.
Snappy - A fast compressor/decompressor