LZ4 vs zstd

Compare LZ4 and zstd and see how they differ.

LZ4

Extremely Fast Compression algorithm (by lz4)

zstd

Zstandard - Fast real-time compression algorithm (by facebook)
              LZ4                                         zstd
Mentions      21                                          111
Stars         9,458                                       22,841
Growth        1.5%                                        1.1%
Activity      9.5                                         9.6
Last commit   3 days ago                                  2 days ago
Language      C                                           C
License       GNU General Public License v3.0 or later    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

LZ4

Posts with mentions or reviews of LZ4. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
  • Number sizes for LZ77 compression
    1 project | /r/compression | 30 Apr 2023
    LZ4 is a bit more complicated, but seems faster: https://github.com/lz4/lz4/blob/dev/doc/lz4_Block_format.md
  • Rsyncing 20TB locally
    2 projects | /r/zfs | 21 Mar 2023
    According to these values (https://github.com/lz4/lz4), you need around ten (10) fairly modern cores in parallel to reach around 8GB/s.
  • An Intro to Data Compression
    1 project | dev.to | 17 Feb 2023
    The popular NoSQL database Cassandra utilizes a compression algorithm called LZ4 to reduce the footprint of data at rest. LZ4 is characterized by very fast compression speed at the cost of a lower compression ratio. This design choice allows Cassandra to maintain high write throughput while still benefiting from compression in some capacity.
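
That speed-over-ratio trade-off shows up directly in LZ4's default C API, which has no compression-level knob on its fast path. A minimal round-trip sketch (buffer sizes and error handling kept deliberately simple):

```c
#include <stdio.h>
#include <string.h>
#include "lz4.h"

int main(void)
{
    const char src[] = "Repeated text compresses well. Repeated text compresses well.";
    const int src_size = (int)sizeof src;

    /* LZ4_compressBound(src_size) gives the guaranteed-sufficient capacity;
     * 256 bytes is comfortably above it for this tiny input. */
    char compressed[256];
    const int comp_size = LZ4_compress_default(src, compressed, src_size, (int)sizeof compressed);
    if (comp_size <= 0) return 1;            /* 0 means the output buffer was too small */

    char restored[sizeof src];
    const int dec_size = LZ4_decompress_safe(compressed, restored, comp_size, (int)sizeof restored);
    if (dec_size != src_size || memcmp(src, restored, (size_t)src_size) != 0) return 1;

    printf("%d -> %d bytes\n", src_size, comp_size);
    return 0;
}
```
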
  • Micron Unveils 24GB and 48GB DDR5 Memory Modules | AMD EXPO and Intel XMP 3.0 compatible
    1 project | /r/gadgets | 21 Jan 2023
    Yeah, sure, when you have monster core counts. On regular systems, not so much; here's from their own GitHub page: it achieves, eh, 5GB/s on memory-to-memory transfers, i.e. the best-case scenario. So, uh, no? I'm not even sure it's any better than the CPU decompressor Nvidia used.
  • Cerbios Xbox Bios V2.2.0 BETA Released (1.0 - 1.6)
    2 projects | /r/originalxbox | 31 Dec 2022
  • zstd
    8 projects | news.ycombinator.com | 19 Dec 2022
    > The downside of lz4 is that it can’t be configured to run at higher & slower compression ratios.

    lz4 has some level of configurability? https://github.com/lz4/lz4/blob/v1.9.4/lib/lz4frame.h#L194

    There's also LZ4_HC.
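
As noted above, the LZ4_HC entry points (lz4hc.h) do take an explicit level, trading compression speed for ratio while keeping the same fast decompressor. A hedged sketch of what that call looks like:

```c
#include "lz4hc.h"   /* LZ4_compress_HC, LZ4HC_CLEVEL_MAX (currently 12) */

/* Compress src_size bytes at a caller-chosen HC level. Returns the compressed
 * size, or 0 on failure. Decompression is unchanged: LZ4_decompress_safe()
 * from lz4.h handles HC-compressed data too. */
static int compress_hc(const char *src, int src_size,
                       char *dst, int dst_capacity, int level)
{
    if (level > LZ4HC_CLEVEL_MAX)
        level = LZ4HC_CLEVEL_MAX;
    return LZ4_compress_HC(src, dst, src_size, dst_capacity, level);
}
```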

  • Best archival/compression format for whole hard drives
    1 project | /r/DataHoarder | 7 Dec 2022
    Since nobody mentioned it, I'll add lz4 (https://github.com/lz4/lz4).
  • I'm new to this
    2 projects | /r/androidroot | 28 Nov 2022
    Get your bootloader unlocked via Download mode and then obtain your stock firmware, preferably for your current region https://samfw.com (Download mode: CARRIER_CODE). Get the boot image from AP with 7zip, unpack from LZ4 with https://github.com/lz4/lz4/releases (drag and drop), patch with Magisk https://github.com/topjohnwu/magisk/releases/latest, grab the new image, name it "boot.img" and pack it into a .tar with 7zip and flash to AP with odin https://odindownload.com
  • An efficient image format for SDL
    4 projects | /r/gamedev | 28 Sep 2022
    After some investigations and experiments, I found out that it was the PNG compression (well, decompression I should say) that took a while. So I've made some experiments using the LZ4 compression library, which is focused on decompression speed, and it turned out to be an excellent solution!
  • how to root Samsung galaxy note 10 plus 5g(SM-N976B
    1 project | /r/androidroot | 21 Jul 2022
    Root with magisk: whether you use OneUI ≤3 or 4, patch the specific image needed for it (pre 4: boot, after 4: recovery) and flash it to the device. Boot it and enjoy root. https://github.com/lz4/lz4/releases can help extracting it from the AP tarball.

zstd

Posts with mentions or reviews of zstd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-06-12.
  • MLow: Meta's low bitrate audio codec
    1 project | news.ycombinator.com | 13 Jun 2024
    Zstd is a personal project? Surely it's not by accident in the Facebook GitHub organization? And that you need to sign a contract on code.facebook.com before they'll consider merging any contributions? That seems like an odd claim, unless it used to be a personal project and Facebook took it over

    (https://github.com/facebook/zstd/blob/dev/CONTRIBUTING.md#co...)

  • My First Arch Linux Installation
    3 projects | dev.to | 12 Jun 2024
    Unmount root and remount the subvolumes and the boot partition. noatime is used for better performance, and zstd for file compression:
  • Rethinking string encoding: a 37.5% space efficient encoding than UTF-8 in Fury
    2 projects | news.ycombinator.com | 7 May 2024
    > In such cases, the serialized binary are mostly in 200~1000 bytes. Not big enough for zstd to work

    You're not referring to the same dictionary that I am. Look at --train in [1].

    If you have a training corpus of representative data, you can generate a dictionary that you preshare on both sides which will perform much better for very small binaries (including 200-1k bytes).

    If you want maximum flexibility (i.e. you don't know the universe of representative messages ahead of time or you want maximum compression performance), you can gather this corpus transparently as messages are generated & then generate a dictionary & attach it as sideband metadata to a message. You'll probably need to defer the decoding if it references a dictionary not yet received (i.e. send delivers messages out-of-order from generation). There are other techniques you can apply, but the general rule is that your custom encoding scheme is unlikely to outperform zstd + a representative training corpus. If it does, you'd need to actually show this rather than try to argue from first principles.

    [1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
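
The `--train` mode referenced in [1] is also exposed programmatically through zdict.h, so a service can build a dictionary at runtime from a corpus of representative messages. A rough sketch with illustrative parameter names:

```c
#include <zdict.h>   /* ZDICT_trainFromBuffer, ZDICT_isError */

/* Train a dictionary from nb_samples representative messages concatenated
 * into `samples`, with sample_sizes[i] giving each message's length.
 * Returns the dictionary size written into `dict`, or 0 on error.
 * Equivalent in spirit to running `zstd --train` on sample files. */
static size_t train_dictionary(void *dict, size_t dict_capacity,
                               const void *samples,
                               const size_t *sample_sizes, unsigned nb_samples)
{
    size_t n = ZDICT_trainFromBuffer(dict, dict_capacity,
                                     samples, sample_sizes, nb_samples);
    return ZDICT_isError(n) ? 0 : n;
}
```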

  • Drink Me: (Ab)Using a LLM to Compress Text
    2 projects | news.ycombinator.com | 4 May 2024
    > Doesn't take large amount of GPU resources

    This is an understatement, zstd dictionary compression and decompression are blazingly fast: https://github.com/facebook/zstd/blob/dev/README.md#the-case...

    My real-world use case for this was JSON files in a particular schema, and the results were fantastic.
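
For reference, the dictionary fast path described above looks roughly like this in zstd's C API: the trained dictionary is digested once into a ZSTD_CDict and then reused across many small inputs. The names and the per-call lifetime here are illustrative; in practice the CDict would be created once and cached:

```c
#include <zstd.h>    /* ZSTD_createCCtx, ZSTD_createCDict, ZSTD_compress_usingCDict */

/* Compress one small message with a preshared dictionary (e.g. the output of
 * ZDICT_trainFromBuffer). Returns the compressed size, or 0 on error. */
static size_t compress_with_dict(void *dst, size_t dst_cap,
                                 const void *msg, size_t msg_len,
                                 const void *dict_buf, size_t dict_len, int level)
{
    ZSTD_CCtx  *cctx  = ZSTD_createCCtx();
    ZSTD_CDict *cdict = ZSTD_createCDict(dict_buf, dict_len, level);
    size_t written = (cctx && cdict)
        ? ZSTD_compress_usingCDict(cctx, dst, dst_cap, msg, msg_len, cdict)
        : 0;
    ZSTD_freeCDict(cdict);   /* the free functions accept NULL */
    ZSTD_freeCCtx(cctx);
    return ZSTD_isError(written) ? 0 : written;
}
```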

  • SQLite VFS for ZSTD seekable format
    2 projects | news.ycombinator.com | 26 Apr 2024
    This VFS will read a sqlite file after it has been compressed using [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in README.
  • Chrome Feature: ZSTD Content-Encoding
    10 projects | news.ycombinator.com | 1 Apr 2024
    Of course, you may get different results with another dataset.

    gzip (zlib -6) [ratio=32%] [compr=35Mo/s] [dec=407Mo/s]

    zstd (zstd -2) [ratio=32%] [compr=356Mo/s] [dec=1067Mo/s]

    NB1: The default for zstd is -3, but the table only had -2. The difference is probably small. The range is 1-22 for zstd and 1-9 for gzip.

    NB2: The default program for gzip (at least on Debian) is the executable from zlib. With my workflows, libdeflate-gzip is compatible and noticeably faster.

    NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases

    For high compression, according to this benchmark, xz can do slightly better if you're willing to pay a ~10× penalty on decompression speed.

    xz -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]

    zstd -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]
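
For context on the level numbers in this benchmark, zstd's one-shot C API takes the level as an explicit argument; a small sketch (the allocation strategy is illustrative):

```c
#include <stdlib.h>
#include <zstd.h>    /* ZSTD_compress, ZSTD_compressBound, ZSTD_maxCLevel */

/* One-shot compression at an explicit level. Level 3 is the library default;
 * ZSTD_maxCLevel() reports the top of the range (22 in current releases).
 * Returns the size written into *out (caller frees), or 0 on error. */
static size_t compress_at_level(const void *src, size_t src_len,
                                void **out, int level)
{
    if (level > ZSTD_maxCLevel())
        level = ZSTD_maxCLevel();
    size_t cap = ZSTD_compressBound(src_len);    /* worst-case output size */
    *out = malloc(cap);
    if (*out == NULL)
        return 0;
    size_t n = ZSTD_compress(*out, cap, src, src_len, level);
    if (ZSTD_isError(n)) { free(*out); *out = NULL; return 0; }
    return n;
}
```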

  • Zstandard v1.5.6 – Chrome Edition
    1 project | news.ycombinator.com | 26 Mar 2024
  • Optimizing Rabin-Karp Hashing
    1 project | news.ycombinator.com | 9 Mar 2024
    Compression, synchronization, and backup systems often use a rolling hash to implement "content-defined chunking", an effective form of deduplication.

    In optimized implementations, Rabin-Karp is likely to be the bottleneck. See for instance https://github.com/facebook/zstd/pull/2483 which replaces a Rabin-Karp variant by a >2x faster Gear-Hashing.
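
The linked PR is specific to zstd's internals, but the Gear-hashing idea it adopts can be sketched in a few lines: a table-driven rolling hash whose low bits decide where chunk boundaries fall. The table, seed, and mask below are illustrative, not the values used by zstd:

```c
#include <stddef.h>
#include <stdint.h>

/* Gear hashing for content-defined chunking (generic sketch). Each byte value
 * maps to a pseudo-random 64-bit "gear"; because of the shift, the hash only
 * depends on roughly the last 64 bytes seen. */
static uint64_t gear[256];

static void gear_init(void)
{
    uint64_t x = 0x9E3779B97F4A7C15ULL;           /* arbitrary seed */
    for (int i = 0; i < 256; i++) {
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;  /* xorshift64 */
        gear[i] = x;
    }
}

/* Return the length of the next chunk: a boundary is declared wherever the
 * low 13 bits of the rolling hash are all zero (~8 KiB average chunks).
 * Identical content yields identical cut points even after earlier edits,
 * which is what makes this useful for deduplication. Real implementations
 * also enforce minimum and maximum chunk sizes. */
static size_t next_chunk_len(const uint8_t *data, size_t len)
{
    const uint64_t mask = (1u << 13) - 1;
    uint64_t h = 0;
    for (size_t i = 0; i < len; i++) {
        h = (h << 1) + gear[data[i]];
        if ((h & mask) == 0)
            return i + 1;
    }
    return len;
}
```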

  • Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
    7 projects | news.ycombinator.com | 17 Feb 2024
  • Cyberpunk 2077 dev release
    1 project | /r/gamedev | 11 Dec 2023
    Get the data: https://publicdistst.blob.core.windows.net/data/root.tar.zst or magnet:?xt=urn:btih:84931cd80409ba6331f2fcfbe64ba64d4381aec5&dn=root.tar.zst
    How to extract: https://github.com/facebook/zstd
    Linux (Debian): `sudo apt install zstd`, then `tar -I 'zstd -d -T0' -xvf root.tar.zst`

What are some alternatives?

When comparing LZ4 and zstd you can also consider the following projects:

Snappy - A fast compressor/decompressor

brotli - Brotli compression format

LZMA - (Unofficial) Git mirror of LZMA SDK releases

7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard

ZLib - A massively spiffy yet delicately unobtrusive compression library.

LZFSE - LZFSE compression library and command line tool

haproxy - HAProxy Load Balancer's development branch (mirror of git.haproxy.org)

