rsync VS zstd

Compare rsync vs zstd and see what their differences are.

rsync

An open source utility that provides fast incremental file transfer. It also has useful features for backup and restore operations among many other use cases. (by RsyncProject)
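
As a rough illustration of the incremental-transfer and backup use case described above, a typical invocation might look like the sketch below (the paths and host name are placeholders, not taken from this page):

    # Mirror a local directory to a remote backup host; on subsequent runs only
    # changed portions of files are transferred.
    # -a  archive mode (recursive, preserves permissions, times, symlinks)
    # -v  verbose output, -z compress data in transit
    # --delete  remove destination files that no longer exist in the source
    rsync -avz --delete /home/user/projects/ backup@backup-host:/srv/backups/projects/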

zstd

Zstandard - Fast real-time compression algorithm (by facebook)
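
For a quick feel of the command-line interface, a minimal compress/decompress round trip might look like this (file names are placeholders):

    # Compress (writes data.tar.zst; the original file is kept by default)
    zstd data.tar
    # Decompress to a new file (the original data.tar still exists)
    zstd -d data.tar.zst -o data.tar.restored
    # Trade speed for ratio with an explicit level
    # (1 = fastest; 19 is the highest level without --ultra)
    zstd -19 data.tar -o data.tar.max.zst
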
                rsync                                       zstd
Mentions        41                                          115
Stars           2,767                                       23,504
Growth          4.4%                                        1.7%
Activity        7.1                                         9.5
Latest commit   3 months ago                                2 days ago
Language        C                                           C
License         GNU General Public License v3.0 or later    GNU General Public License v3.0 or later

The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

rsync

Posts with mentions or reviews of rsync. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-09-06.

zstd

Posts with mentions or reviews of zstd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-09-26.
  • New standards for a faster and more private Internet
    2 projects | news.ycombinator.com | 26 Sep 2024
    I don't think so? It's only seekable with an additional index [1], just like any other compression scheme.

    [1] https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...

  • Large Text Compression Benchmark
    1 project | news.ycombinator.com | 20 Sep 2024
    - latest zstd v1.5.6 ( Mar 30, 2024 https://github.com/facebook/zstd/releases )
  • Current problems and mistakes of web scraping in Python and tricks to solve them!
    21 projects | dev.to | 22 Aug 2024
You may have also noticed that zstd appeared some time ago as a newly supported data compression format. I haven't seen any backends that use it yet, but httpx will support decompression in versions above 0.28.0. I already use it to compress server response dumps in my projects; it shows incredible efficiency in asynchronous solutions with aiofiles.
  • MLow: Meta's low bitrate audio codec
    1 project | news.ycombinator.com | 13 Jun 2024
Zstd is a personal project? Surely it's not by accident that it's in the Facebook GitHub organization? And that you need to sign a contract on code.facebook.com before they'll consider merging any contributions? That seems like an odd claim, unless it used to be a personal project and Facebook took it over.

    (https://github.com/facebook/zstd/blob/dev/CONTRIBUTING.md#co...)

  • My First Arch Linux Installation
    3 projects | dev.to | 12 Jun 2024
    Unmount root and remount the subvolumes and the boot partition. noatime is used for better performance, zstd as the file compression:
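    (The post's exact commands aren't included in this excerpt; below is a representative sketch, with placeholder device and subvolume names.)

        # Remount the btrfs subvolumes with noatime and transparent zstd compression,
        # then mount the boot partition (adjust devices and subvolumes to your layout)
        umount -R /mnt
        mount -o noatime,compress=zstd,subvol=@ /dev/nvme0n1p2 /mnt
        mkdir -p /mnt/home /mnt/boot
        mount -o noatime,compress=zstd,subvol=@home /dev/nvme0n1p2 /mnt/home
        mount /dev/nvme0n1p1 /mnt/boot
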
  • Rethinking string encoding: a 37.5% space efficient encoding than UTF-8 in Fury
    2 projects | news.ycombinator.com | 7 May 2024
    > In such cases, the serialized binaries are mostly 200~1000 bytes. Not big enough for zstd to work

    You're not referring to the same dictionary that I am. Look at --train in [1].

    If you have a training corpus of representative data, you can generate a dictionary that you preshare on both sides which will perform much better for very small binaries (including 200-1k bytes).

    If you want maximum flexibility (i.e. you don't know the universe of representative messages ahead of time or you want maximum compression performance), you can gather this corpus transparently as messages are generated & then generate a dictionary & attach it as sideband metadata to a message. You'll probably need to defer the decoding if it references a dictionary not yet received (i.e. messages may be delivered out of order relative to when they were generated). There are other techniques you can apply, but the general rule is that your custom encoding scheme is unlikely to outperform zstd + a representative training corpus. If it does, you'd need to actually show this rather than try to argue from first principles.

    [1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
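
    In command-line terms, the dictionary workflow described above might look roughly like this (file names are placeholders; --train, -D and -d are standard zstd options):

        # Train a shared dictionary from a corpus of representative small messages
        zstd --train corpus/*.json -o shared.dict
        # Sender: compress a small message against the pre-shared dictionary
        zstd -D shared.dict message.json -o message.json.zst
        # Receiver: decompress with the same dictionary
        zstd -D shared.dict -d message.json.zst -o message.decoded.json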

  • Drink Me: (Ab)Using a LLM to Compress Text
    2 projects | news.ycombinator.com | 4 May 2024
    > Doesn't take large amount of GPU resources

    This is an understatement, zstd dictionary compression and decompression are blazingly fast: https://github.com/facebook/zstd/blob/dev/README.md#the-case...

    My real-world use case for this was JSON files in a particular schema, and the results were fantastic.

  • SQLite VFS for ZSTD seekable format
    2 projects | news.ycombinator.com | 26 Apr 2024
    This VFS will read a sqlite file after it has been compressed using [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in the README.
  • Chrome Feature: ZSTD Content-Encoding
    10 projects | news.ycombinator.com | 1 Apr 2024
    Of course, you may get different results with another dataset.

    gzip (zlib -6) [ratio=32%] [compr=35Mo/s] [dec=407Mo/s]

    zstd (zstd -2) [ratio=32%] [compr=356Mo/s] [dec=1067Mo/s]

    NB1: The default for zstd is -3, but the table only had -2. The difference is probably small. The range is 1-22 for zstd and 1-9 for gzip.

    NB2: The default program for gzip (at least with Debian) is the executable from zlib. With my workflows, libdeflate-gzip is compatible and noticeably faster.

    NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases

    For high compression, according to this benchmark, xz can do slightly better if you're willing to pay a 10× penalty on decompression.

    xz -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]

    zstd -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]
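
    (To reproduce numbers like these on your own data, zstd's built-in benchmark mode is a reasonable starting point; the file name below is a placeholder.)

        # Benchmark levels 1 through 19 on a sample file, reporting ratio plus
        # compression and decompression speed for each level
        zstd -b1 -e19 sample.dat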

  • Zstandard v1.5.6 – Chrome Edition
    1 project | news.ycombinator.com | 26 Mar 2024

What are some alternatives?

When comparing rsync and zstd you can also consider the following projects:

rclone - "rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Azure Blob, Azure Files, Yandex Files

LZ4 - Extremely Fast Compression algorithm

syncthing-android - Wrapper of syncthing for Android.

Snappy - A fast compressor/decompressor

libcurl - A command line tool and library for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS. libcurl offers a myriad of powerful features

LZMA - (Unofficial) Git mirror of LZMA SDK releases

dupeguru - Find duplicate files

7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard

rmlint - Extremely fast tool to remove duplicates and other lint from your filesystem

ZLib - A massively spiffy yet delicately unobtrusive compression library.

tailscale - The easiest, most secure way to use WireGuard and 2FA.

brotli - Brotli compression format
