repairwheel VS zstd

Compare repairwheel vs zstd and see what their differences are.

zstd

Zstandard - Fast real-time compression algorithm (by facebook)
             repairwheel  zstd
Mentions     3            109
Stars        29           22,581
Growth       -            0.8%
Activity     7.1          9.6
Last commit  6 days ago   5 days ago
Language     Python       C
License      MIT License  GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

repairwheel

Posts with mentions or reviews of repairwheel. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-22.
  • Ask HN: What rabbit hole(s) did you dive into recently?
    12 projects | news.ycombinator.com | 22 Apr 2024
    I got into cross-compiling Python wheels (e.g., building macos wheels on linux and vice versa). Zig's `zig cc` does much of the heavy lifting, but one step in building a portable wheel is the "repair" process which vends native library dependencies into the wheel, necessitating binary patching (auditwheel does this for linux, delocate for macos).

    I wanted to be able to do this cross platform, so I re-implemented ELF patching and Mach-O patching and adhoc signing in Python, and wrapped them into a tool called repairwheel: https://github.com/jvolkman/repairwheel
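
    To make the "repair" step above concrete, here is a minimal, illustrative sketch (not repairwheel's actual code) of its starting point: opening a wheel, which is just a zip archive, and locating the native binaries whose dependencies would need to be vendored and patched. The wheel filename is hypothetical.

    ```python
    import zipfile

    # Extensions and libraries that a repair tool would need to inspect.
    NATIVE_SUFFIXES = (".so", ".dylib", ".pyd", ".dll")

    def native_files_in_wheel(wheel_path: str) -> list[str]:
        """Return paths of native binaries contained in a wheel (a zip archive)."""
        with zipfile.ZipFile(wheel_path) as wheel:
            return [name for name in wheel.namelist() if name.endswith(NATIVE_SUFFIXES)]

    # Hypothetical wheel name, for illustration only.
    for name in native_files_in_wheel("zstandard-0.22.0-cp312-cp312-linux_x86_64.whl"):
        print(name)
    ```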

  • Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
    7 projects | news.ycombinator.com | 17 Feb 2024
    I'll plug some work I've been doing to (attempt to) enable cross compilation of Python wheels. I put together a small example [1] that builds the zstandard wheel, and can build macos wheels on linux and linux wheels on macos using zig cc.

    macos wheels must still be adhoc signed (codesign) and binary patched (install_name_tool), so I re-implemented those functions in Python [2].

    [1] https://github.com/jvolkman/bazel-pycross-zstandard-example

    [2] https://github.com/jvolkman/repairwheel/tree/main/src/repair...
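
    As an illustration of the kind of binary-format work that re-implementing install_name_tool and codesign entails, here is a small sketch (not taken from repairwheel) that parses just the first two fields of a Mach-O header to identify the CPU architecture; the file path is a placeholder.

    ```python
    import struct

    MH_MAGIC_64 = 0xFEEDFACF      # little-endian 64-bit Mach-O magic
    CPU_TYPE_X86_64 = 0x01000007  # CPU_TYPE_X86 | CPU_ARCH_ABI64
    CPU_TYPE_ARM64 = 0x0100000C   # CPU_TYPE_ARM | CPU_ARCH_ABI64

    def macho_cpu_type(path: str) -> str:
        """Read the magic and cputype fields at the start of a Mach-O binary."""
        with open(path, "rb") as f:
            magic, cputype = struct.unpack("<II", f.read(8))
        if magic != MH_MAGIC_64:
            return "not a little-endian 64-bit Mach-O file"
        return {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}.get(cputype, hex(cputype))

    print(macho_cpu_type("path/to/some_extension.so"))  # placeholder path
    ```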

  • Sunday Daily Thread: What's everyone working on this week?
    4 projects | /r/Python | 22 Apr 2023
    I mixed auditwheel, delocate, and delvewheel into a single tool called repairwheel and reimplemented all of the required external tools (patchelf, otool, codesign, etc.) in pure python.
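
    A hedged usage sketch of driving repairwheel from a build script. The flag names below (-o for the output directory, -l for an extra library search path) are assumptions for illustration; consult `repairwheel --help` for the actual interface.

    ```python
    import subprocess

    subprocess.run(
        [
            "repairwheel",
            "dist/mypkg-1.0-cp312-cp312-linux_x86_64.whl",  # hypothetical wheel
            "-o", "wheelhouse",        # assumed: directory for the repaired wheel
            "-l", "build/native/lib",  # assumed: extra path searched for shared libraries
        ],
        check=True,
    )
    ```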

zstd

Posts with mentions or reviews of zstd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-07.
  • Rethinking string encoding: a 37.5% space efficient encoding than UTF-8 in Fury
    2 projects | news.ycombinator.com | 7 May 2024
    > In such cases, the serialized binary are mostly in 200~1000 bytes. Not big enough for zstd to work

    You're not referring to the same dictionary that I am. Look at --train in [1].

    If you have a training corpus of representative data, you can generate a dictionary that you preshare on both sides which will perform much better for very small binaries (including 200-1k bytes).

    If you want maximum flexibility (i.e. you don't know the universe of representative messages ahead of time or you want maximum compression performance), you can gather this corpus transparently as messages are generated & then generate a dictionary & attach it as sideband metadata to a message. You'll probably need to defer the decoding if it references a dictionary not yet received (i.e. send delivers messages out-of-order from generation). There are other techniques you can apply, but the general rule is that your custom encoding scheme is unlikely to outperform zstd + a representative training corpus. If it does, you'd need to actually show this rather than try to argue from first principles.

    [1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
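
    A minimal sketch of the pre-shared dictionary approach described in that comment, using the zstandard Python bindings (`pip install zstandard`); the sample messages are invented for illustration, and a real training corpus should be larger and more varied.

    ```python
    import zstandard

    # Invented corpus of small, structurally similar messages.
    events = ["click", "view", "scroll", "purchase"]
    samples = [
        f'{{"user_id": {i}, "event": "{events[i % 4]}", "page": "/product/{i % 97}"}}'.encode()
        for i in range(5000)
    ]

    # Train a small dictionary and pre-share it with both sender and receiver.
    dict_data = zstandard.train_dictionary(4096, samples)
    compressor = zstandard.ZstdCompressor(dict_data=dict_data)
    decompressor = zstandard.ZstdDecompressor(dict_data=dict_data)

    msg = b'{"user_id": 42, "event": "click", "page": "/product/7"}'
    compressed = compressor.compress(msg)
    assert decompressor.decompress(compressed) == msg
    print(f"{len(msg)} -> {len(compressed)} bytes")
    ```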

  • Drink Me: (Ab)Using a LLM to Compress Text
    2 projects | news.ycombinator.com | 4 May 2024
    > Doesn't take large amount of GPU resources

    This is an understatement, zstd dictionary compression and decompression are blazingly fast: https://github.com/facebook/zstd/blob/dev/README.md#the-case...

    My real-world use case for this was JSON files in a particular schema, and the results were fantastic.
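
    A rough, non-rigorous way to see that speed for yourself with the zstandard Python bindings: compress and decompress a few tens of megabytes of synthetic JSON-like data and report throughput (figures will vary by machine and dataset).

    ```python
    import json
    import time
    import zstandard

    # ~50 MB of synthetic JSON for a quick throughput check.
    payload = json.dumps(
        [{"id": i, "name": f"user{i}", "active": i % 2 == 0} for i in range(1_000_000)]
    ).encode()

    cctx = zstandard.ZstdCompressor(level=3)
    dctx = zstandard.ZstdDecompressor()

    t0 = time.perf_counter()
    blob = cctx.compress(payload)
    t1 = time.perf_counter()
    dctx.decompress(blob)
    t2 = time.perf_counter()

    mb = len(payload) / 1e6
    print(f"ratio {len(blob) / len(payload):.1%}, "
          f"compress {mb / (t1 - t0):.0f} MB/s, decompress {mb / (t2 - t1):.0f} MB/s")
    ```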

  • SQLite VFS for ZSTD seekable format
    2 projects | news.ycombinator.com | 26 Apr 2024
    This VFS will read a sqlite file after it has been compressed using [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in README.
  • Chrome Feature: ZSTD Content-Encoding
    10 projects | news.ycombinator.com | 1 Apr 2024
    Of course, you may get different results with another dataset.

    gzip (zlib -6) [ratio=32%] [compr=35Mo/s] [dec=407Mo/s]

    zstd (zstd -2) [ratio=32%] [compr=356Mo/s] [dec=1067Mo/s]

    NB1: The default for zstd is -3, but the table only had -2. The difference is probably small. The range is 1-22 for zstd and 1-9 for gzip.

    NB2: The default program for gzip (at least with Debian) is the executable from zlib. With my workflows, libdeflate-gzip is compatible and noticeably faster.

    NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases

    For a high compression, according to this benchmark xz can do slightly better, if you're willing to pay a 10× penalty on decompression.

    xz -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]

    zstd -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]
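
    The same shape of comparison is easy to reproduce on your own data with Python's bundled zlib module and the zstandard package; the input file below is a placeholder, and the ratios will of course depend on the dataset.

    ```python
    import zlib
    import zstandard

    with open("corpus.bin", "rb") as f:  # placeholder input file
        data = f.read()

    gz = zlib.compress(data, 6)                            # zlib/gzip default level
    zs = zstandard.ZstdCompressor(level=3).compress(data)  # zstd default level

    print(f"zlib -6: {len(gz) / len(data):.1%}")
    print(f"zstd -3: {len(zs) / len(data):.1%}")
    ```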

  • Zstandard v1.5.6 – Chrome Edition
    1 project | news.ycombinator.com | 26 Mar 2024
  • Optimizating Rabin-Karp Hashing
    1 project | news.ycombinator.com | 9 Mar 2024
    Compression, synchronization and backup systems often use rolling hash to implement "content-defined chunking", an effective form of deduplication.

    In optimized implementations, Rabin-Karp is likely to be the bottleneck. See for instance https://github.com/facebook/zstd/pull/2483 which replaces a Rabin-Karp variant by a >2x faster Gear-Hashing.
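
    For intuition, here is a toy content-defined chunker built on a Gear-style rolling hash, the general technique the linked PR adopts (this is not zstd's implementation): each byte shifts the hash and adds a per-byte random value, and a chunk boundary is cut whenever the low bits of the hash are all zero.

    ```python
    import random

    random.seed(0)
    GEAR = [random.getrandbits(64) for _ in range(256)]  # fixed per-byte random table
    MASK = (1 << 13) - 1                                 # ~8 KiB average chunk size
    WORD = (1 << 64) - 1

    def gear_chunks(data: bytes):
        """Yield (start, end) offsets of content-defined chunks."""
        h, start = 0, 0
        for i, b in enumerate(data):
            h = ((h << 1) + GEAR[b]) & WORD
            if (h & MASK) == 0:
                yield start, i + 1
                h, start = 0, i + 1
        if start < len(data):
            yield start, len(data)

    blob = random.randbytes(1 << 20)
    sizes = [end - start for start, end in gear_chunks(blob)]
    print(f"{len(sizes)} chunks, average {sum(sizes) / len(sizes):.0f} bytes")
    ```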

  • Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
    7 projects | news.ycombinator.com | 17 Feb 2024
  • Cyberpunk 2077 dev release
    1 project | /r/gamedev | 11 Dec 2023
    Get the data:
    https://publicdistst.blob.core.windows.net/data/root.tar.zst
    magnet:?xt=urn:btih:84931cd80409ba6331f2fcfbe64ba64d4381aec5&dn=root.tar.zst

    How to extract: https://github.com/facebook/zstd

    Linux (debian): `sudo apt install zstd`

    ```
    tar -I 'zstd -d -T0' -xvf root.tar.zst
    ```
  • Honey, I shrunk the NPM package · Jamie Magee
    1 project | news.ycombinator.com | 3 Oct 2023
    I've done that experiment with zstd before.

    https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md...

    Not sure about brotli though.

  • How in the world should we unpack archive.org zst files on Windows?
    2 projects | /r/Archiveteam | 24 May 2023
    If you want this functionality in zstd itself, check this out: https://github.com/facebook/zstd/pull/2349
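
    If installing command-line tools on Windows is the sticking point, the zstandard Python package can also stream-decompress a .zst file; the file names below are placeholders, and a .tar.zst still needs to be untarred afterwards (e.g. with the tarfile module).

    ```python
    import zstandard

    # Placeholder file names.
    with open("archive.tar.zst", "rb") as src, open("archive.tar", "wb") as dst:
        zstandard.ZstdDecompressor().copy_stream(src, dst)
    ```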

What are some alternatives?

When comparing repairwheel and zstd you can also consider the following projects:

tensorflow-windows-wheel - Tensorflow prebuilt binary for Windows

LZ4 - Extremely Fast Compression algorithm

cibuildwheel - 🎡 Build Python wheels for all the platforms with minimal configuration.

Snappy - A fast compressor/decompressor

twine - Utilities for interacting with PyPI

LZMA - (Unofficial) Git mirror of LZMA SDK releases

py2exe - Create standalone Windows programs from Python code

7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard

ZLib - A massively spiffy yet delicately unobtrusive compression library.

brotli - Brotli compression format

haproxy - HAProxy Load Balancer's development branch (mirror of git.haproxy.org)

LZFSE - LZFSE compression library and command line tool