bazel-cache VS zstd

Compare bazel-cache vs zstd and see what their differences are.

bazel-cache

Minimal cloud-oriented Bazel gRPC cache (by SaveTheRbtz)

zstd

Zstandard - Fast real-time compression algorithm (by facebook)
              bazel-cache           zstd
Mentions      1                     107
Stars         0                     22,445
Growth        -                     1.5%
Activity      0.0                   9.7
Last Commit   about 2 years ago     6 days ago
Language      Starlark              C
License       Apache License 2.0    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

bazel-cache

Posts with mentions or reviews of bazel-cache. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-22.
  • Casync – A Content-Addressable Data Synchronization Tool
    10 projects | news.ycombinator.com | 22 Apr 2022
    I did PoC experiments with compression, chunking, and IPFS here: https://github.com/SaveTheRbtz/bazel-cache

    If you need a mature compression implementation for Bazel, I would recommend using recent Bazel versions w/ gRPC-based bazel-remote: https://github.com/buchgr/bazel-remote

    Bazel nowadays supports end-to-end compression w/ `--experimental_remote_cache_compression`: https://github.com/bazelbuild/bazel/pull/14041
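
For readers who want to try that setup, a minimal client-side sketch, assuming a gRPC cache such as bazel-remote is already reachable; the endpoint is a placeholder and the exact flag set depends on the Bazel version:

    # .bazelrc sketch (hypothetical endpoint; compression flag from the PR above)
    build --remote_cache=grpcs://cache.example.com:9092
    build --experimental_remote_cache_compression

With compression enabled, blobs are transferred zstd-compressed end to end, provided the cache server also understands compressed blobs.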

zstd

Posts with mentions or reviews of zstd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-26.
  • Drink Me: (Ab)Using a LLM to Compress Text
    1 project | news.ycombinator.com | 4 May 2024
    Not sure how much performance would drop for realistic use. But there are also some knobs you can tune.

    Refer to:

    https://github.com/facebook/zstd/#dictionary-compression-how...

    https://github.com/facebook/zstd/wiki/Zstandard-as-a-patchin...

        $ man zstd
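
As a concrete illustration of the dictionary-compression and patching modes linked above, a hedged CLI sketch (file names are placeholders; see `man zstd` for the full option list):

    # Train a dictionary from many small, similar samples
    $ zstd --train samples/*.json -o samples.dict
    # Compress / decompress individual files with that dictionary
    $ zstd -D samples.dict record.json -o record.json.zst
    $ zstd -D samples.dict -d record.json.zst -o record.json
    # Patching mode: compress a new version against an old one as the reference
    $ zstd --patch-from=app-v1.bin app-v2.bin -o app-v1-to-v2.zst
    $ zstd --patch-from=app-v1.bin -d app-v1-to-v2.zst -o app-v2.bin

For large reference files, the man page documents additional window-size options that may be required.
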
  • SQLite VFS for ZSTD seekable format
    2 projects | news.ycombinator.com | 26 Apr 2024
    This VFS will read a SQLite file after it has been compressed using the [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in the README.
  • Chrome Feature: ZSTD Content-Encoding
    10 projects | news.ycombinator.com | 1 Apr 2024
    Of course, you may get different results with another dataset.

    gzip (zlib -6) [ratio=32%] [compr=35Mo/s] [dec=407Mo/s]

    zstd (zstd -2) [ratio=32%] [compr=356Mo/s] [dec=1067Mo/s]

    NB1: The default for zstd is -3, but the table only had -2. The difference is probably small. The range is 1-22 for zstd and 1-9 for gzip.

    NB2: The default program for gzip (at least with Debian) is the executable from zlib. With my workflows, libdeflate-gzip is compatible and noticeably faster.

    NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases

    For high compression, according to this benchmark, xz can do slightly better if you're willing to pay a 10× penalty on decompression.

    xz -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]

    zstd -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]
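
To reproduce numbers like these on your own data, zstd ships a built-in benchmark mode; a quick sketch, with the input file and levels as placeholders:

    # Benchmark levels 1 through 19 on a sample file; each line reports
    # compression ratio, compression speed, and decompression speed
    $ zstd -b1 -e19 corpus.tar
    # Benchmark a single level with a longer minimum measurement time (in seconds)
    $ zstd -b3 -i10 corpus.tar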

  • Zstandard v1.5.6 – Chrome Edition
    1 project | news.ycombinator.com | 26 Mar 2024
  • Optimizing Rabin-Karp Hashing
    1 project | news.ycombinator.com | 9 Mar 2024
    Compression, synchronization, and backup systems often use a rolling hash to implement "content-defined chunking", an effective form of deduplication.

    In optimized implementations, Rabin-Karp is likely to be the bottleneck. See for instance https://github.com/facebook/zstd/pull/2483 which replaces a Rabin-Karp variant by a >2x faster Gear-Hashing.

  • Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
    7 projects | news.ycombinator.com | 17 Feb 2024
  • Cyberpunk 2077 dev release
    1 project | /r/gamedev | 11 Dec 2023
    Get the data: https://publicdistst.blob.core.windows.net/data/root.tar.zst
    magnet:?xt=urn:btih:84931cd80409ba6331f2fcfbe64ba64d4381aec5&dn=root.tar.zst
    How to extract: https://github.com/facebook/zstd
    Linux (Debian): `sudo apt install zstd`
        tar -I 'zstd -d -T0' -xvf root.tar.zst
  • Honey, I shrunk the NPM package · Jamie Magee
    1 project | news.ycombinator.com | 3 Oct 2023
    I've done that experiment with zstd before.

    https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md...

    Not sure about brotli though.

  • How in the world should we unpack archive.org zst files on Windows?
    2 projects | /r/Archiveteam | 24 May 2023
    If you want this functionality in zstd itself, check this out: https://github.com/facebook/zstd/pull/2349
  • Release Zstandard v1.5.5 · facebook/zstd
    1 project | /r/linux | 11 Apr 2023