| | zstd | zfs |
|---|---|---|
| Mentions | 109 | 721 |
| Stars | 22,480 | 10,161 |
| Growth | 1.7% | 1.0% |
| Activity | 9.7 | 9.7 |
| Last commit | 1 day ago | 2 days ago |
| Language | C | C |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zstd
-
Rethinking string encoding: a 37.5% more space-efficient encoding than UTF-8 in Fury
> In such cases, the serialized binary are mostly in 200~1000 bytes. Not big enough for zstd to work
You're not referring to the same dictionary that I am. Look at --train in [1].
If you have a training corpus of representative data, you can generate a dictionary that you preshare on both sides which will perform much better for very small binaries (including 200-1k bytes).
If you want maximum flexibility (i.e. you don't know the universe of representative messages ahead of time, or you want maximum compression performance), you can gather this corpus transparently as messages are generated, then generate a dictionary and attach it as sideband metadata to a message. You'll probably need to defer decoding if a message references a dictionary you haven't received yet (i.e. the sender delivers messages out of order relative to generation).
There are other techniques you can apply, but the general rule is that your custom encoding scheme is unlikely to outperform zstd plus a representative training corpus. If it does, you'd need to actually show this rather than argue from first principles.
[1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
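A minimal sketch of that workflow with the zstd CLI (corpus and file names are illustrative):
```
# Train a shared dictionary from a corpus of representative small messages
zstd --train corpus/*.bin -o messages.dict

# Compress and decompress individual small payloads with the pre-shared dictionary
zstd -D messages.dict msg.bin -o msg.bin.zst
zstd -D messages.dict -d msg.bin.zst -o msg.out
```
Both sides only need the same `messages.dict`; the dictionary itself is built once, offline.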
-
Drink Me: (Ab)Using a LLM to Compress Text
> Doesn't take large amount of GPU resources
This is an understatement: zstd dictionary compression and decompression are blazingly fast. See https://github.com/facebook/zstd/blob/dev/README.md#the-case...
My real-world use case for this was JSON files in a particular schema, and the results were fantastic.
-
SQLite VFS for ZSTD seekable format
This VFS reads a SQLite file after it has been compressed using the [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in the README.
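A rough sketch of producing such a file, assuming the sample programs shipped in zstd's contrib tree (check each binary's usage output for the exact arguments):
```
# Build the seekable-format sample programs (seekable_compression etc.)
git clone https://github.com/facebook/zstd
make -C zstd/contrib/seekable_format/examples
# seekable_compression splits its input into independently decompressible
# frames, which is what lets the VFS read pages at arbitrary offsets
```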
-
Chrome Feature: ZSTD Content-Encoding
Of course, you may get different results with another dataset.
gzip (zlib -6) [ratio=32%] [compr=35 MB/s] [dec=407 MB/s]
zstd (zstd -2) [ratio=32%] [compr=356 MB/s] [dec=1067 MB/s]
NB1: The default for zstd is -3, but the table only had -2. The difference is probably small. The range is 1-22 for zstd and 1-9 for gzip.
NB2: The default gzip program (at least on Debian) is the executable from zlib. In my workflows, libdeflate-gzip is compatible and noticeably faster.
NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases
For high compression, according to this benchmark, xz can do slightly better if you're willing to pay a roughly 10× penalty on decompression speed.
xz -9 [ratio=23%] [compr=2.6 MB/s] [dec=88 MB/s]
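zstd's built-in benchmark mode makes it easy to reproduce this kind of table on your own data (file name is illustrative):
```
# Measure ratio and speed across compression levels 1..19 on a sample file
zstd -b1 -e19 corpus.bin
# For the gzip side, time the stock executable directly, keeping the input
time gzip -6 -k corpus.bin
```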
- Zstandard v1.5.6 – Chrome Edition
-
Optimizing Rabin-Karp Hashing
Compression, synchronization, and backup systems often use a rolling hash to implement "content-defined chunking", an effective form of deduplication.
In optimized implementations, Rabin-Karp is likely to be the bottleneck. See for instance https://github.com/facebook/zstd/pull/2483, which replaces a Rabin-Karp variant with Gear hashing that is more than 2× faster.
- Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
-
Cyberpunk 2077 dev release
Get the data: https://publicdistst.blob.core.windows.net/data/root.tar.zst
magnet:?xt=urn:btih:84931cd80409ba6331f2fcfbe64ba64d4381aec5&dn=root.tar.zst
How to extract: https://github.com/facebook/zstd
Linux (Debian): `sudo apt install zstd`
```
tar -I 'zstd -d -T0' -xvf root.tar.zst
```
-
Honey, I shrunk the NPM package · Jamie Magee
I've done that experiment with zstd before.
https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md...
Not sure about brotli though.
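A quick way to rerun that comparison on a package tarball with the standard CLIs (paths are illustrative):
```
# Compare the three codecs on the same input, keeping the originals
tar -cf package.tar package/
gzip -9 -k package.tar
brotli -q 11 -k package.tar
zstd -19 -k package.tar
ls -l package.tar*
```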
-
How in the world should we unpack archive.org zst files on Windows?
If you want this functionality in zstd itself, check this out: https://github.com/facebook/zstd/pull/2349
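Until then, the workaround on any platform with a stock zstd binary is the long-window flag; archive.org dumps are generally produced with a window larger than the default limit (file name is illustrative):
```
# Allow window sizes up to 2 GiB; without this, decompression of
# large-window frames is refused
zstd -d --long=31 archive.zst
```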
zfs
- OpenZFS 2.2.4 – Linux and FreeBSD – Advanced file system and volume manager
-
Ubuntu 24.04 LTS is so buggy you can't install the OS [video]
Be careful if you use ZFS-on-root: make sure not to snapshot bpool, or it will brick your system and require a complete reinstall.
https://github.com/openzfs/zfs/issues/13873
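In practice that means scoping recursive snapshots to the root pool only; a sketch, assuming the stock Ubuntu rpool/bpool layout:
```
# Snapshot the root pool recursively; leave the boot pool out of it
zfs snapshot -r rpool@pre-upgrade
# Avoid: zfs snapshot -r bpool@pre-upgrade   (the failure mode in the issue above)
```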
-
Radxa's SATA HAT makes compact Pi 5 NAS
> The only non-junk PCIe3 option that's even advertised here recently is the overpriced WD Red SN700.
Those WD drives seem to have some real issues, at least with ZFS and btrfs. :(
https://github.com/openzfs/zfs/discussions/14793
- OpenZFS: Fix corruption caused by MMAP flushing problems
- ZFS: Some copied files are still corrupted (chunks replaced by zeros)
-
DiskClick: Ever wanted to hear Old Hard drive sounds
IMO the "next fs" is just zfs. They fairly recently merged the RAIDZ expansion feature https://github.com/openzfs/zfs/pull/12225 and make regular improvements. If no filesystem has what you need today, zfs will probably be the first to have it "tomorrow."
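With that feature merged, growing a raidz vdev is a single attach (pool and device names are illustrative):
```
# Add one disk to an existing raidz1 vdev; data is reflowed online
zpool attach tank raidz1-0 /dev/sdd
zpool status tank   # reports the expansion's progress while it runs
```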
- OpenZFS bug reports for native encryption
-
A data corruption bug in OpenZFS?
https://github.com/openzfs/zfs/issues/15526#issuecomment-181...
> zpool get all tank | grep bclone
> kc3000 bcloneused 442M
> kc3000 bclonesaved 1.42G
> kc3000 bcloneratio 4.30x
> My understanding is this: If the result is 0 for both bcloneused and bclonesaved then it's safe to say that you don't have silent corruption.
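The check in that comment boils down to one command (the pool name is illustrative):
```
# Non-zero bcloneused/bclonesaved means block cloning has actually run,
# i.e. the code path implicated in the corruption reports was exercised
zpool get bcloneused,bclonesaved,bcloneratio tank
```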
-
Ask HN: What's your "it's not stupid if it works" story?
A couple years ago, I had an idea for convincing a filesystem to go faster using 2 compression steps instead of one. I couldn't see why it wouldn't work, and I also couldn't convince myself it should.
It seems to have worked out. [1]
[1] - https://github.com/openzfs/zfs/commit/f375b23c026aec00cc9527...
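If this is the zstd early-abort change (an LZ4 trial pass that bails out of the expensive zstd attempt on incompressible blocks), it engages automatically once zstd compression is enabled on a dataset, with no extra tuning (dataset name is illustrative):
```
# Higher zstd levels get the cheap pre-pass; enable and verify
zfs set compression=zstd-9 tank/data
zfs get compression tank/data
```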
-
ZFS Profiling on Arch Linux
https://github.com/openzfs/zfs/issues/7631
This is a long-standing issue with zvols that affects overall system stability, and it has no real solution yet.
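A minimal zvol to experiment against (pool and volume names are illustrative):
```
# Create a 10 GiB zvol; it appears as a block device under /dev/zvol/
zfs create -V 10G tank/ztest
ls -l /dev/zvol/tank/ztest
```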
What are some alternatives?
LZ4 - Extremely Fast Compression algorithm
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
Snappy - A fast compressor/decompressor
sanoid - These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)
LZMA - (Unofficial) Git mirror of LZMA SDK releases
RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.
snapper - Manage filesystem snapshots and allow undo of system modifications
ZLib - A massively spiffy yet delicately unobtrusive compression library.
zfsbootmenu - ZFS Bootloader for root-on-ZFS systems with support for snapshots and native full disk encryption
brotli - Brotli compression format
zrepl - One-stop ZFS backup & replication solution