|  | zfs | zstd |
|---|---|---|
| Mentions | 718 | 101 |
| Stars | 10,062 | 22,101 |
| Growth | 1.5% | 2.0% |
| Activity | 9.6 | 9.6 |
| Latest commit | 1 day ago | about 23 hours ago |
| Language | C | C |
| License | CDDL 1.0 | BSD 3-Clause / GPLv2 (dual) |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zfs
- OpenZFS bug reports for native encryption
-
A data corruption bug in OpenZFS?
https://github.com/openzfs/zfs/issues/15526#issuecomment-181...
> zpool get all tank | grep bclone
> kc3000 bcloneused 442M
> kc3000 bclonesaved 1.42G
> kc3000 bcloneratio 4.30x
> My understanding is this: If the result is 0 for both bcloneused and bclonesaved then it's safe to say that you don't have silent corruption.
It's a very rare race condition; the odds are very low that you were impacted. If you were, you would likely have noticed (e.g. heavy builds with files being moved around where files suddenly end up zero-length).
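The visible symptom described above can be checked for crudely with a filesystem scan. This is only an illustrative sketch, not the official reproducer from the issue; `/tank` is a placeholder for your dataset's mountpoint:

```shell
# Flag regular files whose contents are entirely zero bytes -- the visible
# symptom described above. /tank is a placeholder mountpoint.
find /tank -type f -size +0c -print0 |
while IFS= read -r -d '' f; do
  # tr strips NUL bytes; if nothing remains, the file was all zeros
  if ! tr -d '\0' < "$f" | head -c1 | grep -q .; then
    echo "all-zero file: $f"
  fi
done
```

Note that all-zero files can be perfectly legitimate (preallocated images, sparse files), so treat any hits as candidates to inspect, not proof of corruption.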
[0] https://bugs.gentoo.org/917224
[1] https://github.com/openzfs/zfs/issues/15526 (referenced in the article)
-
Ask HN: What's your "it's not stupid if it works" story?
A couple years ago, I had an idea for convincing a filesystem to go faster using 2 compression steps instead of one. I couldn't see why it wouldn't work, and I also couldn't convince myself it should.
It seems to have worked out. [1]
[1] - https://github.com/openzfs/zfs/commit/f375b23c026aec00cc9527...
-
ZFS Profiling on Arch Linux
https://github.com/openzfs/zfs/issues/7631
This is a long-standing issue with zvols which affects overall system stability, and has no real solution as of yet.
More details on this discussion: https://github.com/openzfs/zfs/issues/6824#issuecomment-1817... Basically, he is too busy to continue developing the encryption features but is able to review related work.
There is also discussion about using ZFS on LUKS in the same thread: https://github.com/openzfs/zfs/issues/6824#issuecomment-1819...
ZFS on top of LUKS seems to have its own issues though. :(
-
Tell HN: ZFS silent data corruption bugfix – my research results
https://github.com/openzfs/zfs/pull/15529#pullrequestreview-...
Honestly, ZFS is at its best only on the (Free)BSDs... On Linux it doesn't even use the page cache, and that conflicts severely with the L2ARC. I know there are plenty of people who don't care, but for real users it's not a practical option.
-
In OpenZFS and Btrfs, everyone was just guessing
A more "correct" fix has been posted https://github.com/openzfs/zfs/pull/15615
Current Master: https://github.com/openzfs/zfs/blob/acb33ee1c169bf1c1f687db1...
When I looked up the problem, I could only see the issue being discussed, probably leading to that commit, and the fix was still not in my current release (2.1) several years later. I'm wondering whether ZFS still holds to that high standard of reliability.
zstd
- Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
-
How in the world should we unpack archive.org zst files on Windows?
If you want this functionality in zstd itself, check this out: https://github.com/facebook/zstd/pull/2349
- ZSTD 1.5.5 is released with a corruption fix found at Google
-
Float Compression 3: Filters
Interesting to match with the observations from the practice of using ClickHouse[1][2] for time series:
1. Reordering to SoA (structure of arrays) helps a lot – this is the whole point of column-oriented databases.
2. Specialized codecs like Gorilla[3], DoubleDelta[4], and FPC[5] lose to simply using ZSTD[6] compression in most cases, both in compression ratio and in performance.
3. Specialized time-series DBMS like InfluxDB or TimescaleDB lose to general-purpose relational OLAP DBMS like ClickHouse [7][8][9].
[1] https://clickhouse.com/blog/optimize-clickhouse-codecs-compr...
[2] https://github.com/ClickHouse/ClickHouse
[3] https://clickhouse.com/docs/en/sql-reference/statements/crea...
[4] https://clickhouse.com/docs/en/sql-reference/statements/crea...
[5] https://clickhouse.com/docs/en/sql-reference/statements/crea...
[6] https://github.com/facebook/zstd/
[7] https://arxiv.org/pdf/2204.09795.pdf "SciTS: A Benchmark for Time-Series Databases in Scientific Experiments and Industrial Internet of Things" (2022)
[8] https://gitlab.com/gitlab-org/incubation-engineering/apm/apm...
[9] https://www.sciencedirect.com/science/article/pii/S187705091...
-
We're wasting money by only supporting gzip for raw DNA files
zstd has a long range mode, which lets it find redundancies a gigabyte away. Try --long and --long=31 for very long range mode.
zstd has delta / patch mode, which creates a file that stores the "patch" to create a new file from an old (reference) file. See https://github.com/facebook/zstd/wiki/Zstandard-as-a-patchin...
See the man page: https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
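The two modes described above can be exercised from the command line roughly as follows (file names are illustrative; a reasonably recent zstd is assumed):

```shell
# Long-range mode: a 2^31-byte (2 GiB) match window finds redundancy
# a gigabyte or more apart. The same --long value is needed to decompress.
zstd --long=31 -o reads.fastq.zst reads.fastq
zstd -d --long=31 reads.fastq.zst -o reads.roundtrip.fastq

# Patch mode: emit only the delta needed to rebuild new.fastq from old.fastq,
# then apply it against the same reference file.
zstd --patch-from=old.fastq new.fastq -o new.delta.zst
zstd -d --patch-from=old.fastq new.delta.zst -o new.rebuilt.fastq
```

Note that `--long=31` needs a 64-bit build and a couple of gigabytes of memory on both the compression and decompression side; smaller window logs (e.g. `--long=27`) trade reach for memory.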
-
Decompressing the ZST files on Windows tips
So I downloaded the Facebook tool https://github.com/facebook/zstd
-
zstd
They have a nice table on that page: https://github.com/facebook/zstd#benchmarks
Looking at that table, I think LZ4 is the winner. The compression ratio is not too far off, compression speed is slightly faster, decompression speed is significantly faster, the code is much simpler (so the compiled binary is smaller), and the project is unrelated to Facebook.
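Rather than taking the published table at face value, both tools ship a built-in benchmark mode, so the comparison is easy to reproduce on your own data (the file name here is illustrative):

```shell
# zstd: -b<N> benchmarks compression level N; -e<M> extends the run
# through level M. Prints ratio and compression/decompression speed.
zstd -b1 -e5 samplefile

# lz4 has an equivalent benchmark mode for a direct comparison.
lz4 -b1 samplefile
```

Benchmarking your own corpus matters because the ratio/speed trade-off shifts considerably with data type (text vs. already-compressed media, for example).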
What are some alternatives?
LZ4 - Extremely Fast Compression algorithm
Snappy - A fast compressor/decompressor
LZMA - (Unofficial) Git mirror of LZMA SDK releases
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
ZLib - A massively spiffy yet delicately unobtrusive compression library.
brotli - Brotli compression format
haproxy - HAProxy Load Balancer's development branch (mirror of git.haproxy.org)
LZFSE - LZFSE compression library and command line tool
zlib-ng - zlib replacement with optimizations for "next generation" systems.
zlib - Cloudflare fork of zlib with massive performance improvements
LZHAM - Lossless data compression codec with LZMA-like ratios but 1.5x-8x faster decompression speed, C/C++
mydumper - Official MyDumper Project