| | bees | zfs |
|---|---|---|
| Mentions | 21 | 720 |
| Stars | 589 | 10,140 |
| Growth | - | 0.6% |
| Activity | 4.0 | 9.7 |
| Latest commit | 15 days ago | about 20 hours ago |
| Language | C++ | C |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bees
- Converted ext4 to btrfs, tried defrag and ran out of space
Btrfs defrag 'will break up the reflinks of COW data' and 'may cause considerable increase of space usage depending on the broken up reflinks'. To fix this, I would run bees to deduplicate the now-duplicated data. It may be worth doing this from e.g. a livedisk, though, as out-of-space errors can cause things to break (so don't upgrade packages until you fix this).
- Introducing Pins: Permanent Nix Binary Storage
Figuring out which paths are needed outside gcroots'ed closures is pretty complicated. If you're using flakes, the main issue is duplicates, so store optimization and bees may help. With channels, once you update a channel you might as well gc everything else.
- rule
bees
- Should you remove duplicate files?
- Poke holes in my git-annex + ZFS offline storage system
I felt more confident with the code/developer/docs. The author knows his stuff regarding btrfs. Like, look at this, it's amazing: https://github.com/Zygo/bees/blob/master/docs/btrfs-kernel.md
- Anyone running Bees? Or deduping data some other way?
I have some time again and wondering if anyone's got Bees, https://github.com/Zygo/bees, running on their Synology.
- The goal: Use Fedora 37 with Snapper to get a "riceable" Linux desktop that can be rolled back like a time machine (and some comments on why I don't use Silverblue)
Even if NixOS doesn't support sending deduplicating syscalls to the kernel, you could use the Btrfs deduping daemon called bees to slowly save space over time. There might be an equivalent for ZFS, too.
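The "deduplicating syscalls" referred to here are Linux's `FIDEDUPERANGE` ioctl, which userspace tools like bees and duperemove use to ask the kernel to share identical extents between files. As a rough sketch (not bees' actual code; the helper names are mine, struct layouts follow `linux/fs.h`):

```python
import fcntl
import struct

# _IOWR(0x94, 54, struct file_dedupe_range) from linux/fs.h
FIDEDUPERANGE = 0xC0189436

def pack_dedupe_request(src_offset, length, dest_fd, dest_offset):
    # struct file_dedupe_range: src_offset, src_length, dest_count,
    # reserved1, reserved2 (24 bytes) ...
    header = struct.pack("=QQHHI", src_offset, length, 1, 0, 0)
    # ... followed by one struct file_dedupe_range_info per destination:
    # dest_fd, dest_offset, bytes_deduped, status, reserved (32 bytes).
    info = struct.pack("=qQQiI", dest_fd, dest_offset, 0, 0, 0)
    return header + info

def dedupe(src_fd, src_offset, length, dest_fd, dest_offset):
    # The kernel compares the two ranges and only shares them if the
    # bytes are identical; works only on filesystems that implement the
    # hook (btrfs, XFS, ...).
    req = bytearray(pack_dedupe_request(src_offset, length,
                                        dest_fd, dest_offset))
    fcntl.ioctl(src_fd, FIDEDUPERANGE, req)
    # bytes_deduped is written back by the kernel at byte offset 40.
    return struct.unpack_from("=Q", req, 40)[0]
```

The request packing is 24 bytes of header plus 32 bytes per destination, so a single-destination request is 56 bytes; calling `dedupe()` on a non-CoW filesystem raises `OSError`.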
- Questions Regarding BTRFS, Suspend, and Data Integrity
This isn't much different from ext4: zero-length files can appear after a crash. You can avoid this going forward by mounting with flushoncommit. See here for details.
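For reference, flushoncommit is a btrfs mount option, so it goes in the fstab options field; a hypothetical entry (device UUID and mount point are placeholders) might look like:

```
# /etc/fstab — placeholder UUID and mount point
UUID=0123abcd-...  /data  btrfs  defaults,flushoncommit  0  0
```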
- Compression
Maybe bees can help you dedup at the block level, not just whole files.
- Is Bees an after-the-fact solution to BTRFS defragmentation breaking reflinks?
zfs
- Ubuntu 24.04 LTS is so buggy you can't install the OS [video]
Be careful if you use ZFS-on-root, make sure not to snapshot bpool or it will brick your system and require a complete reinstall.
https://github.com/openzfs/zfs/issues/13873
- Radxa's SATA HAT makes compact Pi 5 NAS
> The only non-junk PCIe3 option that's even advertised here recently is the overpriced WD Red SN700.
Those WD drives seem to have some real issues, at least with ZFS and btrfs. :(
https://github.com/openzfs/zfs/discussions/14793
- OpenZFS: Fix corruption caused by MMAP flushing problems
- ZFS: Some copied files are still corrupted (chunks replaced by zeros)
- DiskClick: Ever wanted to hear Old Hard drive sounds
IMO the "next fs" is just ZFS. They somewhat recently merged the RAIDZ expansion feature (https://github.com/openzfs/zfs/pull/12225) and make regular improvements. If no file system has what you need today, ZFS will probably be the first one to have it "tomorrow."
- OpenZFS bug reports for native encryption
- A data corruption bug in OpenZFS?
https://github.com/openzfs/zfs/issues/15526#issuecomment-181...
> zpool get all tank | grep bclone
> kc3000 bcloneused 442M
> kc3000 bclonesaved 1.42G
> kc3000 bcloneratio 4.30x
> My understanding is this: If the result is 0 for both bcloneused and bclonesaved then it's safe to say that you don't have silent corruption.
- Ask HN: What's your "it's not stupid if it works" story?
A couple years ago, I had an idea for convincing a filesystem to go faster using 2 compression steps instead of one. I couldn't see why it wouldn't work, and I also couldn't convince myself it should.
It seems to have worked out. [1]
[1] - https://github.com/openzfs/zfs/commit/f375b23c026aec00cc9527...
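The linked commit concerns "early abort" for expensive zstd levels in OpenZFS; as I understand it, the two-step idea is to run a cheap compressor first and only pay for the expensive one when the data actually looks compressible. A rough, hypothetical illustration using zlib levels as stand-ins for LZ4 and zstd-19:

```python
import os
import zlib

def compress_adaptive(block: bytes, min_ratio: float = 0.9) -> bytes:
    """Two compression steps: a fast pass decides if the slow pass runs."""
    # Cheap first pass (stand-in for LZ4): fastest zlib level.
    quick = zlib.compress(block, level=1)
    if len(quick) >= len(block) * min_ratio:
        # Nearly incompressible: skip the expensive pass, store as-is.
        return block
    # Data compresses well: now spend the effort (stand-in for zstd-19).
    return zlib.compress(block, level=9)

# Compressible data takes the slow, high-ratio path...
text_block = b"the quick brown fox " * 512
# ...while random data is detected by the cheap pass and stored raw.
random_block = os.urandom(4096)
```

The win comes from incompressible blocks: the fast pass typically expands them slightly, so the expensive compressor never runs on data it can't help with.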
- ZFS Profiling on Arch Linux
https://github.com/openzfs/zfs/issues/7631
This is a long-standing issue with zvols which affects overall system stability, and has no real solution as of yet.
- Using ZFS on single disks, combining them with mergerfs, and paritizing them with Snapraid
TIL. Thank you! https://github.com/openzfs/zfs/pull/15022
What are some alternatives?
dduper - Fast block-level out-of-band BTRFS deduplication tool.
zstd - Zstandard - Fast real-time compression algorithm
duperemove - Tools for deduping file systems
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
btrbk - Tool for creating snapshots and remote backups of btrfs subvolumes
sanoid - These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)
yarn-deduplicate - Deduplication tool for yarn.lock files
RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.
jdupes - A powerful duplicate file finder and an enhanced fork of 'fdupes'.
snapper - Manage filesystem snapshots and allow undo of system modifications
snap-sync - Use snapper snapshots to backup to external drive
zfsbootmenu - ZFS Bootloader for root-on-ZFS systems with support for snapshots and native full disk encryption