dduper vs bees

| | dduper | bees |
|---|---|---|
| Mentions | 6 | 21 |
| Stars | 162 | 589 |
| Growth | - | - |
| Activity | 5.4 | 4.0 |
| Last commit | 6 months ago | 16 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dduper
- NIST Retires SHA-1 Cryptographic Algorithm
In some cases deduplication happens transparently at the file system layer, without you even realizing it, e.g. via tools like https://github.com/lakshmipathi/dduper
I agree that image editing workflows are a different use case more suited to perceptual hashes than cryptographic hashes.
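The cryptographic-hash side of that tradeoff is easy to sketch. Below is a hypothetical, minimal duplicate finder (not dduper's actual code) that groups files by SHA-256 digest: identical content always collides, whereas a perceptual hash would instead tolerate small edits.

```python
import hashlib


def find_duplicate_files(paths, chunk_size=1 << 20):
    """Group files by SHA-256 digest; files sharing a digest are duplicates."""
    by_digest = {}
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Hash in chunks so large files don't have to fit in memory.
            while chunk := f.read(chunk_size):
                h.update(chunk)
        by_digest.setdefault(h.hexdigest(), []).append(path)
    # Only groups with more than one member are duplicate sets.
    return [group for group in by_digest.values() if len(group) > 1]
```

Real dedupers add refinements on top of this (size pre-filtering, partial hashes), but the grouping step is the same idea.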
- Can I view the internal hash values for files?
dduper uses a patched btrfs command to read file hashes from the raw disk. It requires root access and is kind of a hack.
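For context: btrfs already stores a crc32c checksum for every 4 KiB data block, and dduper's trick is to compare those stored checksums rather than re-reading and hashing file contents. A rough illustration of the per-block idea, using zlib's crc32 as a stand-in for the kernel's crc32c:

```python
import zlib

BLOCK_SIZE = 4096  # btrfs checksums data in 4 KiB blocks


def block_checksums(path):
    """Return a crc32 checksum for each 4 KiB block of a file."""
    sums = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            sums.append(zlib.crc32(block))
    return sums


def matching_blocks(path_a, path_b):
    """Indices of blocks whose checksums match, i.e. dedupe candidates."""
    return [
        i
        for i, (a, b) in enumerate(
            zip(block_checksums(path_a), block_checksums(path_b))
        )
        if a == b
    ]
```

The point of reading the checksums straight from btrfs metadata is that this comparison then costs no file I/O at all, which is why dduper needs root access to the raw device.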
- Ask HN: Who Wants to Collaborate?
- Deduplication experiences with various tools?
There are various tools to use for COW deduplication, such as bees, duperemove, rmlint, jdupes, and dduper.
- DSM 7: Release Candidate released!
Hmm, could we use something like dduper to achieve this, if they don't have it included?
bees
- Converted ext4 to btrfs, tried defrag and ran out of space
Btrfs defrag 'will break up the reflinks of COW data' and 'may cause considerable increase of space usage depending on the broken up reflinks'. To try to fix this, I would run bees to deduplicate the now-duplicated data. It may be worth doing this from e.g. a live disk, though, as out-of-space errors can cause things to break (so don't upgrade packages until you fix this).
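If you go the bees route, it is typically configured per filesystem and started through its beesd wrapper script. A sketch under the assumption that your distro packages bees the way the upstream README describes (option names follow beesd.conf.sample; verify locally, and the UUID below is illustrative):

```
# /etc/bees/beesd.conf (based on upstream beesd.conf.sample; values illustrative)
UUID=deadbeef-1234-5678-9abc-def012345678   # from `btrfs filesystem show`
OPTIONS="-v 6"                              # logging verbosity
DB_SIZE=$((256 * 1024 * 1024))              # dedupe hash table size, in bytes

# then start the daemon as root:
#   beesd deadbeef-1234-5678-9abc-def012345678
```

bees then crawls the filesystem in the background and re-shares duplicate extents, which is why it suits the "fix it after defrag broke reflinks" scenario.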
- Introducing Pins: Permanent Nix Binary Storage
Figuring out which paths are needed outside gcroots'ed closures is pretty complicated. If you're using flakes, the main issue is duplicates, so store optimization and bees may help. With channels, once you update a channel you might as well gc everything else.
- Should you remove duplicate files?
- Poke holes in my git-annex + ZFS offline storage system
I felt more confident with the code/developer/docs. The author knows his stuff regarding btrfs. Like, look at this, it's amazing: https://github.com/Zygo/bees/blob/master/docs/btrfs-kernel.md
- Anyone running Bees? Or deduping data some other way?
I have some time again and I'm wondering if anyone's got Bees, https://github.com/Zygo/bees, running on their Synology.
- The goal: Use Fedora 37 with Snapper to get a "riceable" Linux desktop that can be rolled back like a time machine (and some comments on why I don't use Silverblue)
Even if NixOS doesn't support sending deduplicating syscalls to the kernel, you could use the Btrfs deduping daemon called bees to slowly save space over time. There might be an equivalent for ZFS, too.
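Those "deduplicating syscalls" are the Linux FIDEDUPERANGE ioctl, which bees, duperemove, and similar tools issue under the hood. A minimal Python sketch follows; the ioctl number and struct layout are hand-derived from linux/fs.h, so treat them as assumptions to double-check:

```python
import errno
import fcntl
import struct

# _IOWR(0x94, 54, struct file_dedupe_range), hand-computed from linux/fs.h
FIDEDUPERANGE = 0xC0189436


def try_dedupe(src_fd, dst_fd, length):
    """Ask the kernel to share `length` identical bytes between two files.

    Returns 0 when the extents were deduplicated, or an errno value when
    the filesystem does not support dedupe (ext4, tmpfs, ...).
    """
    # struct file_dedupe_range header: src_offset, src_length, dest_count,
    # reserved1, reserved2 -- followed by one file_dedupe_range_info:
    # dest_fd, dest_offset, bytes_deduped, status, reserved.
    arg = struct.pack("=QQHHIqQQiI", 0, length, 1, 0, 0, dst_fd, 0, 0, 0, 0)
    try:
        res = fcntl.ioctl(src_fd, FIDEDUPERANGE, arg)
    except OSError as e:
        return e.errno
    status = struct.unpack_from("=i", res, 48)[0]  # info[0].status
    if status == 0:  # FILE_DEDUPE_RANGE_SAME: blocks are now shared
        return 0
    return -status if status < 0 else errno.EINVAL  # per-file error or DIFFERS
```

On btrfs or XFS this returns 0 for identical ranges; elsewhere the kernel refuses, which is why block-level dedupe daemons are filesystem-specific.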
- Questions Regarding BTRFS, Suspend, and Data Integrity
This isn't much different from ext4: zero-length files can happen after a crash. You can avoid this in the future by mounting with flushoncommit. See here for details.
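For reference, flushoncommit is a btrfs mount option, so enabling it is a one-line change to /etc/fstab (the UUID below is illustrative):

```
# /etc/fstab -- add flushoncommit to the btrfs option list (hypothetical UUID)
UUID=deadbeef-1234-5678-9abc-def012345678  /  btrfs  defaults,flushoncommit  0  0
```

It trades some performance for the guarantee that data reaches disk before the matching metadata transaction commits.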
- Compression
Maybe bees can help you dedupe at the block level rather than per file.
- Is Bees an after-solution to BTRFS defragmentation breaking reflinks?
What are some alternatives?
duperemove - Tools for deduping file systems
dupeguru - Find duplicate files
btrbk - Tool for creating snapshots and remote backups of btrfs subvolumes
r8152 - Synology DSM driver for Realtek RTL8152/RTL8153/RTL8156 based adapters
yarn-deduplicate - Deduplication tool for yarn.lock files
jdupes - A powerful duplicate file finder and an enhanced fork of 'fdupes'.
Deduper - The goal of this project is to make a deduper program that anybody can run on their computer to save storage space.
snap-sync - Use snapper snapshots to backup to external drive
Typesense - Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
dedupe - A python library for accurate and scalable fuzzy matching, record deduplication and entity-resolution.