scorch vs Rdiff-backup

| | scorch | Rdiff-backup |
|---|---|---|
| Mentions | 9 | 32 |
| Stars | 184 | 1,046 |
| Growth | - | 1.4% |
| Activity | 0.0 | 8.3 |
| Last commit | over 1 year ago | 11 days ago |
| Language | Python | Python |
| License | ISC License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scorch
- How do I ensure that I do not get a time-delayed ransomware attack?
The method I use is to run scorch every night: it computes hashes for new files and checks around 12% of old files for hash errors. Even if your backup is from the same day as a ransomware attack, you will still catch it if the attack hits enough files for one to be randomly scrubbed. scorch is also designed to keep the hash database small and independent of the rest of the system, so you can automate copying it to a number of different places.
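A hedged sketch of what that nightly routine could look like as crontab entries. The `add` and `check` instructions come from scorch's README, but the paths and database location here are illustrative assumptions; check `scorch --help` for the options your version actually supports:

```shell
# m h dom mon dow  command
# Hash any files added since the last run ("/srv/data" is a hypothetical path)
0 2 * * *   scorch add /srv/data
# Verify a portion of previously hashed files for silent corruption
30 2 * * *  scorch check /srv/data
# Keep an independent copy of the (deliberately small) hash database
# (database path is illustrative; see scorch's -d/--db option)
0 3 * * *   cp /var/lib/scorch/scorch.db /mnt/offsite/scorch.db
```

Because the database is just a small file, the third entry can be repeated for as many off-system destinations as you like.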
- Does this not exist? Checksum program...
- ZFS or BTRFS for raid0 + backup server
Lastly, you could just point scorch (https://github.com/trapexit/scorch) at your drives and run it on a cron or systemd timer - just have the script alert you with an e-mail or whatever your preferred method is. Not ideal but probably less work than rebuilding two arrays because you don't like the format of error messages.
- Embarking on my hoarding journey
If you really care, you can use something like scorch or file-digests to get the hashes of your files and just store that in a text file, recalculating monthly. No need to get fancy with it. Hell, write your own simple script that hashes, outputs to file, and checks previous versions.
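The "write your own simple script" approach above can be sketched in a few lines of Python. This is a minimal illustration, not scorch or file-digests: it streams each file through SHA-256, records the digests, and reports files whose hashes changed between runs. All function names here are hypothetical:

```python
import hashlib
import os

def hash_file(path, algo="sha256", chunk=1 << 20):
    """Stream a file through a hash so large files don't load into RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def snapshot(root):
    """Map every file under root (relative path) to its digest."""
    digests = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            digests[os.path.relpath(p, root)] = hash_file(p)
    return digests

def changed_files(old, new):
    """Files present in both snapshots whose hashes differ:
    possible silent corruption or tampering."""
    return sorted(f for f in old if f in new and old[f] != new[f])
```

Dump `snapshot()` to a JSON or text file monthly, then diff against the previous dump with `changed_files()`; anything it flags that you didn't edit yourself deserves a closer look.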
- Tool to add checksum to files on EXT4 and verify them.
Not exactly what you're looking for but close -> https://github.com/trapexit/scorch
- Tool to compare file set against a list of hashes and import new/unique files
Scorch should fit the bill (https://github.com/trapexit/scorch)
- Generate hash for all files in all folders and subfolders on HDD
- Manual File Indexing
- Manual file indexing on my NAS
Rdiff-backup
- Duplicity
For starters, it has a tendency to paint itself into a corner in ENOSPC situations. You won't even be able to perform a restore if a backup was started but left unfinished because it ran out of space. There's a process of "regressing" the repo [0] which must occur before you can do practically anything after an interrupted or failed backup. What this actually does is undo the partial forward progress, by performing what is effectively a restore of the files that got pushed into the future relative to the rest of the repository - and that requires more space. Unless you have, or can create, free space to do this, the repo can become wedged... and if it's a dedicated backup system where you've intentionally filled disks with restore points, you can find yourself having to throw out backups just to make things functional again - even the ability to restore is affected.
That's the most glaring problem. Beyond that, it's just kind of garbage in terms of the amount of space and time it requires to perform restores, especially restores of files with many reverse-differential increments leading back to the desired restore point. It can require 2X the file's size in spare space to assemble the desired version, because it iteratively reconstructs every intermediate version on the way to the target. Unless someone has fixed this since I last had to deal with it, which is possible.
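The restore cost described above can be illustrated with a toy model (this is not rdiff-backup's actual rdiff/librsync delta format; a "delta" here is just a dict of character positions to roll back). The point is structural: to reach a version k steps back, every intermediate version must be materialized in full along the way.

```python
def apply_delta(text, delta):
    """Apply one reverse delta: a dict mapping index -> older character.
    Toy stand-in for a real binary reverse diff."""
    chars = list(text)
    for i, old_char in delta.items():
        chars[i] = old_char
    return "".join(chars)

def restore(current, reverse_deltas, steps_back):
    """Walk backwards through the newest-first delta chain. Each loop
    iteration rebuilds a complete intermediate version - the space and
    time cost the text above describes."""
    version = current
    for delta in reverse_deltas[:steps_back]:
        version = apply_delta(version, delta)
    return version
```

With N increments between the current mirror and the restore point, a restore performs N full reconstructions even though only the last one is wanted.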
Source: Ages ago I worked for a startup[1] that shipped a backup appliance originally implemented by contractors using rdiff-backup. Writing a replacement that didn't suck but was compatible with rdiff-backup's repos consumed several years of my life...
There are far better options in 2024.
[0] https://github.com/rdiff-backup/rdiff-backup/blob/master/src...
[1] https://www.crunchbase.com/organization/axcient
- Trying to install rdiff-backup on an Oracle Cloud Red Hat VM.
and that should install the latest version, rdiff-backup-2.2.4-2.el8.x86_64.rpm. This is all described in the rdiff-backup README file.
- Cache operation: archive
- How do I copy data from one HDD to another using Linux Mint?
Rdiff-backup - close to what you currently do, but at least it provides versioning. Based on rsync.
- Accomplishing What I Want With What I Have
As in just a copy of your files? This I would barely consider a backup - more a mirror from a point in time. What are you missing by doing this? Versions of files, deduplication, and encryption (the last being very important for the best kind of backups, which should be off-site). Just because it's not plain files doesn't mean it's proprietary; proprietary would mean secret and undocumented. There are many great options: Borg is my favorite, but Kopia is probably better if you use Windows; UrBackup is an option if you want centralized management of backups; and rdiff-backup is for something close to what you have currently, adding versioning but lacking deduplication and encryption.
- Backup software recommendation
If you're comfortable with the CLI and want your backups in a plain file format with incremental backups, there's rdiff-backup. It uses rsync under the hood and has worked quite well for me.
- Name a program that doesn't get enough love!
Rdiff-backup - reverse differential backups that use rsync and linking, and can tunnel via SSH. You get a full current backup, with increments available to restore any version of a file using minimal storage space.
- BorgBackup, Deduplicating archiver with compression and encryption
borg is great. We've been using it for the past 3 years to archive hundreds of file-level backups of servers, database dumps, and VM images. The average size of each borg repo is a few GB, but there are a few outliers of up to a few hundred GB.
borg replaced https://rdiff-backup.net/ for us and gave:
- Advice for Automated Copying of my Off Grid 6TB Media Hoard :)
Robocopy is great if you don't have access to rsync. If rsync is an option (via WSL2, for instance), I'd personally go with rdiff-backup.
- Do incremental backups generally store only the delta of each file change or the entire new file?
What are some alternatives?
cshatag - Detect silent data corruption under Linux using sha256 stored in extended attributes
BorgBackup - Deduplicating archiver with compression and authenticated encryption.
file-digests - 📐 A tool to check if there are any changes in your files by storing and later checking their digests/hashes (BLAKE2b512, SHA3-256, or SHA512-256).
restic - Fast, secure, efficient backup program
znapzend - zfs backup with remote capabilities and mbuffer integration.
Rsnapshot - a tool for backing up your data using rsync (if you want to get help, use https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss)
CalCorrupt - File corrupter using PyQt5
syncthing-android - Wrapper of syncthing for Android.
HashCheck - HashCheck Shell Extension for Windows with added SHA2, SHA3, and multithreading; originally from code.kliu.org
Duplicity - Unofficial fork of Duplicity - Bandwidth Efficient Encrypted Backup
honst - Fixes your dataset according to your rules.
UrBackup - UrBackup - Client/Server Open Source Network Backup for Windows, MacOS and Linux