Butterfly-Backup
Rdiff-backup
| | Butterfly-Backup | Rdiff-backup |
|---|---|---|
| Mentions | 8 | 32 |
| Stars | 111 | 1,031 |
| Growth | - | 3.2% |
| Activity | 8.1 | 8.5 |
| Latest commit | about 1 month ago | 5 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Butterfly-Backup
We haven't tracked posts mentioning Butterfly-Backup yet.
Tracking mentions began in Dec 2020.
Rdiff-backup
-
Duplicity
For starters, it has a tendency to paint itself into a corner in ENOSPC situations. You won't even be able to perform a restore if a backup was started but left unfinished because it ran out of space. There's this process of "regressing" the repo [0] which must occur before you can do practically anything after an interrupted/failed backup. What this actually does is undo the partial forward progress, by performing what's effectively a restore of the files that got pushed into the future relative to the rest of the repository, which requires more space. Unless you have, or can create, free space to do these things, it can become wedged. And if it's a dedicated backup system where you've intentionally filled disks up with restore points, you can find yourself having to throw out backups just to make things functional again; even the ability to restore is affected.
That's the most obvious glaring problem. Beyond that, it's just kind of garbage in terms of the amount of space and time it requires to perform restores, especially restores of files with many reverse-differential increments leading back to the desired restore point. It can require 2X the file's size in spare space to assemble the desired version, while it iteratively reconstructs all the intermediate versions on the way to the target one. Unless someone has fixed this since I last had to deal with it, which is possible.
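The iterative reconstruction described above can be sketched in a few lines of Python. This is a toy model, not rdiff-backup's actual on-disk format (which stores compact librsync deltas): the repository keeps only the newest version in full plus a chain of reverse increments, so restoring an older version means applying every increment between now and the target point, materializing a full intermediate version at each step.

```python
# Toy model of reverse-differential storage: the newest version is kept in
# full, and each increment describes how to turn version N into version N-1.
# Restoring an old version walks the whole chain, which is the space/time
# cost described in the comment above.

def make_delta(new, old):
    """A trivial 'delta': keep the common prefix, record the old suffix.
    Real tools (librsync) compute much smaller binary deltas."""
    n = 0
    while n < min(len(new), len(old)) and new[n] == old[n]:
        n += 1
    return (n, old[n:])  # keep the first n chars of `new`, append old suffix

def apply_delta(current, delta):
    keep, suffix = delta
    return current[:keep] + suffix

versions = ["v1 contents", "v1 contents plus v2", "v3 rewrote everything"]

# Build the repo: a full copy of the latest version plus reverse increments.
latest = versions[-1]
increments = [make_delta(versions[i + 1], versions[i])
              for i in range(len(versions) - 1)]  # increments[i]: v(i+2) -> v(i+1)

def restore(version_index):
    """Restore versions[version_index] by walking increments newest-first."""
    state = latest
    for delta in reversed(increments[version_index:]):
        state = apply_delta(state, delta)  # full intermediate version each step
    return state

assert restore(0) == "v1 contents"  # had to pass through v2 to get here
```

Note that `restore(0)` reconstructs v2 in full before it can produce v1; with hundreds of increments between the latest version and the restore point, that loop is exactly where the space and time go.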
Source: Ages ago I worked for a startup[1] that shipped a backup appliance originally implemented by contractors using rdiff-backup. Writing a replacement that didn't suck but was compatible with rdiff-backup's repos consumed several years of my life...
There are far better options in 2024.
[0] https://github.com/rdiff-backup/rdiff-backup/blob/master/src...
-
How do I copy data from one HDD to another using Linux Mint?
Rdiff-backup - close to what you do currently, but at least it provides versioning. Based on rsync.
-
Accomplishing What I Want With What I Have
As in just a copy of your files? I would barely consider this a backup; it's more of a mirror from a point in time. What are you missing by doing this? Versions of files, deduplication, and encryption (the last one being very important for the best kind of backups, which should be off-site). Just because it's not plain files doesn't mean it's proprietary; proprietary would mean secret and undocumented. There are many great options. Borg is my favorite, but Kopia is probably better if you use Windows; UrBackup is an option if you want centralized management of backups; and rdiff-backup is for when you want something like what you have currently, with versioning added, though it lacks deduplication and encryption.
-
Name a program that doesn't get enough love!
Rdiff Backup - Reverse-differential backups that use rsync and linking, and can tunnel via SSH. You get a full current backup, with increments available to restore any version of a file, using minimal storage space.
-
BorgBackup, Deduplicating archiver with compression and encryption
borg is great. We've been using it for the past 3 years to archive hundreds of file-level backups of servers, database dumps and VM images. The average size of each borg repo is a few GB, but there are a few outliers up to a few hundred GB.
borg replaced https://rdiff-backup.net/ for us and gave:
-
Advice for Automated Copying of my Off Grid 6TB Media Hoard :)
Robocopy is great if you don't have access to rsync. If rsync (via WSL2, for instance) is an option, I'd personally go with rdiff-backup.
- Do incremental backups generally store only the delta of each file change or the entire new file?
- How do I ensure that I do not get a time-delayed ransomware attack?
-
Best backup software?
For incremental backups rdiff-backup is practical too.
You have to try them out and decide for yourself, of course. I used to use Vorta, which was great, at least on openSUSE. Then as a CLI backup solution I used rdiff-backup, which was also really good.
What are some alternatives?
BorgBackup - Deduplicating archiver with compression and authenticated encryption.
restic - Fast, secure, efficient backup program
Rsnapshot - a tool for backing up your data using rsync (if you want to get help, use https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss)
Duplicity - Unofficial fork of Duplicity - Bandwidth Efficient Encrypted Backup
syncthing-android - Wrapper of syncthing for Android.
UrBackup - Client/Server Open Source Network Backup for Windows, MacOS and Linux
Duplicacy - A new generation cloud backup tool
Bup - Very efficient backup system based on the git packfile format, providing fast incremental saves and global deduplication (among and within files, including virtual machine images). Please post problems or patches to the mailing list for discussion (see the end of the README below).
TimeShift - System restore tool for Linux. Creates filesystem snapshots using rsync+hardlinks, or BTRFS snapshots. Supports scheduled snapshots, multiple backup levels, and exclude filters. Snapshots can be restored while system is running or from Live CD/USB.
Duplicati - Store securely encrypted backups in the cloud!
Elkarbackup - Open source backup solution for your network
virtnbdbackup - Backup utility for Libvirt / qemu / kvm supporting incremental and differential backups + instant recovery (agentless).