zfsnapr vs Duplicacy

| | zfsnapr | Duplicacy |
|---|---|---|
| Mentions | 7 | 136 |
| Stars | 22 | 5,016 |
| Growth | - | - |
| Activity | 5.6 | 5.6 |
| Last commit | 8 months ago | about 1 month ago |
| Language | Ruby | Go |
| License | BSD 2-clause "Simplified" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zfsnapr
-
Kopia: Open-Source, Fast and Secure Open-Source Backup Software
FreeBSD had a pretty decent option in the base system two decades ago - FFS snapshots and a stock backup tool that would use them automatically with minimal effort, dump(8). Just chuck `-L` at it and your backups are consistent.
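For reference, that old one-liner workflow sketches out like this (output path invented; the flags are from FreeBSD's dump(8) as I remember them):

```shell
# Level-0 dump of the root filesystem. -L tells dump it's a live FFS
# filesystem, so it takes a snapshot first and dumps that, making the
# backup internally consistent. -u records the run in /etc/dumpdates
# so later incremental levels know their baseline.
dump -0 -a -u -L -f /backups/root.dump /
```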
Now of course it's all about ZFS, so there's at least snapshots paired with replication - but the story for anything else is still pretty bad, with you having to put all the fiddly pieces together. I'm sure some people taught their backup tool about their special named backup snapshots sprinkled about in `.zfs/snapshot` directories, but given the fiddly nature of it I'm also sure most people just ended up YOLOing raw directories, temporal-smearing be damned.
I know I did!
I finally got around to fixing that last year with zfsnapr[1]. `zfsnapr mount /mnt/backup` and there's a snapshot of the system - all datasets, mounted recursively - ready for whatever the backup tool of the year is.
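A full run then looks roughly like this (the borg invocation and the `umount` subcommand name are my assumptions for illustration, not taken from the project docs):

```shell
zfsnapr mount /mnt/backup                     # recursive snapshot, mounted
borg create /backups/repo::{now} /mnt/backup  # any backup tool works here
zfsnapr umount /mnt/backup                    # assumed cleanup counterpart
```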
I'm kind of disappointed that when I mentioned it over on the Practical ZFS forum, the response was not "why didn't you just use ", but "I can see why that might be useful".
Well, yes, it makes backups actually work.
> Also, it's unclear to me what happens if you attempt a snapshot in the middle of something like a database transaction or even a basic file write. Seems likely that the snapshot would still be corrupted
A snapshot is a point-in-time image of the filesystem. Any ACID database worth the name will roll back the in-flight transaction, just as it would if you issued it a `kill -9`.
For other file writes, that's really down to whether or not such interruptions were considered by the writer. You may well have half-written files in your snapshot, with the file contents as they were in between two write() calls. Ideally this will only be in the form of temporary files, prior to their rename() over the data they're replacing.
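The temp-file-and-rename pattern mentioned above is easy to demonstrate. Because rename(2) is atomic within a filesystem, a snapshot taken at any instant sees either the old file or the new one, never a half-written mix (paths here are invented for the demo):

```shell
# Atomic replace: write the new version beside the target, then rename.
target=/tmp/demo-config
printf 'old contents\n' > "$target"

tmp=$(mktemp "${target}.XXXXXX")   # temp file on the same filesystem
printf 'new contents\n' > "$tmp"
mv "$tmp" "$target"                # rename(2): old or new, nothing between

cat "$target"                      # -> new contents
```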
For everything else - well, you have more than one snapshot backed up, right?
1: https://github.com/Freaky/zfsnapr
-
ZFS for Dummies
I make remote snapshot backups with Borg using this: https://github.com/Freaky/zfsnapr
zfsnapr mounts recursive snapshots on a target directory so you can just point whatever backup tool you like at a normal directory tree.
I still use send/recv for local backups - I think it's good to have a mix of strategies.
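For the send/recv side, a typical incremental replication looks roughly like this (pool and snapshot names invented; flags per the OpenZFS man pages as I recall them):

```shell
zfs snapshot -r tank@today
# -R replicates the whole dataset tree; -i makes it incremental from the
# previous snapshot. receive -d keeps the dataset layout, -u skips mounting.
zfs send -R -i tank@yesterday tank@today | zfs receive -du backuppool
zfs destroy -r tank@yesterday   # or keep as many old snapshots as you like
```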
-
BorgBackup, Deduplicating archiver with compression and encryption
This is why I made https://github.com/Freaky/zfsnapr
Instead of working out how to teach my backup tools about snapshots, I just mount them in a subtree and use that as a chroot env.
-
Ask HN: Can I see your scripts?
borg-backup.sh, which runs my remote borg backups off a cronjob: https://github.com/Freaky/borg-backup.sh
zfsnapr, a ZFS recursive snapshot mounter - I run borg-backup.sh using this to make consistent backups: https://github.com/Freaky/zfsnapr
mkjail, an automatic minimal FreeBSD chroot environment builder: https://github.com/Freaky/mkjail
run-one, a clone of the Ubuntu scripts of the same name, which provides a slightly friendlier alternative to running commands with flock/lockf: https://github.com/Freaky/run-one
-
Correct Backups Require Filesystem Snapshots
I wrote https://github.com/Freaky/zfsnapr a few months ago so I could finally have point-in-time consistent Borg backups with ZFS snapshots, without having the mess of teaching Borg where every .zfs directory was.
It recursively snapshots mounted pools, and recursively mounts snapshots of the mounted datasets into a target ready to point your backup tools at. I do so via a chroot so I didn't need to make any changes to my Borg setup - just to how I run it.
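A run under that scheme might look like the sketch below. The paths and borg command line are illustrative, and the chroot needs borg and its config present inside the tree - which a full-system snapshot provides:

```shell
zfsnapr mount /mnt/backup
# borg runs inside the snapshot, so its config sees the usual paths:
chroot /mnt/backup borg create me@host:repo::{now} /home /etc /var/db
zfsnapr umount /mnt/backup   # subcommand name assumed to mirror `mount`
```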
-
Snapshot stat changes on access
This is the approach I take with zfsnapr - make a recursive snapshot of pools and then use mountpoint/canmount to recursively mount datasets on a location. Then I can just point borg at it without having to teach it where exactly each .zfs directory is.
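Done by hand for a single pool, the idea is roughly this (dataset names invented; zfsnapr automates it across every mounted dataset and cleans up afterwards):

```shell
zfs snapshot -r tank@backup
# OpenZFS allows mounting a snapshot directly; it comes up read-only:
mount -t zfs tank@backup /mnt/backup
mount -t zfs tank/home@backup /mnt/backup/home
# ...point borg at /mnt/backup, then unwind:
umount /mnt/backup/home /mnt/backup
zfs destroy -r tank@backup
```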
- zfsnapr — recursively mount a system snapshot on a given location
Duplicacy
- Rclone syncs your files to cloud storage
-
Duplicity
I have been having great luck with incremental backups using the very similarly named Duplicacy https://duplicacy.com/
- Restic – Simple Backups
- A new generation cross-platform cloud backup tool
-
Researching what to use for purely local Linux home server backup (no cloud backups)
Pro: No need for a special index database. The chunks are placed in the file system. This explains it in greater detail. Seems to place great emphasis on reliability, which is important for me. Versioning is also supported.
-
Your privacy is optional
Having all your data in one place isn't wise though, so I am planning on storing encrypted backups on Dropbox and Backblaze B2 using Duplicity so that I am following the 3-2-1 backup rule.
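Duplicity addresses each backend with a URL scheme, so the two off-site copies sketch out like this (bucket names and keys are placeholders; the scheme strings are from the duplicity manual as I recall them):

```shell
export PASSPHRASE='...'   # duplicity encrypts with GnuPG using this
duplicity /home/me "b2://KEY_ID:APP_KEY@my-bucket/home"   # Backblaze B2
duplicity /home/me "dpbx:///backups/home"                 # Dropbox
```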
- Kopia: Open-Source, Fast and Secure Open-Source Backup Software
-
Ask HN: How do you do backups for personal/home server?
I tried a bunch of different ways but ultimately settled on Duplicacy [0].
It runs inside a Docker container and backs up both my data as well as configurations like my docker compose file and smb.conf.
Off site storage was Backblaze B2, but I moved to Hetzner. Likely will move back just because B2 is cheaper and a bit faster for my region.
Another layer of backup: I occasionally use Duplicacy to back up to a portable hard drive that I keep off site.
[0] https://duplicacy.com/
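For anyone curious what that looks like on the command line, the basic Duplicacy flow is below (snapshot id and bucket name invented):

```shell
cd /path/to/data            # duplicacy operates per repository directory
duplicacy init mybox b2://my-duplicacy-bucket   # one-time: choose storage
duplicacy backup -stats     # each run is an incremental, deduplicated snapshot
```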
-
Before I deploy to several computers: UrBackup, Bacula, Duplicati or Syncovery (paid)?
Duplicacy
-
Kopia VS duplicati for homeserver backups
I use Kopia and it works well. Have also used this https://duplicacy.com
What are some alternatives?
BorgBackup - Deduplicating archiver with compression and authenticated encryption.
restic - Fast, secure, efficient backup program
ioztat - ioztat is a storage load analysis tool for OpenZFS. It provides iostat-like statistics at an individual dataset/zvol level.
Duplicati - Store securely encrypted backups in the cloud!
benchmarks - Benchmarks of different backup tools.
rclone - "rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Azure Blob, Azure Files, Yandex Files
RcloneZFSBackup - Backup ZFS snapshots to cloud storage using RCLone
borgmatic - Simple, configuration-driven backup software for servers and workstations
kopia - Cross-platform backup tool for Windows, macOS & Linux with fast, incremental backups, client-side end-to-end encryption, compression and data deduplication. CLI and GUI included.
borgtui - A nice TUI for BorgBackup
borg - Search and save shell snippets without leaving your terminal