zrepl vs zfs_autobackup
| | zrepl | zfs_autobackup |
|---|---|---|
| Mentions | 22 | 20 |
| Stars | 895 | 520 |
| Growth | 1.2% | - |
| Activity | 6.8 | 7.7 |
| Latest commit | about 1 month ago | 8 days ago |
| Language | Go | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zrepl
- Zrepl – ZFS replication
- zrepl: A one-stop, integrated solution for ZFS replication
- PVE Host disk upgrade
- Void Linux and root-on-ZFS question
Lastly, there is zrepl, an automatic snapshot creation, pruning, and replication daemon. It lets you automate taking ZFS snapshots at set intervals, apply a retention policy to them, and replicate them to a remote ZFS system, such as a NAS running TrueNAS.
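The snapshot/prune/replicate workflow described above is driven by a YAML config file. The sketch below is a minimal snapshot-and-prune job in the style of zrepl's documented `snap` job type; the dataset name, interval, and retention grid are illustrative assumptions, not a drop-in config.

```yaml
# Minimal zrepl snap job: periodic snapshots plus a retention grid.
# Dataset names and intervals are examples only — adapt to your pools.
jobs:
  - name: snap_tank
    type: snap
    filesystems:
      "tank/data<": true        # this dataset and everything below it
    snapshotting:
      type: periodic
      interval: 15m
      prefix: zrepl_
    pruning:
      keep:
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 14x1d
          regex: "^zrepl_"
```

Replication to a remote machine is configured separately with a `push` or `pull` job pointing at a zrepl daemon on the other side.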
- Container Updating Strategies
- How do you all prepare for a disaster recovery of your nextcloud instance?
I run it in a FreeBSD jail and take frequent ZFS snapshots using zrepl. I’ve had to restore after failed updates and it worked flawlessly.
- Recommend ZFS automation scripts for off-server backups?
As others have noted, I use syncoid but zrepl is an alternative that could be considered.
- Imagine You're a Goofball: Dynamic Preventative ZFS Snapshots
I’m going to throw out Zrepl again because it’s amazing: https://zrepl.github.io/
- Using ZFS backup drive for rsync manually
- Question about best way to do zfs replication to a friend's server
Current idea: we each offer the other some form of container, with a zvol mounted somewhere inside it for replication use. Inside that container I could create an "inner" zpool backed by a file. How is the performance of a file-based zpool? (It is not critical that it performs well; I am just trying to find one of the "best" ways to do this.) We would then each offer an ssh endpoint into that container (it doesn't have to be ssh, but ssh is convenient to set up and resilient enough to expose publicly). My current plan is to use zrepl (https://zrepl.github.io/) to organise replication from my home server into that "inner" zpool.
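For reference, a file-backed zpool like the "inner" pool described above takes only a few commands to create. This is a sketch with illustrative paths, sizes, and pool names; expect noticeably worse performance than a pool on real vdevs, since every write passes through two ZFS layers.

```shell
# Create a sparse backing file and build a pool on top of it.
# All names and sizes here are illustrative.
truncate -s 500G /tank/friend/backing.img
zpool create -o ashift=12 friendpool /tank/friend/backing.img
zfs set compression=lz4 friendpool

# The friend then replicates into it over the ssh endpoint, e.g.:
# zfs send -R tank/data@snap | ssh backup-host zfs receive -F friendpool/data
```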
zfs_autobackup
- Is It Good Practice to Back Up Data Sets Manually on Cold Storage Externals?
I recommend getting familiar with https://github.com/psy0rz/zfs_autobackup/wiki first, and then using the configuration you are happy with in my script.
- ZFS-autobackup – a lightweight but featureful ZFS replication solution
- Backup Solution With Details About Deleted Files
I have a small home server running Ubuntu Server with a ZFS file system. To back up my files and use snapshots, I am currently using zfs-autobackup, which is easy to set up and works really well for ZFS. However, zfs-autobackup does not provide any information on files or folders that have been created, updated, or deleted from previous backups. While I am not too concerned about newly created or updated files, I would like to know which files have been deleted compared to the previous backup. Ideally, I would like to be able to see which files have been deleted before the backup takes place, so I can recover any unwanted deletions before it's too late.
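The deleted-file question above can largely be answered with plain OpenZFS tooling: `zfs diff` lists what changed between a snapshot and a later snapshot (or the live filesystem). The dataset and snapshot names below are illustrative.

```shell
# Show changes since the last backup snapshot; the first column is
# '-' deleted, '+' created, 'M' modified, 'R' renamed.
zfs diff tank/data@last-backup tank/data

# Only deletions between two consecutive backup snapshots:
zfs diff tank/data@backup-old tank/data@backup-new | grep '^-'
```

Running the first form against the live dataset before the next backup shows unwanted deletions while they can still be restored from the previous snapshot.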
- Can't receive external ZFS dataset, can't get ZFS version info
as described here https://github.com/psy0rz/zfs_autobackup/issues/170
- Move data to another pool?
Use ZFS Autobackup (https://github.com/psy0rz/zfs_autobackup), or ZFS send and receive to create a clone of an existing pool on a new system (https://www.thegeekdiary.com/zfs-tutorials-creating-zfs-snapshot-and-clones/).
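A minimal send/receive migration looks like the sketch below, assuming the whole pool can be snapshotted recursively; the pool names are illustrative.

```shell
# Snapshot everything recursively, then replicate the full tree
# (properties and child datasets included) to the new pool.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -uF newpool/migrated

# Remote target: pipe the stream through ssh instead.
# zfs send -R oldpool@migrate | ssh newhost zfs receive -uF newpool/migrated
```

The `-R` flag preserves the snapshot hierarchy and dataset properties; `-u` keeps the received datasets unmounted until you are ready to switch over.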
- Proxmox Backup Server Storage Analysis
I just went back to sanoid. I have also looked at zfs-autobackup and might give that a try. I've been combining sanoid/syncoid with this script to clean up snapshots.
- zfs-check: A tool to verify your ZFS backups
- How can I zfs send an encrypted dataset tree to a pool in a way that will use this pool key?
- Simple bash script for sending incremental snapshots daily
If you want it to work really well in all circumstances, it WILL get complex. I tried to keep the code as readable as I could, though; there were many refactors. Check it out: https://github.com/psy0rz/zfs_autobackup
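For contrast with a full-featured tool, the core of such a daily incremental script is only a few lines. The sketch below uses illustrative dataset and host names, does no pruning, and has no handling for a missing or diverged previous snapshot — which is exactly where the complexity mentioned above comes from.

```shell
#!/bin/sh
# Naive daily incremental send: find the newest existing snapshot,
# take today's, and send the delta. All names are illustrative.
set -eu
SRC=tank/data
TODAY="daily-$(date +%Y%m%d)"

# Newest snapshot of $SRC before today's is taken (-d 1: snapshots of $SRC only).
PREV=$(zfs list -t snapshot -d 1 -H -o name -s creation "$SRC" | tail -n 1 | cut -d@ -f2)

zfs snapshot "${SRC}@${TODAY}"
zfs send -i "@${PREV}" "${SRC}@${TODAY}" | ssh backup-host zfs receive -F backup/data
```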
- ZFS backups - Sanoid and Syncoid help
Also have a look at https://github.com/psy0rz/zfs_autobackup .
What are some alternatives?
sanoid - These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)
cockpit-zfs-manager - Cockpit ZFS Manager is an interactive ZFS on Linux admin package for Cockpit.
zfs - OpenZFS on Linux and FreeBSD
lxd-snapper - LXD snapshots, automated
znapzend - zfs backup with remote capabilities and mbuffer integration.
zfswatcher - ZFS pool monitoring and notification daemon
zfs-auto-snapshot - ZFS Automatic Snapshot Service for Linux
zfsbackup-go - Backup ZFS snapshots to cloud storage such as Google, Amazon, Azure, etc. Built with the enterprise in mind.
zfs-replicate - A zfs send wrapper somewhat in the style of rsync
yunohost - YunoHost is an operating system aiming to simplify as much as possible the administration of a server. This repository corresponds to the core code, written mostly in Python and Bash.
zfs-syncer