snapraid vs mergerfs

Compare snapraid vs mergerfs and see what their differences are.

snapraid

A backup program for disk arrays. It stores parity information of your data and it recovers from up to six disk failures (by amadvance)
                 snapraid                               mergerfs
Mentions         86                                     163
Stars            1,803                                  3,803
Growth           -                                      -
Activity         6.7                                    7.9
Latest commit    2 months ago                           1 day ago
Language         C                                      C++
License          GNU General Public License v3.0 only   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

snapraid

Posts with mentions or reviews of snapraid. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-08.
  • Storage software with the features of Unraid but runs on Debian with cli interface?
    3 projects | /r/HomeServer | 8 Dec 2023
    Would mergerfs and snapraid work for you? You'd sacrifice a disk to parity and run the parity calc manually, but you could set up a cron job for that.
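    A minimal sketch of the setup described in that post; the mount points, disk names, and schedule below are illustrative assumptions, not from the quoted post:

      # /etc/snapraid.conf -- one parity disk protecting two data disks
      parity /mnt/parity1/snapraid.parity
      content /var/snapraid/snapraid.content
      content /mnt/disk1/snapraid.content
      data d1 /mnt/disk1/
      data d2 /mnt/disk2/

      # crontab entry: run the parity calc nightly at 03:00
      0 3 * * * /usr/bin/snapraid sync >> /var/log/snapraid.log 2>&1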
  • The Next Gen Database Servers Powering Let's Encrypt (2021)
    5 projects | news.ycombinator.com | 17 Sep 2023
    Like most people on r/homelab, it started out with Plex. Rough timeline/services below:

    0. Got a Synology DS413 with 4x WD Red 3TB drives. Used PlayStation Media Server to stream videos from it. Eventually found some BusyBox stuff to add various functionality to the NAS, but it had a habit of undoing those tweaks periodically, which was frustrating. I also experienced my first and (knock on wood) only drive failure during this time, which concluded without fanfare once the faulty drive was replaced, and the array repaired itself.

    1. While teaching myself Python as an Electrical Distribution Engineer at a utility, I befriended the IT head, who gave me an ancient (I think Nehalem? Quad-core Xeon) Dell T310. Promptly got more drives, totaling 7, and tried various OS / NAS platforms. I had OpenMediaVault for a while, but got tired of the UI fighting me when I knew how to do things in shell, so I switched to Debian (which it's based on anyway). Moved to MergerFS [0] + SnapRAID [1] for storage management, and Plex for media. I was also tinkering with various Linux stuff on it constantly.

    1.1 Got tired of my tinkering breaking things and requiring troubleshooting/fixing (in retrospect, this provided excellent learning), so I installed Proxmox, reinstalled Debian, and made a golden image with everything set up as desired so I could easily revert.

    1.2 A friend told me about Docker. I promptly moved Plex over to it, and probably around this time also got the *Arr Stack [2] going.

    2. Got a Supermicro X9DRi-LN4F+ in a 2U chassis w/ 12x 3.5" bays. Got faster/bigger CPUs (E5-2680v2), more RAM, more drives, etc. Shifted container management to Docker Compose. Modded the BIOS to allow it to boot from an NVMe drive on a PCIe adapter.

    2.1 Shifted to ZFS on Debian. Other than DKMS occasionally losing its mind during kernel upgrades, this worked well.

    2.2 Forked [3] some [4] Packer/Ansible projects to suit my needs, made a VM for everything: NAS, Dev, Webserver, Docker host, etc. Other than outgrowing (IMO) MergerFS/SnapRAID, honestly at this point I could have easily stopped, and could to this day revert to this setup. It was dead reliable and worked extremely well. IIRC I was also playing with Terraform at this time.

    2.3 Successfully broke into tech (Associate SRE) as a mid-career shift, due largely (according to the hiring manager) to what I had done with my homelab. Hooray for hobbies paying off.

    3. Got a single Dell R620. I think the idea was to install either pfSense or VyOS on it, but that never came to fruition. Networking was from a Unifi USG (their tiny router + firewall + switch) and 8-port switch, with some AC Pro APs.

    4. Got two more R620s. Kubernetes all the things. Each one runs Proxmox in a 3-node cluster with two VMs - a control plane, and worker.

    4.0.1 Perhaps worth noting here that I thoroughly tested my migration plan via spinning up some VMs in, IIRC, Digital Ocean that mimicked my home setup. I successfully ran it twice, which was good enough for me.

    4.1 Played with Ceph via Rook, but a) disliked (and still do, to this day) running storage for everything out of K8s, and b) kept getting clock skew between nodes. Someone on Reddit mentioned it was my low-power C-state settings, but since those were saving me something like ~50 watts/node, I didn't want to deal with the higher power/heat. I landed on Longhorn [5] for cluster storage (i.e. anything that wasn't being handled by the ZFS pool), which was fine, but slow. SATA SSDs (used Intel enterprise drives with PLP, if you're wondering) over GbE aren't super fast, but they should be able to exceed 30 MBps.

    4.1.1 Again, worth noting that I spent literally a week poring over every bit of Ceph documentation I could find, from the Red Hat stuff to random Wikis and blog posts. It's not something you just jump into, IMO, and most of the horror stories I read boiled down to "you didn't follow the recommended practices."

    5. Got a newer Supermicro, an X11SSH-F, thinking that it would save power consumption over the older dual-socket I had for the NAS. It turned out to not make a big difference. For some reason I don't recall, I had a second X9DRi-LN4F+ mobo, so I sold the other one with the faster CPUs, bought some cheaper CPUs for the other one, and bought more drives for it. It's now a backup target that boots up daily to ingest ZFS snapshots. I have 100% on-site backups for everything. Important things (i.e. anything that I can't get from a torrent) are also off-site.

    6. Got some Samsung PM863 NVMe SSDs mounted on PCIe adapters for the Dells, and set up Ceph, but this time handled by Proxmox. It's dead easy, and for whatever reason isn't troubled by the same clock skew issues as I had previously. Still in the process of shifting cluster storage from Longhorn, but I have been successfully using Ceph block storage as fast (1 GbE, anyway - a 10G switch is on the horizon) storage for databases.

    So specifically, you asked what I do with the hardware. What I do, as far as my family is concerned, is block ads and serve media. On a more useful level, I try things out related to my job, most recently database-related (I moved from SRE to DBRE a year ago). I have MySQL and Postgres running, and am constantly playing with them. Can you actually do a live buffer pool resize in MySQL? (yes) Is XFS actually faster than ext4 for large DROP TABLE operations? (yes, but not by much) Is it faster to shut down a MySQL server and roll back to a previous ZFS snapshot than to rollback a big transaction? (often yes, although obviously a full shutdown has its own problems) Does Postgres suffer from the same write performance issue as MySQL with random PKs like UUIDv4, despite not clustering by default? (yes, but not to the same extent - still enough to matter, and you should use UUIDv7 if you absolutely need them)

    I legitimately love this stuff. I could quite easily make do without a fancy enclosed rack and multiple servers, but I like them, so I have them. The fact that it tends to help my professional growth out at the same time is a bonus.

    [0]: https://github.com/trapexit/mergerfs

    [1]: https://www.snapraid.it

    [2]: https://wiki.servarr.com

    [3]: https://github.com/stephanGarland/packer-proxmox-templates

    [4]: https://github.com/stephanGarland/ansible-initial-server

    [5]: https://longhorn.io
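
    As a footnote to the snapshot-rollback trick mentioned in the post above, a rough sketch of how that sequence looks (the dataset and snapshot names are hypothetical):

      # take a snapshot before the risky change
      zfs snapshot tank/mysql@pre-change

      # if it goes wrong: full shutdown, rollback, restart
      systemctl stop mysql
      zfs rollback tank/mysql@pre-change
      systemctl start mysql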

  • Merge/Raid HDD documentation
    2 projects | /r/CasaOS | 29 Jun 2023
    You can always use SnapRAID. There is no user interface; it is CLI-only, and you have to sync it manually or set up a cron job. You give up one HDD, like unRaid or RAID 5, but it gives you parity. Then you could always use Duplicati and Backblaze business to make backups; it isn't as expensive as you would think for a homelab. The first backup might be a little much, but it's pennies after that.
  • Converting my old pc to a backup solution
    2 projects | /r/HomeServer | 9 May 2023
    As for the drives, I'm thinking of grabbing a few from ServerPartDeals and upgrading my setup, which uses DrivePool and SnapRAID; in Linux you would use mergerfs instead of DrivePool.
  • Thinking of switching from a 4 bay hardware RAID 5 to an 8 bay JBOD. Looking for opinions.
    3 projects | /r/HomeServer | 4 May 2023
    I myself subscribe to the teachings of IronicBadger (Alex Kretzschmar) from the Self-Hosted podcast, and (when I get one set up) intend to follow the guides on his site https://perfectmediaserver.com, using mergerfs to turn a JBOD into a single filesystem and SnapRAID for redundancy.
  • RockPro64 boot issues
    2 projects | /r/PINE64official | 1 Apr 2023
    Since I store static files shared via NFS and Samba, I run snapraid every night to make a parity file (a backup of a backup, haha), use mergerfs to simulate one big drive, and encrypt each drive with LUKS. Not a great setup for a DB, but I could also partition and RAID just those. I can lose 1 disk with 0% loss, but if I lose 2 disks I lose only the files on those 2 disks (which was less of a loss when I had 10 drives). I can also have separate heads reading files if they happen to be on different drives.
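    A rough sketch of that per-drive layout (device names and mount points are hypothetical):

      # unlock and mount each LUKS-encrypted data drive
      cryptsetup open /dev/sda1 data1
      cryptsetup open /dev/sdb1 data2
      mount /dev/mapper/data1 /mnt/disk1
      mount /dev/mapper/data2 /mnt/disk2

      # pool the mounted drives into one tree with mergerfs
      mergerfs -o allow_other,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/storage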
  • Looking for Feedback on my OMV6 setup
    2 projects | /r/OpenMediaVault | 9 Mar 2023
    Since migrating to ZFS means copying all the data twice (off the HDDs, and into the ZFS pool), and your data rarely changes, I would suggest going with SnapRAID (a "RAID" on a filesystem basis) and MergerFS (merging all drives under one path). You can start with already-filled drives, so no copying is involved.
  • Is there something similar to RAID 0 but with directory-level striping?
    2 projects | /r/HomeServer | 3 Mar 2023
    Just for completeness, I'll throw snapraid into the mix if you would like some parity data to protect against bit rot and disk failure ;) (it doesn't replace a good backup, though...)
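    Checking for bit rot is a separate pass from syncing; a sketch of the scrub commands (the percentage and age values here are arbitrary):

      # verify 5% of the array per run, limited to blocks not scrubbed in the last 30 days
      snapraid scrub -p 5 -o 30

      # report overall array health, including any errors found
      snapraid status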
  • Those of you with 100TB+, what do you do for backups?
    3 projects | /r/DataHoarder | 25 Feb 2023
  • Proxmox raid 5 or 6 config
    2 projects | /r/Proxmox | 23 Feb 2023
    I run a combination of snapraid for data I can replace (but want a safety net for when a drive eventually fails), and ZFS for data that is not easily replaced or cannot be replaced at all. That data is also backed up, of course. VM/container storage is always ZFS mirrors for me.

mergerfs

Posts with mentions or reviews of mergerfs. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • How do I use multiple hard drives on Kubuntu for steam?
    2 projects | /r/Kubuntu | 10 Dec 2023
    Have a look at mergerfs.
  • The Next Gen Database Servers Powering Let's Encrypt (2021)
    5 projects | news.ycombinator.com | 17 Sep 2023
    (The same comment is quoted in full under snapraid above.)

  • Merge/Raid HDD documentation
    2 projects | /r/CasaOS | 29 Jun 2023
    It seems similar to mergerfs (https://github.com/trapexit/mergerfs). I haven't gone through any code to verify, but this is what it seems like.
  • Looking for a solution to merge storage accross WAN
    2 projects | /r/selfhosted | 28 May 2023
    I use mergerfs to make my Google Drive, Dropbox, and local drives appear as a single folder structure on my server, so my Plex doesn't require multiple mappings.
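    In practice that kind of pool can be a single fstab entry; a sketch, assuming the cloud remotes are already mounted (e.g. via rclone mount) at the paths shown:

      # /etc/fstab -- pool two cloud mounts and a local disk under /mnt/media
      /mnt/gdrive:/mnt/dropbox:/mnt/local  /mnt/media  fuse.mergerfs  allow_other,cache.files=partial,dropcacheonclose=true  0 0

    Plex then needs only the single /mnt/media mapping.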
  • Thinking of switching from a 4 bay hardware RAID 5 to an 8 bay JBOD. Looking for opinions.
    3 projects | /r/HomeServer | 4 May 2023
    I myself subscribe to the teachings of IronicBadger (Alex Kretzschmar) from the Self-Hosted podcast, and (when I get one set up) intend to follow the guides on his site https://perfectmediaserver.com, using mergerfs to turn a JBOD into a single filesystem and SnapRAID for redundancy.
  • an unknown system crash (Fedora 37) - 'timed out waiting for device'; 'dependency failed'; 'detected aborted journal'; 'remounting filesystem read-only'
    2 projects | /r/Fedora | 25 Apr 2023
    Note that I'm running mergerfs 2.35.1 and Transmission daemon 4.0.2 (2a57b17031) (the latter was one release behind at the time of the error).
  • Does anyone know about Terry's bookmarks?
    2 projects | /r/TempleOS_Official | 10 Apr 2023
    Going to https://www.templeos.org/wb/home/downloads/blog/bookmarks.html took me to a totally different site: https://spawn.link
    2 projects | /r/TempleOS_Official | 10 Apr 2023
    spawn.link is trapexit's personal website. He's the guy who bought the domain after Terry's death.
  • RockPro64 boot issues
    2 projects | /r/PINE64official | 1 Apr 2023
    Since I store static files shared via NFS and Samba, I run snapraid every night to make a parity file (a backup of a backup, haha), use mergerfs to simulate one big drive, and encrypt each drive with LUKS. Not a great setup for a DB, but I could also partition and RAID just those. I can lose 1 disk with 0% loss, but if I lose 2 disks I lose only the files on those 2 disks (which was less of a loss when I had 10 drives). I can also have separate heads reading files if they happen to be on different drives.
  • How to design big 36-drive XFS array and setup Linux to report failures?
    3 projects | /r/DataHoarder | 1 Apr 2023
    Ah, gotcha. I can understand building it yourself; it's the best way to learn, IMO. Mergerfs merges multiple logical paths together. It looks like it's still maintained and was updated recently.

What are some alternatives?

When comparing snapraid and mergerfs you can also consider the following projects:

OpenMediaVault - openmediavault is the next generation network attached storage (NAS) solution based on Debian Linux. Thanks to the modular design of the framework it can be enhanced via plugins. openmediavault is primarily designed to be used in home environments or small home offices.

Greyhole - Greyhole uses Samba to create a storage pool of all your available hard drives, and allows you to create redundant copies of the files you store.

mergerfs-tools - Optional tools to help manage data in a mergerfs pool

Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]

chia-plotter-deployment - A Bunch of Scripts to setup a Chia Farm. Focusing on, but not limited to, the MadMax Plotter, and HPool.

cloudplow - Automatic rclone remote uploader, with support for multiple remote/folder pairings. UnionFS Cleaner functionality: Deletion of UnionFS whiteout files and their corresponding files on rclone remotes. Automatic remote syncer: Sync between different remotes via a Scaleway server instance, that is created and destroyed at every sync.

rclone - "rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

docker-mergerfs - https://github.com/trapexit/mergerfs in docker

Elucidate - Elucidate: A GUI to drive the SnapRAID command line (via .Net)

dupeguru - Find duplicate files

MultiPar - Parchive tool