I like duplicacy because of the way it keeps the chunks in the file system, without a special database. That makes it scale really well no matter how many backups you have (you can even back up multiple computers to the same storage). The way you select what to back up with symlinks (in the command-line version) is beyond weird; it looks more like something one would hack together for himself over a weekend (not that I'm complaining about free software!), but it has been bug-free for me and extremely efficient.

Duplicati, in contrast, has a polished interface and is well maintained, but it bogs down on any large backup. There are stories of people spending weeks recovering just a few local TBs, and I've experienced this myself; granted, it was in the Python code checking the sha256 checksums of the backups, but that makes it many times slower (possibly hundreds of times). Did nobody check this from 2013 to 2021? Or did they only test on tiny datasets like 1 GB, or were they content to wait for weeks even on something small-ish?
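If I remember the CLI workflow right, the symlink selection works roughly like this: the repository root is an otherwise ordinary directory, and duplicacy follows the first-level symlinks you place inside it. A minimal sketch (the folder names and storage path are placeholders, and the duplicacy commands are shown only as comments since they need the tool installed):

```shell
# Stand-in for the repository root (normally a fixed directory you keep around).
root=$(mktemp -d)

# First-level symlinks point at the directories you actually want backed up.
ln -s "$HOME/Documents" "$root/Documents"
ln -s "$HOME/Pictures"  "$root/Pictures"

# With duplicacy installed, the rest would look something like:
#   cd "$root"
#   duplicacy init mybackup /mnt/backup-drive   # one-time setup; storage path is hypothetical
#   duplicacy backup                            # traverses the first-level symlinks
```

Odd as it is, the upside is that the selection lives in the file system itself, consistent with duplicacy's no-database design.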