I have a server running Debian with 24 TB of storage. I would ideally like to back up all of it, though much of it is torrents, so only the ones with few seeders really need to be backed up. I know about the 3-2-1 rule, but it sounds like it would be expensive. What do you do for backups? Also, if anyone uses tape drives for backups, I’m kinda curious about that, potentially for offsite backups in a safe deposit box or something.
TLDR: title.
Edit: You have mentioned Borg and rsync, and while Borg looks good, I want to go with rsync as it seems to be more actively maintained. I would also like to have my backups encrypted, but rsync doesn’t seem to have that built in. Does anyone know what to do for encrypted backups?
As long as you understand that simply syncing files does not protect against accidental or malicious data loss like incremental backups do.
I also hope you’re not using `--delete`, because I’ve heard plenty of horror stories about the source dir becoming unmounted and rsync happily erasing everything on the target.

I used to use rsync for years, thinking just like you that having plain old files beats having them in fancy, obscure formats. I’m switching to Borg nowadays, btw, but that’s my choice; you’ve got to make yours.
rsync can work incrementally, it just takes a bit more fiddling. Here’s what I did. First of all, no automatic `--delete`. I did run it every once in a while, but only manually. The sync setup was:

- Nightly: sync the source into the nightly dir.
- Weekly: sync the nightly dir into the weekly dir.
- Monthly: tarball the weekly dir into the monthly dir.

It’s not bad, but it’s limited in certain ways, and of course you need lots of space for backups, or you have to pick and choose what you back up.
Borg can’t really get around the space-for-backups requirement, but it’s always incremental, and between compression and deduplication it can save you a ton of space.
Borg also has built-in backup checking and repair (`borg check`), which rsync doesn’t; with rsync you’d have to rig up your own solution, like par2 checksums (and those take up space too).
Re needing lots of space: you can use `--link-dest` to make a new directory with hard links to unchanged files in a previous backup, so you end up with de-duplicated incremental backups. Borg handles all of that transparently; with rsync you need to plan the `--link-dest` paths carefully (relative ones are resolved against the destination directory) to get it to work correctly.
Yeah Borg will see the duplicate chunks even if you move files around.
> As long as you understand that simply syncing files does not protect against accidental or malicious data loss like incremental backups do.
Can you show me a scenario? I don’t understand how incremental backups cover the malicious data loss case.
Let’s say you’re syncing your personal files into another location once a day.
On Monday you delete files. On Tuesday you edit a file. On Wednesday you maybe get some malware that (unknown to you) encrypts some files (or all of them).
A week later you realize that things went wrong and you want the deleted files back, or the old versions of the file you edited, and of course you’d want back the files that the ransomware has encrypted.
If you simply sync files, you have no way to get deleted files back. Every day the sync copies over whatever is in the source, overwriting what was there before. If you also sync deletions, then the sync deletes the files on the target too. If you don’t sync deletions, then files keep piling up on the target whenever you delete or move them.
An incremental backup system like Borg looks at small file chunks, not at whole files. Whenever a file changes, it makes a copy of only the chunks that changed. That way it can give you the latest version of the file, but also every version before it, and it doesn’t store the same file over and over: only the chunks that really changed, and only one copy of each chunk. If you move a file to another folder, it still has the same chunks, so Borg records the move but doesn’t store the chunks twice. Likewise, if several files share identical chunks, those chunks are only stored once. And it never deletes backups unless you explicitly tell it to.
Borg can give you perfect recall of all past versions of every file, and it can do that while saving tremendous amounts of space (between chunk deduplication and compression).
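For the curious, here’s a hedged sketch of what that looks like with Borg 1.x commands (`borg init`, `borg create`, `borg list`); the repo path, passphrase, and archive names are all placeholders, and in real use you’d keep the passphrase out of the script:

```shell
set -eu

# Borg 1.x sketch: encrypted, compressed, deduplicated backups, plus
# listing past archives. Skips gracefully if borg isn't installed.
demo_borg() {
    repo=$1; src=$2
    if ! command -v borg >/dev/null 2>&1; then
        echo "borg not installed, skipping demo"
        return 0
    fi
    export BORG_PASSPHRASE='example-only'   # demo only; use a real secret manager
    borg init --encryption=repokey "$repo"
    borg create --compression zstd "$repo::snap-{now:%Y-%m-%d}" "$src"
    borg list "$repo"
    # To restore an old version of a single file later:
    #   borg extract "$repo::snap-2024-01-01" path/to/file
}
```

This also covers the encryption question from the edit above: with `--encryption=repokey` the repository contents are encrypted at rest, which plain rsync can’t do by itself.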