I’m thinking about building a separate NAS to hold backups of a few datasets from my main TrueNAS server. I’ve definitely gone far too long relying only on RAIDZ2 to keep my data protected…
I’m in two minds about how to proceed, however. I could build another TrueNAS box and run a ZFS pool just for backups, which would give me the advantage of ZFS snapshots that I could just send/receive over my network. But most of my spare drives (which will end up being used for backups) are of very mixed sizes, and I’m already short on spare drive space compared to my primary pool. Between that and the ~20% free space you’re supposed to leave in a ZFS pool, I’d need to be really selective about which datasets I back up.
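For context, the send/receive workflow on a second TrueNAS box really is this simple (pool, dataset, and host names here are made up for illustration):

```shell
# On the primary server: take a recursive snapshot of the dataset
zfs snapshot -r tank/important@backup-2024-01-01

# Initial full send to the backup box over SSH
zfs send -R tank/important@backup-2024-01-01 | \
    ssh backupnas zfs receive -F backuppool/important

# Subsequent runs only send the delta between two snapshots
zfs send -R -i tank/important@backup-2024-01-01 \
    tank/important@backup-2024-02-01 | \
    ssh backupnas zfs receive -F backuppool/important
```

TrueNAS can drive the same thing through its built-in replication tasks, but this is all that’s happening underneath.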
The other option, which I’m leaning towards, is to instead build a backup server on OpenMediaVault and use mergerfs to create one giant array with no redundancy. I’d then mount any datasets on my TrueNAS server that I want to back up over something like SMB, and run rsync on the backup server on a schedule (cron or similar).
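A rough sketch of what that OMV setup might look like (drive paths, share names, and mount points are all placeholders, not a tested config):

```shell
# /etc/fstab on the OMV box: pool the mixed-size data drives with mergerfs
# /mnt/disk1, /mnt/disk2, ... are the individual drive mounts
/mnt/disk* /srv/backup fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

# Mount the TrueNAS dataset read-only over SMB
mount -t cifs -o ro,credentials=/root/.smbcred //truenas/important /mnt/truenas

# Nightly pull via cron (e.g. 0 3 * * *)
rsync -a --delete /mnt/truenas/ /srv/backup/important/
```

The `category.create=mfs` policy just tells mergerfs to place new files on whichever branch has the most free space, which suits mixed-size drives.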
Curious what others are doing and if anyone else has faced a similar situation/ trade-off when building a separate backup server?
I run another TrueNAS system with a RAIDZ1 pool to receive my local backups, and I also have an offsite backup… similar approach.
It doesn’t run anything other than SSH so I can replicate to it.
In the past I did nasty things partitioning various drives so that I could maximize the space…
e.g. say I had two 2TB and three 3TB drives. I could partition each 3TB drive into a 2TB and a 1TB partition, then build a 5-way RAIDZ1 from the 2TB drives and partitions (~8TB usable) and a 3-way RAIDZ1 from the 1TB partitions (~2TB usable), for a total of 10TB of storage.
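A sketch of that layout as one pool with two vdevs (device names are illustrative):

```shell
# sda, sdb         = the 2TB drives (used whole)
# sdc, sdd, sde    = the 3TB drives, each split into a 2TB and a 1TB partition
# vdev 1: 5-way RAIDZ1 over the 2TB devices  -> ~8TB usable
# vdev 2: 3-way RAIDZ1 over the 1TB partitions -> ~2TB usable
zpool create backup \
    raidz1 sda sdb sdc1 sdd1 sde1 \
    raidz1 sdc2 sdd2 sde2
```

Note that losing one 3TB drive takes one member out of *each* vdev, which RAIDZ1 in each vdev can still tolerate, so the scheme is nasty but not reckless.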
This is nasty. But it does work. And I was happy to have the bitrot protection provided by RaidZ1, without the heavy redundancy as it was only one of my backups.
These days… I don’t have to do that… but you could if you want.
With your giant array with no redundancy, your backup would not be protected by checksums unless you’re careful to use the right tools, so a restore could silently hand back corrupted data. Which is why my primary backup, for the most irreplaceable stuff, goes to a ZFS machine. I have several other backups as well.
Believe it or not, my ZFS target is a VPS with expanding storage; I think it grows 1TB/year. It’s offsite, and therefore immune at least to weather, theft, fire, etc.
I also have a Kopia backup from SCALE, for example, and some others. That Kopia target uses plain drives pooled with mergerfs, just as you’re describing. It’s just one part of the strategy though. Kopia is incredibly quick too.
Kopia looks interesting – I’d never come across it before. I see they offer a Windows CLI too, which might give some more flexibility in how I use that backup server (e.g. some of the data recovery software I’ve used in the past is Windows-only).
Does Kopia also handle drive pooling (like mergerfs)? Or, if I did want to run this on Windows, would I need something like Storage Spaces or StableBit alongside it?
Any write-ups you’d recommend reading on configuring and integrating with an existing TrueNAS SCALE instance on a separate server?
Edit – I see Kopia doesn’t handle pooling, and I’m not sure I like the idea of relying on proprietary Windows software alongside it. So I think I’ll look into running Kopia on OMV with mergerfs.
No write-ups that I know of; I just read the docs and went from there. I’m running Kopia as a custom app. Being a backup tool, it obviously needs broad permissions, and it takes some container arguments and environment variables to get running. For the directories I care about, I hostpath them into /source/somedir in the container (read-only). There’s nothing you need on the backup machine beyond a filesystem, or you could use various cloud providers; I’m just using SFTP as the storage repository to a VPS. I’m doing the usual daily/weekly/monthly/yearly type retention. No, Kopia doesn’t do mergerfs, but I have mergerfs on the target machine, so in effect it does.
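For anyone following along, the broad strokes of that look something like this from the Kopia CLI (host, user, key, and repo path are placeholders; check the Kopia docs for the flags in your version):

```shell
# Create (or later connect to) an SFTP-backed repository on the VPS
kopia repository create sftp --host my-vps.example.com \
    --username backup --keyfile ~/.ssh/id_ed25519 --path /srv/kopia-repo

# Daily/weekly/monthly/yearly retention on the global policy
kopia policy set --global \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-annual 3

# Snapshot the hostpath-mounted source directories
kopia snapshot create /source/somedir
```

Scheduling is then just a matter of running `kopia snapshot create` on a timer (or letting the container’s entrypoint loop over it).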