Hello, I have a video dataset with TV recordings that is several TB in size. I recently found the idea of using zfs send/recv of snapshots for backups very appealing, as I can skip the rsync step and still have a frozen, valid state.
However, the folder is larger than any of the backup drives.
I assume I have to create separate datasets and move the files around manually in order to regroup them into chunks that each fit on one disk? Is there no option to split a dataset automatically?
Does anyone know of a script that, given a size limit, automatically creates subfolders and moves files into them so that each subfolder stays under the per-disk limit? (A rough sketch of the kind of thing I mean is below.)
Any other suggestions?
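To make it concrete, here is an untested sketch of what I'm imagining: walk the dataset, pack the files by size (first-fit decreasing), and move them into numbered subfolders that each stay under a per-disk limit. The SRC path, the 7 TiB limit, and the disk01/disk02 naming are just placeholders.

```python
#!/usr/bin/env python3
"""Sketch: pack files from SRC into disk-sized subfolders (first-fit decreasing)."""
import os
import shutil
import sys

SRC = "/mnt/tank/recordings"   # placeholder source path
LIMIT = 7 * 1024**4            # placeholder per-disk limit (7 TiB), leave some headroom
DRY_RUN = True                 # only print the planned moves; set False to actually move

def main():
    # Collect (size, path) for every regular file under SRC.
    files = []
    for root, _dirs, names in os.walk(SRC):
        for name in names:
            path = os.path.join(root, name)
            if os.path.isfile(path):
                files.append((os.path.getsize(path), path))

    # First-fit decreasing: placing the biggest files first wastes less space per disk.
    files.sort(reverse=True)
    bins = []   # each bin is [used_bytes, bin_index]
    plan = {}   # path -> bin_index

    for size, path in files:
        for b in bins:
            if b[0] + size <= LIMIT:
                b[0] += size
                plan[path] = b[1]
                break
        else:
            # No existing bin has room: start a new one.
            bins.append([size, len(bins) + 1])
            plan[path] = len(bins)

    # Move (or print) each file into SRC/diskNN/, keeping its relative layout.
    for path, idx in plan.items():
        rel = os.path.relpath(path, SRC)
        dest = os.path.join(SRC, f"disk{idx:02d}", rel)
        if DRY_RUN:
            print(f"{path} -> {dest}")
        else:
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(path, dest)

    # Summary of how full each planned disk would be.
    for used, idx in bins:
        print(f"disk{idx:02d}: {used / 1024**4:.2f} TiB used", file=sys.stderr)

if __name__ == "__main__":
    main()
```

Each diskNN folder could then become its own dataset (or be sent with a separate snapshot) so that one send/recv stream fits on one backup disk.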
Per disk?
If the backup target is ZFS (which I assume it is, since you mentioned zfs send/recv), why not combine the disks into a single large ZFS pool to hold your backups? Hopefully with some redundancy, such as a mirror or RAIDZ1.
That's a good idea, but given the money I've spent on the NAS and its disks, plus the same number of disks again for backup, I'd like to avoid sacrificing any backup disk just for redundancy (and still being dependent on a whole pool working).
Instead, I'd like to use each backup disk separately, hence the question. The advantages: no additional loss of storage, each disk stands on its own, and restoring is easier, e.g. via a USB dock.
Can you explain your setup more?
Where are these disks being installed? On the same server? A separate server? Only via a USB bay?
The source system is a DIY TrueNAS Scale box with a Xeon D-2148, 64 GB RAM, 10 Gbps networking, and a Broadcom 12 Gbps SAS HBA in IT mode for 10 SATA spinners. Boot runs on a SATA SSD.
Backup disks can be connected either locally via USB or to a separate unRAID system with 10 Gbps networking and hot-swappable SATA drive bays.