I have a very simple idea here and I just want to check whether it will work before I implement it.
I have a couple of terabytes of archive data, and it grows by about 500GB a year.
Currently this is all on a fairly new 18TB Toshiba, plus another two 3TB Toshibas for backup, all formatted NTFS.
I don't need speed, and I don't need the auto-repair of a RAID array.
But I do want checksums/data integrity checks and bitrot protection.
So this is the idea:
I build a small home server with an SSD that runs TrueNAS (please suggest which version fits my needs).
That server contains only the 18TB Toshiba, formatted as a ZFS pool.
The archive data is dumped to it. When I need to access that data from my MacBook, I turn the server on, and if the pool is mounted properly it is accessible over the network.
Now:
The cold backup stays in an IcyBox external case in NTFS format, and the two 3TB Toshibas go to the basement (they will be the third backup for the most important stuff).
Once every couple of months I run a scrub on the single Toshiba in the small server, just to check for bitrot errors.
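For reference, that periodic check is just two commands at the TrueNAS shell (TrueNAS can also schedule scrubs from the web UI). The pool name "archive" here is my own placeholder, not anything you'd get by default:

```shell
# Start a scrub of the pool; it runs in the background.
# "archive" is a placeholder pool name -- substitute your own.
zpool scrub archive

# Check progress and results. With -v, any files that have
# unrecoverable checksum errors are listed by path at the
# bottom of the output, which is exactly the list you'd
# then restore by hand from the NTFS backup.
zpool status -v archive
```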
If there are any, I check the affected data manually and restore it from the external NTFS backup drive.
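For that manual restore step, keeping a simple checksum manifest of the archive can make the comparison against the NTFS backup less error-prone, and it works on any filesystem. A minimal sketch in Python; the function names and layout are my own, nothing ZFS- or TrueNAS-specific is assumed:

```python
import hashlib
from pathlib import Path


def build_manifest(root: Path) -> dict[str, str]:
    """Walk root and record a SHA-256 hash for every file,
    keyed by path relative to root."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest


def diff_manifests(live: dict[str, str], backup: dict[str, str]) -> list[str]:
    """Return relative paths whose hashes differ, or that exist
    only on one side -- the files worth restoring or re-copying."""
    all_paths = set(live) | set(backup)
    return sorted(p for p in all_paths if live.get(p) != backup.get(p))
```

You'd build one manifest per copy (server pool and NTFS backup), diff them, and only copy files back for the paths that actually differ, instead of trusting a blind full overwrite in either direction.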
Will this work okay? I really don't need the auto-recovery options; I just want the warning about bitrot or corruption so that I can recover manually.
Also, if I work this way, do I eliminate the dangers of not having ECC memory, considering that if things go south my backups are offline anyway?
Is there a chance that ZFS corrupts the data on the main drive because I don't have ECC memory, without telling me about it, and then I accidentally overwrite the backups with that corrupted data?
Thanks!