I have a pool (1 x RAIDZ1 | 4 wide | 3.64 TiB), mostly used for media content.
In the near future I want to replace these drives with 4x 14TB drives, also in RAIDZ1.
What is the best way to do this?
Option a)
I copy the data that is dear to me onto one of the new 14TB drives.
I remove the old drives and install the new ones (only 3 of them).
I set up a new RAIDZ1 pool with those 3 drives.
I copy the data from the single 14TB drive to the pool.
I wipe that 14TB drive and add it to the pool (now 4 drives).
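If I understand it right, the CLI equivalent would be roughly the following (the pool names "tank" and "scratch" and the device paths are made up; in practice I'd do this through the TrueNAS UI):

```
# 1. Use one 14TB drive as a temporary single-disk pool and copy the important data over
zpool create scratch /dev/sda
rsync -a /mnt/tank/media/ /mnt/scratch/media/

# 2. Destroy the old pool, install three of the new drives, build the new 3-wide RAIDZ1
zpool destroy tank
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# 3. Copy the data back onto the new pool
rsync -a /mnt/scratch/media/ /mnt/tank/media/

# 4. Afterwards, wipe the scratch drive and attach it to grow the RAIDZ1 to 4 wide
zpool destroy scratch
```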
Option b (requires an HBA or equivalent): throw the new drives in the same box on the same controller, build a new pool, and use zfs send to copy all your data to the new big pool. You can fire and forget with this approach.
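Something along these lines (pool and device names are just examples):

```
# Build the new pool on the four 14TB drives
zpool create bigtank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Snapshot everything recursively and replicate it to the new pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -uF bigtank
```

-R carries all child datasets, snapshots, and properties; -u keeps the received datasets unmounted until you are ready to switch over.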
Option c: takes forever, but proven. Upgrade the existing pool one drive at a time: offline an old disk, replace it with a big disk, and resilver onto the new drive. You won't see the full new pool capacity until this has been done for all the drives.
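Per drive, the CLI side is basically this (pool and device names are placeholders):

```
# Let the pool grow on its own once the last drive has been swapped
zpool set autoexpand=on tank

# For each old drive, one at a time:
zpool offline tank /dev/old_disk
# ...physically swap in the 14TB drive, then:
zpool replace tank /dev/old_disk /dev/new_disk
zpool status tank   # wait for the resilver to finish before touching the next drive
```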
From what you guys have suggested so far, option a) seems to be the best and fastest way for me.
Even if I lose all the data, no real harm done, but still… I have around 8TB of media data I don't want to download again.
Copying from the existing pool to a 14TB drive should take a few minutes, setting up the new pool also a few minutes,
copying back from the 14TB drive to the new pool a few minutes, and the resilver of the 14TB drive into the other 3 drives already in the new pool will take the longest.
Sure, because your RAIDZ1 only allows 1 drive to fail. A resilver of >8TB will take some time, and during that window another drive could easily fail too, and your pool would be gone. That's why you need to have good backups.
I've never done a resilver, so I have no clue how long it takes. I've heard it takes a long time, but roughly how long would that be with 14TB drives?
Also: For backups I use the 3-2-1 approach, so backups are good.
Depends on how full they are. If I had to give a rough estimate: ~1.5 hours per TB written. So if your new 14TB drive has to receive 10TB during a resilver, approximately 15 hours is in the right ballpark.
I just realized it also depends on how fragmented the files are, but I still think the estimate is close, based on my experience with 8TB drives.
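You can also watch it while it runs; zpool reports the scan progress and a remaining-time estimate (pool name is a placeholder):

```
zpool status -v tank   # shows resilver progress, throughput, and an estimated time to go
```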
Reread your OP. Looks like A is the quick choice: take one new drive, format it to whatever, and copy your stuff from the array onto it. Build the new array as a 3-wide Z1, then copy the data from that drive onto the new array.
Of course the last step is wiping that drive later and adding it to the Z1 to grow it to 4 wide via RAIDZ expansion. That wasn't possible before, but I think it is now.
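If your OpenZFS/TrueNAS version supports RAIDZ expansion (OpenZFS 2.3 and recent TrueNAS SCALE releases), the grow step should be a single attach against the raidz vdev, something like this (vdev and device names are placeholders; check zpool status for the real vdev name):

```
zpool attach tank raidz1-0 /dev/sda
zpool status tank   # shows the expansion progress until it completes
```

As far as I understand, data written before the expansion keeps its old data-to-parity ratio, so free space may look a little lower than on a freshly built 4-wide pool until old data is rewritten.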
If you have a proper grandfather-father-son backup process, there's nothing wrong with big drives in Z1. It's risky, but only if you rely on those drives for too many years or give them a hard life. I have a 5-wide Z1 with 18TB drives; I know the chance I'm taking for all that maximum storage. Still not as bad as RAID0 or JBOD.
I’m just reporting that the transition was smooth.
I went to the Storage section and clicked the “Export/Disconnect” button.
Checked “Delete saved configurations from TrueNAS?” and “Confirm Export/Disconnect *” and confirmed.
After the process was done, I shut down the server, replaced the drives, powered the server on.
In TrueNAS SCALE I created a new pool with the same name as the previous one and recreated the datasets with the same names.
Copied everything I needed back to the corresponding folders and started all the container apps (a rough CLI equivalent of these steps is sketched below).
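For anyone curious, roughly the CLI equivalent of what the UI did (the dataset names here are examples from my setup, device paths and the backup path are placeholders):

```
zpool export tank                # roughly what "Export/Disconnect" does under the hood
# ...swap the drives, then rebuild the pool and recreate the datasets:
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs create tank/media
zfs create tank/apps
rsync -a /path/to/backup/media/ /mnt/tank/media/
```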
It took me around one hour for this to complete (around 400GB of data to recover/copy).
The only issue I had was setting up the shares (ACLs are a nightmare for me; I just don't get them).
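For the record, the shell side of it looks roughly like this, assuming a dataset with the POSIX ACL type and a made-up "apps" user (SMB-type datasets use NFSv4 ACLs and the UI editor instead, which is where I get lost):

```
getfacl /mnt/tank/media                    # show the current ACL entries on the dataset
setfacl -R -m u:apps:rwX /mnt/tank/media   # give the "apps" user recursive read/write access
```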