Slow upgrade of zpool disks?

Hi all, I wasn’t sure how to search for this topic, so apologies if it has been covered.

As I'm hovering just below the recommended 80% utilization level (deleting superfluous data regularly), I'm considering my upgrade strategy. I currently have four 5 TB Barracudas configured in a RAID-Z1 and was wondering whether it is feasible to upgrade the drives one at a time with 6 or 8 TB Exos drives: replace, resilver, repeat? This would let me spread the expense out over time rather than take the body blow of $1k all at once. My understanding is that ZFS does not have a drive-uniformity requirement?

So: 1) is this even possible, and 2) if so, are there best practices to work from (and what are they)?

Thanks!

Yes - that would work fine.

Replacement drives have to be the same size OR larger. Heads-up: until you've replaced ALL the drives with larger ones, you're stuck at the size of the smallest one. E.g. you have a five-drive RAID-Z1 with one 4 TB drive and four 8 TB drives: congratulations, you only have 16 TB of usable space (one drive of redundancy) until you swap that last 4 TB drive for an 8 TB one. Once all drives are 8 TB, you'll have 32 TB of usable space (still one drive of redundancy).
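For reference, a minimal sketch of how to check this from the shell (the pool name `tank` and the device name are placeholders, not from this thread):

```sh
# Placeholder pool name "tank" -- substitute your own pool.
zpool list tank                  # shows SIZE, FREE, and any pending EXPANDSZ

# With autoexpand on, the pool grows on its own once the last
# (smallest) drive has been replaced and resilvered:
zpool set autoexpand=on tank

# If the property was off when you finished swapping drives, you can
# trigger the expansion per device afterwards:
zpool online -e tank ata-EXAMPLE_DISK
```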

100% possible - what you said works fine. Just make sure you test & burn in new drives as you acquire them prior to deployment.
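To make the burn-in advice concrete, here is the sort of thing people usually run (just a sketch; `/dev/sdX` is a placeholder, and note that `badblocks -w` destroys everything on the target drive):

```sh
# Destructive burn-in on a brand-new, EMPTY drive -- double-check the
# device path first! /dev/sdX is a placeholder.
smartctl -t long /dev/sdX        # extended SMART self-test
smartctl -a /dev/sdX             # review results once the test finishes

# Full write/read pattern test (wipes the drive; can take a day or more
# on large disks). -b 4096 avoids the 32-bit block-count limit on 6-8 TB drives.
badblocks -b 4096 -wsv /dev/sdX

smartctl -a /dev/sdX             # re-check: no pending or reallocated sectors
```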


Upgrading over time is fine. RAID-Z1 with a four-drive vdev is a bit risky, so if you have a spare SATA port you should power off, add the new Exos, power back up, and replace a drive in place. The data gets moved from 'drive 4' to the new drive while the pool keeps its redundancy.

If you don't have a spare SATA port, you're stuck removing 'drive 4', running a degraded pool, and then adding in your Exos.

You would repeat this over time until all drives have been replaced.
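In shell terms, one round of the swap looks roughly like this (a sketch assuming a pool named `tank`; the `/dev/disk/by-id/...` paths are placeholders):

```sh
# Use /dev/disk/by-id/ paths so the replacement survives device-name
# reshuffles across reboots. "tank" is a placeholder pool name.
zpool status tank                                  # identify the outgoing drive

# With the new Exos attached on the spare SATA port (in-place replace,
# redundancy is kept during the resilver):
zpool replace tank /dev/disk/by-id/ata-OLD_DRIVE /dev/disk/by-id/ata-NEW_DRIVE

# Without a spare port: offline the old drive, swap it physically, then
# replace -- the pool runs degraded until the resilver completes.
zpool offline tank /dev/disk/by-id/ata-OLD_DRIVE
zpool replace tank /dev/disk/by-id/ata-OLD_DRIVE /dev/disk/by-id/ata-NEW_DRIVE

# Watch resilver progress; wait for it to finish before touching the next drive.
zpool status -v tank
```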

Testing the new Exos drives as recommended above is important: do a proper burn-in test before putting them in the pool.


I also recommend this, if possible.

If you are using Seagate Barracudas, they are likely SMR HDDs, which are a poor choice for use with ZFS. Replacing them, one at a time, with Exos drives is a good way to get rid of the SMR problem.

There are two problems with SMR in general, and a third with Western Digital's Red SMRs, which you don't seem to have.

  1. Write speed is reduced because writing a shingled track requires re-writing the following track(s).
  2. Read speed is reduced over time because of the fragmentation introduced by shingle allocations not being linear.

The problem is worse if you need or want to use an SMR HDD as the replacement disk. Even though an SMR HDD starts off un-fragmented, ZFS's constant updating of critical metadata (and possibly regular metadata) during a resilver will introduce tons of SMR-specific fragmentation.

Replacing an SMR HDD with a CMR (regular) HDD can still be a problem. If the source disks are SMR, item #2 comes into play: reduced read speed slows the resilver. Though not as bad as attempting to use an SMR drive as the replacement, it is still noticeable.

Sorry for the long explanation. You may know some or all of this. But, just in case you don’t or someone else in the future reads this, the explanation is here.


Excellent news, thank you all! I don't have a problem waiting to realize the additional space. Very happy I can do this at all. I do have one extra SATA port, so I'll certainly take that advice. And yes, definitely burn in and test.

Thanks again.

EP


Make sure you double-check the SN# when you go to unplug drives. TrueNAS tends to shuffle your expected sda/sdb/sdc/whatever assignments when you reboot.

What you thought was 'sda' when you did the replace might not be the same drive when you're ready to pull it. Always confirm the SN#!
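One quick way to do that check on TrueNAS SCALE (Linux) before pulling anything, sketched with a placeholder `/dev/sdX`:

```sh
# Map kernel device names to model and serial number before pulling a drive.
lsblk -o NAME,MODEL,SERIAL,SIZE

# Or query a single drive directly:
smartctl -i /dev/sdX | grep -i serial
```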
