Upgrading VDEV with Larger Disks

When I initially set up my “archive” VDEV as RAIDZ3, it had two 8 TB disks and three 4 TB disks. After moving most of my files off my old Sans Digital contraption, I repurposed its 8 TB drives, adding them to my VDEV. Now “archive” has eight 8 TB disks, but the Usable Capacity is only 23.07 TiB. Given five non-parity drives x 8 TB, I would have expected the usable capacity to approach 40 TB.

After all the replacement, expansion, resilvering, and scrubbing, I tried

zpool set autoexpand=on archive

although I read somewhere this is the default. However, the VDEV is still stuck at around 20 TB.
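In case it matters, these are the only checks I know of to verify the property and any unclaimed space (pool name as above; I may well be missing something):

sudo zpool get autoexpand archive    # should report "on"
sudo zpool get expandsize archive    # a non-zero value here would mean there is still unclaimed space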

Is there some other step(s) I am missing?

FYI, I am running on a Supermicro H13SSL-NT, AMD 9015, 64 GB RAM, in a 45HomeLabs HL15 chassis, documented at github.com/kolotyluk/home-lab

You should have 36.2 TB. You can use this rebalancing script; I can't post links for some reason… Search for markusressel zfs-inplace-rebalancing on GitHub.

https://github.com/markusressel/zfs-inplace-rebalancing

Thank you for this, both crashbun and markusressel.

Thanks for describing the history of the pool in detail. I can infer that you replaced drives and used raidz expansion while the pool was already filled. Both of these processes have issues in SCALE.
But 5*4 TB would only give 20 TB, and you are reporting more than that, so it seems that autoexpand did succeed. (sudo lsblk can confirm.)
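For example (just a sketch; device names will differ on your system):

sudo lsblk -d -o NAME,SIZE,MODEL    # each 8 TB data disk should show up as roughly 7.3T
sudo zpool list -v archive          # every raidz3 member should report the full disk size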

So this must be purely the vdev expansion thing.
Good news: Capacity is there, just not properly reported.
Bad news: The fundamental issue lies upstream in OpenZFS, and is unlikely to be addressed any time soon. The only possible fix is… “backup-destroy-restore”.

What you perhaps are not aware of is that - unlike with RAID expansion on other systems - ZFS VDEV expansion does not rewrite the parity blocks, but merely redistributes the existing data and parity blocks across the old disks and the new disk. That means your existing data will still maintain the original ratio of data blocks to parity blocks, while any new data will use the new (better) ratio.

Example:

  • RAIDz1 with 3 drives. Your ratio of data blocks to parity blocks is 2:1.
  • If you add a 4th drive, the ratio of data blocks to parity blocks for your existing data blocks will remain 2:1. Any newly created data blocks will have a data to parity ratio of 3:1.
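Rough numbers for that example (plain shell arithmetic, nothing ZFS-specific):

echo "2/3" | bc -l    # ~0.67 of each old block's footprint is data (2 data : 1 parity)
echo "3/4" | bc -l    # ~0.75 for blocks written after the expansion (3 data : 1 parity)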

The way I understand these rebalancing scripts to work is that they recursively copy each of your files to a temporary file (with block-cloning disabled), thus creating new data blocks with the new data to parity ratio, then delete the original file and rename the temporary file back to the original file name.
(Apparently block-cloning is a newer ZFS feature, so very old scripts may not account for it and thus won't work as intended. Though I'm not sure whether block-cloning is enabled by default in TrueNAS SCALE anyway.)
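In essence the idea is a loop like this - a heavily simplified sketch, not the actual script (the real one does a lot more checking and bookkeeping), and the dataset path is a placeholder:

find /mnt/archive/some-dataset -type f -print0 | while IFS= read -r -d '' f; do
    cp --reflink=never -a "$f" "$f.rebalance"    # real copy of the data, block cloning bypassed
    cmp -s "$f" "$f.rebalance" || { echo "copy mismatch on $f" >&2; break; }
    mv "$f.rebalance" "$f"                       # drop the original and rename the copy back
done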

AFAIK the one downside of this procedure is that - if you use snapshots - the old data and parity blocks will not be released until the snapshots themselves expire or are deleted. So if you do not have enough disk space to hold two copies of your data, do the rebalancing piecemeal and delete snapshots in between.
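To see how much space snapshots are pinning per dataset while you do this (a sketch; the dataset and snapshot names are placeholders):

sudo zfs list -r -o space archive                        # the USEDSNAP column is space held only by snapshots
sudo zfs destroy archive/some-dataset@some-snapshot      # deleting a snapshot releases its old blocks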

Yes, you should have free space for this; I think he mentioned 20% of the pool size should be fine. Also, if you are swapping HDDs for bigger ones, you can just replace a disk or two, let the pool resilver, and keep going. I did it the other week going from 5x10TB to 5x16TB and it worked perfectly.
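Roughly, the one-at-a-time swap looks like this (pool and device names here are placeholders, and each resilver must finish before you touch the next disk):

sudo zpool offline tank sdX       # optional: take the old disk out of service first
sudo zpool replace tank sdX sdY   # sdY is the new, bigger disk; this starts a resilver
sudo zpool status tank            # wait until the resilver completes, then repeat for the next disk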

Rebalancing recovers some space that was otherwise lost to retaining the previous data:parity ratio.
But free space reporting is botched because it still uses the data:parity ratio of the pool at creation, here a very, very unfavourable 2:3. This will NOT be addressed by rebalancing.
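Rough numbers for this pool (assuming 8 x 8 TB RAIDZ3 that started out 5-wide):

echo "8*8 * 2/5" | bc -l    # 25.6 TB ≈ 23.3 TiB usable at the old 2 data : 3 parity ratio (the ~23 TiB reported)
echo "8*8 * 5/8" | bc -l    # 40.0 TB ≈ 36.4 TiB usable at the current 5 data : 3 parity ratio (the 36.2 TB mentioned above)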


I switched to SCALE because I can't expand my 5-disk RAIDZ2 to a 6-disk RAIDZ2 without deleting everything. I will try the new Eel beta, see how it works, and report back if it doesn't work. I was hoping the rebalancing thing works there, because I've changed my 5x10TB to 5x16TB on TrueNAS CORE and I have all the space that I should. Hoping I will after expanding, too.

Oh wow, I hadn’t realized that. So the questions I have as a newbie then:

  • How can I best obtain the true picture of space utilization, data to parity ratio, etc.?

  • Does the TrueNAS Scale UI present the “true picture” of these things or is the UI’s data also thrown off by a raidz expansion? If the latter, that might be worth a feature request, perhaps.

You should be able to see the actual capacity by running sudo zpool list.

Thanks for all the replies… I am still absorbing this and reflecting on it…

For now, things are stable, so I have time to ‘measure twice’ before my next cut.

truenas_admin@truenas[~]$ sudo zpool list
[sudo] password for truenas_admin: 
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
archive    58.2T  32.6T  25.7T        -         -     0%    55%  1.00x    ONLINE  /mnt
boot-pool  7.27T  2.67G  7.26T        -         -     0%     0%  1.00x    ONLINE  -
scratch    1.81T  1.37T   455G        -         -     1%    75%  1.00x    ONLINE  /mnt

:partying_face:

Thanks @Protopia

yup - your archive pool has 8x8TB in total = 64TB = 64 x 10^12 bytes.

58.2 TiB is 58.2 x 2^40 bytes = c. 63,991,576,736,563, or about 64 x 10^12.

zpool list reports based on blocks - total, used, free - and is accurate.

Almost all of the TrueNAS UI figures are based on zfs stats (rather than zpool stats) which are based on the size of files and take less account of:

  • Compression
  • Redundancy data (mirror / raidz)
  • Block cloning
  • etc.
  • A bug in RAIDZ expansion.
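If you want to compare the two views yourself, something like this shows them side by side (dataset names will differ):

sudo zpool list archive                                            # block-level view: raw size, alloc, free
sudo zfs list -r -o name,used,avail,refer,compressratio archive    # file-level view, closer to what the UI shows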