Available space didn't increase after expansion

Hello!

I have a pool that previously consisted of four mirrored vdevs with the following capacities:

2x6TB
2x6TB
2x10TB
2x4TB

According to TrueNAS, this gave me 23.48TB of usable space for datasets. I created a dataset, expanded it, and filled it up to the brim (21.74TB), with only 1.74TB free in the pool.

So I decided to replace one mirror with 10TB drives. I replaced the 4TB drives through the UI, the resilvers completed, and everything is back to normal with the following layout:

2x6TB
2x6TB
2x10TB
2x10TB

However, usable space has not increased. I still have 1.74TB free, where I should have at least 4-5TB more. Why is this happening?

I used to replace disks like this in TrueNAS Core with success, but this time I am running TrueNAS SCALE and am hitting this issue. What have I missed? Shouldn’t I get the extra space as soon as the resilver of the second vdev member is done? Both disks of that particular vdev have been replaced with larger drives (+6TB per drive).

What gives?
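
In case it helps with diagnosing, this is roughly what I would check from a shell, with "tank" standing in for my actual pool name:

zpool list -v tank          # per-vdev sizes; the EXPANDSZ column shows capacity ZFS sees but has not yet expanded onto
zpool get autoexpand tank   # whether the pool is set to grow onto larger devices automatically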

Probably because there (apparently) continues to be a bug in SCALE when it comes to partitioning replacement disks. See:

Use the workaround described there to address it in the short term, and log a bug to see if iX can finally fix it.


Thank you, it’s kind of worrying that they let this bug slip through…

Looking at my drives, there’s something that catches my eye:


NAME     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda        8:0    0   9.1T  0 disk
├─sda1     8:1    0     2G  0 part
└─sda2     8:2    0   9.1T  0 part
sdb        8:16   0   9.1T  0 disk
└─sdb1     8:17   0   3.6T  0 part
sdc        8:32   0   9.1T  0 disk
└─sdc1     8:33   0   3.6T  0 part
sdd        8:48   0   9.1T  0 disk
├─sdd1     8:49   0     2G  0 part
└─sdd2     8:50   0   9.1T  0 part
sde        8:64   0 238.5G  0 disk
└─sde1     8:65   0 238.5G  0 part
sdf        8:80   0   5.5T  0 disk
├─sdf1     8:81   0     2G  0 part
└─sdf2     8:82   0   5.5T  0 part
sdg        8:96   0 111.8G  0 disk
├─sdg1     8:97   0     1M  0 part
├─sdg2     8:98   0   512M  0 part
├─sdg3     8:99   0  95.3G  0 part
└─sdg4     8:100  0    16G  0 part
  └─sdg4 253:0    0    16G  0 crypt
sdh        8:112  0   5.5T  0 disk
├─sdh1     8:113  0     2G  0 part
└─sdh2     8:114  0   5.5T  0 part
sdi        8:128  0   5.5T  0 disk
├─sdi1     8:129  0     2G  0 part
└─sdi2     8:130  0   5.5T  0 part
sdj        8:144  0   5.5T  0 disk
├─sdj1     8:145  0     2G  0 part
└─sdj2     8:146  0   5.5T  0 part

sdb and sdc are the new drives, but they only have one partition, whereas the drives in the other vdevs have two. Is this a problem? The instructions say that I should expand the larger of the two partitions, but I only have one partition on each of these drives…

No, it seems SCALE has stopped creating the swap partition. That isn’t a problem.

If there’s only one, expand the one.
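
Something along these lines should do it, assuming the data partition is partition 1 on each new disk (adjust the device names to whatever lsblk shows on your system):

parted /dev/sdX resizepart 1 100%   # grow partition 1 to the end of the disk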

Alright, I did a

parted /dev/sdb resizepart 1 100%

and the same for sdc, and now TrueNAS shows I have 7.21TB free.
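
For the record, if the free space had not updated on its own after resizing, my understanding is that ZFS would still have needed a nudge to grow onto the new space, something like this (the pool name is a placeholder, and the device name should be whatever zpool status lists for the resized disk):

zpool set autoexpand=on tank    # let the pool grow automatically when devices get bigger
zpool online -e tank DEVICE     # or expand this one device explicitly right now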

It’s a bit scary, though, that this bug has persisted since January; they should really prioritize these critical, basic functions of a NAS product, especially since they run TrueNAS on their own enterprise hardware…

Thanks for the quick reply, I will now do a proper scrub as well just to make sure everything is where it should be.
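
For anyone searching later, the shell equivalent is something like this, with "tank" again being a placeholder for the pool name:

zpool scrub tank    # start the scrub
zpool status tank   # watch progress and check for any errors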


They said they’d fixed it, but apparently not. You are running 24.04.2, right?

My guess is a regression introduced by “disabling swap” in 24.04.2, which apparently means removing the dummy padding partition too.

This is going to cause headaches.

Which, if true, means that a release was made without even testing vdev expansion.


I wasn’t running 24.04.2, but I’ve now updated. The issue wasn’t mentioned in the patch notes when I checked, so I hope they’ve fixed it for good now. I won’t replace any more drives for a few weeks.

If it persists in .2, please file a ticket on it.

I can confirm that the bug still persists in Dragonfish-24.04.2. I replaced two more 4TB drives with new 10TB drives without gaining any additional space.

Annoying…

In that case:


I thought the bug was already fixed?

Ticket from November 2023.

Edit: I see @Stux’s reply above. I had forgotten it was a “minor” version bump in which swap was disabled.

Thank you, the bug has been reported as NAS-131084.