I have a pool previously consisting of four mirrored vdevs of the following capacities:
2x6TB
2x6TB
2x10TB
2x4TB
This gave me, according to TrueNAS, 23.48TB of usable space for datasets. I created a dataset, expanded it, and filled it nearly to the brim (21.74TB), leaving only 1.74TB free in the pool.
So I decided to replace one mirror with 10TB drives. I replaced the 4TB drives through the UI, the resilver completed, and everything is back to normal with the following layout:
2x6TB
2x6TB
2x10TB
2x10TB
However, usable space has not increased: I still have only 1.74TB free, where I should have gained at least 4-5TB. Why is this happening?
I used to replace disks like this in TrueNAS CORE with success, but this time I am running TrueNAS SCALE and get this issue. What have I missed? Shouldn't I get the extra space as soon as the resilver of the second vdev member finished? Both disks of that particular vdev have been replaced with larger drives (+6TB per drive).
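As a back-of-the-envelope check of what the pool should report (a sketch that ignores the ~2G swap partitions and other overhead, and uses TiB since TrueNAS reports binary units — a mirror vdev contributes the capacity of its smallest member):

```python
TIB = 2**40  # binary terabyte, what TrueNAS displays as "TB"

# drive sizes in decimal bytes, per mirror vdev
mirrors_before = [(6e12, 6e12), (6e12, 6e12), (10e12, 10e12), (4e12, 4e12)]
mirrors_after  = [(6e12, 6e12), (6e12, 6e12), (10e12, 10e12), (10e12, 10e12)]

def usable_tib(mirrors):
    # each mirror vdev contributes the capacity of its smallest member
    return sum(min(m) for m in mirrors) / TIB

before = usable_tib(mirrors_before)  # ~23.65 TiB, close to the 23.48TB shown
after = usable_tib(mirrors_after)    # ~29.10 TiB
print(f"before: {before:.2f} TiB, after: {after:.2f} TiB, gain: {after - before:.2f} TiB")
```

So the swap from 2x4TB to 2x10TB should add roughly 5.5TiB of usable space.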
Thank you, it's kind of worrying that they let this bug slip through…
When I look at my drives, something catches my eye:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 9.1T 0 disk
├─sda1 8:1 0 2G 0 part
└─sda2 8:2 0 9.1T 0 part
sdb 8:16 0 9.1T 0 disk
└─sdb1 8:17 0 3.6T 0 part
sdc 8:32 0 9.1T 0 disk
└─sdc1 8:33 0 3.6T 0 part
sdd 8:48 0 9.1T 0 disk
├─sdd1 8:49 0 2G 0 part
└─sdd2 8:50 0 9.1T 0 part
sde 8:64 0 238.5G 0 disk
└─sde1 8:65 0 238.5G 0 part
sdf 8:80 0 5.5T 0 disk
├─sdf1 8:81 0 2G 0 part
└─sdf2 8:82 0 5.5T 0 part
sdg 8:96 0 111.8G 0 disk
├─sdg1 8:97 0 1M 0 part
├─sdg2 8:98 0 512M 0 part
├─sdg3 8:99 0 95.3G 0 part
└─sdg4 8:100 0 16G 0 part
└─sdg4 253:0 0 16G 0 crypt
sdh 8:112 0 5.5T 0 disk
├─sdh1 8:113 0 2G 0 part
└─sdh2 8:114 0 5.5T 0 part
sdi 8:128 0 5.5T 0 disk
├─sdi1 8:129 0 2G 0 part
└─sdi2 8:130 0 5.5T 0 part
sdj 8:144 0 5.5T 0 disk
├─sdj1 8:145 0 2G 0 part
└─sdj2 8:146 0 5.5T 0 part
sdb and sdc are the new drives, but they only have one partition, whereas the disks in the other vdevs have two. Is this a problem? The instructions say that I should expand the larger of the two partitions, but I only have one partition on both drives…
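For anyone else hitting this, a minimal sketch of the manual workaround, assuming the pool is named tank and the new disks are sdb/sdc as in the listing above (the echo prefixes make it a dry run; remove them only after double-checking the device names):

```shell
#!/bin/sh
# Hypothetical sketch: grow the replaced disk's data partition to fill
# the whole drive, then tell ZFS to claim the new space. Pool name
# "tank" and disks sdb/sdc are assumptions -- substitute your own.
# The "echo" prefixes make this a dry run; remove them to execute.
POOL=tank
for DISK in sdb sdc; do
    # grow partition 1 to the end of the disk (the new drives have only
    # the single data partition, no 2G swap partition before it)
    echo parted -s "/dev/$DISK" resizepart 1 100%
    # make the kernel re-read the partition table
    echo partprobe "/dev/$DISK"
    # expand this vdev member onto the now-larger partition
    echo zpool online -e "$POOL" "${DISK}1"
done
```

With the pool's autoexpand property set to on, the extra space should show up as soon as both members of the mirror have been expanded.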
I did that for sdb and the same for sdc, and now TrueNAS shows I've got 7.21TB free.
It's a bit scary, though, that this bug has persisted since January; they should really prioritize these critical, basic functions of a NAS product, especially since they use TrueNAS in their own enterprise hardware…
Thanks for the quick reply, I will now do a proper scrub as well just to make sure everything is where it should be.
I wasn't running 24.04.2, but I've now updated. A fix wasn't mentioned in any patch notes when I checked, so I hope they've fixed it for good now. I won't replace any more drives for a few weeks.
I can confirm that the bug still persists in Dragonfish-24.04.2: I replaced two more 4TB drives with new 10TB drives without gaining any additional space.