Vdev expansion using a slightly smaller drive

Hi,

I have an existing RAIDZ2 vdev where all the drives are the same vendor and model: Samsung SSD, 15.3 TB. I’m trying to expand this vdev, but the only 15.3 TB SSDs I managed to find are from Kioxia; there is an SSD shortage on the Samsung side. The problem is that the Kioxia drive’s actual size is ~2 GB smaller than the Samsung’s :slight_smile:

nvme6n1 259:12 0 15362991415296 0 disk
└─nvme6n1p1 259:24 0 15362990014464 0 part
nvme3n1 259:13 0 15362991415296 0 disk
└─nvme3n1p1 259:18 0 15362990014464 0 part
nvme11n1 259:22 0 15362991415296 0 disk
└─nvme11n1p1 259:26 0 15362990014464 0 part
nvme12n1 259:25 0 15360950534144 0 disk

The last one (nvme12n1) is the Kioxia drive, all the rest are Samsung drives.
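For the record, the gap can be computed directly from the byte counts in the lsblk output above. This little sketch subtracts the Kioxia’s whole-disk size from a Samsung member’s partition size (the partition is what ZFS actually uses):

```shell
# Byte sizes copied from the lsblk output above.
samsung_part=15362990014464   # nvme3n1p1, a Samsung member's ZFS partition
kioxia_disk=15360950534144    # nvme12n1, the whole Kioxia drive
echo $(( samsung_part - kioxia_disk ))   # → 2039480320 bytes, ~1.9 GiB short
```

So even using the entire Kioxia disk unpartitioned, it comes up roughly 1.9 GiB short of the existing members’ partitions.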

Using the GUI I can’t expand the vdev; no candidate drive is displayed in the Expand section. The Kioxia drive is detected and shown as an unused drive, but it’s not listed as available for vdev expansion. I suppose the reason is its smaller size.

Is there a way to extend the vdev with the Kioxia drive? Maybe some force option from the CLI? It seems silly not to be able to use it, considering the very small size difference. I’m running TrueNAS Community 24.10.0.2.
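For context, a sketch of what the CLI route looks like (with a hypothetical pool name `tank` and vdev name `raidz2-0`): RAIDZ expansion is a plain `zpool attach`, but the size check is enforced by ZFS itself, so no force flag gets around a too-small device:

```shell
# Hypothetical pool/vdev names; RAIDZ expansion is done with zpool attach.
zpool attach tank raidz2-0 /dev/nvme12n1
# ZFS rejects new members smaller than the existing ones, so this is
# expected to fail with an error along the lines of "device is too small".
# The -f flag only overrides things like a disk that appears to be in use;
# it does not override the minimum-size requirement.
```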

Thanks.

These days, even the smallest difference in capacity can and will cause this problem.
In the past, TrueNAS could compensate for this by using some of the space from the automatically created 2 GB swap partitions, but those partitions are no longer created.

I’m not a big expert in this area, and I don’t know whether there is a workaround without losing data… but I would probably back up the data somewhere else, destroy the pool, and recreate it. When creating the new pool, the system should then size itself to the drive with the smallest capacity.

By the way… a RAIDZ2 that is only 3-wide now, and 4-wide later, barely makes sense.
Mirrors would also be worth considering.

The existing RAIDZ2 has 11 drives in it; I only pasted the last few to keep the size difference easy to spot.

Wow… impressive hardware…
A 12-wide Z2 vdev is quite something. If these were HDDs, that would already be above the generally recommended width because of the very long resilver times. But I guess that doesn’t apply in the same way to NVMe devices.
With the total capacity involved, it might of course be difficult to temporarily park the data somewhere else…

Looks like the array was created while TrueNAS SCALE applied no padding at all (padding is back now, just no longer as a swap partition). And unfortunately, the Kioxia drive is just 1.9 GB smaller.

nvme11n1     259:22 0 15362991415296 0 disk
└─nvme11n1p1 259:26 0 15362990014464 0 part
nvme12n1     259:25 0 15360950534144 0 disk

2 GB of padding would have saved the day, but who could have predicted that removing it was a bad move? :roll_eyes:
There is no solution other than getting a larger drive, or backing up the whole array (HDDs…) to destroy and rebuild it with the Kioxia drive in.
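The backup-and-rebuild route can be sketched with `zfs send`/`zfs recv` (hypothetical pool names: `tank` for the NVMe pool, `backup` for a temporary HDD pool):

```shell
# Pool and snapshot names are hypothetical. Snapshot everything
# recursively, replicate it to the temporary pool, rebuild, send it back.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F backup/tank

# After verifying the copy: destroy tank, recreate it 12-wide including
# the Kioxia drive (ZFS sizes the vdev to its smallest member), then:
zfs send -R backup/tank@migrate | zfs recv -F tank
```

The `-R` replication stream preserves datasets, snapshots, and properties, which is why it is the usual tool for this kind of whole-pool migration.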


Yes, these are the only ways.

While ZFS is great in many areas, it is not the most flexible or user-friendly file system, volume manager, or RAID manager.