Adding drives to boot pool - why is the boot pool a single stripe/mirror vdev?

So this might be a bit of a weird question.

I’m running a mirrored boot pool: an old Samsung 840 (60% health; it’s old and was the system drive in my main PC for a good 10 years) and a new WD Blue SN510. Both are 250GB drives.

I have recently acquired several used 120GB SSDs, and in this time of disk price inflation, I was hoping to swap the larger, more useful 250GB drives for the smaller drives.

Obviously, I cannot simply add a 120GB drive to the pool and remove the 250GB drives: it is too small.

But why can’t I add a pair of 120GB drives as a second mirror and then remove the original mirror, like in any other pool made purely of mirrors? Why is the boot pool stuck as a single vdev? Or am I being incredibly dense? (It would not be the first time.)

For context, the total amount of storage on the boot pool is a single boot environment for 25.04.2.6, at 3.02GiB.

As far as ZFS is concerned, you could, so this could be done at the shell. But what that wouldn’t do is take care of the boot loader, and I don’t know how well Grub would cope with such a pool, even if you installed it on the other devices.
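For reference, a minimal sketch of what that shell-level operation would look like. The device names and the `mirror-0` vdev label are hypothetical (check `zpool status` for the real ones); `boot-pool` is the default pool name TrueNAS uses. Note that none of this touches the EFI partitions or the boot loader on the new disks.

```shell
# Hypothetical device names; boot-pool is the default TrueNAS boot pool name.
# Add the two 120GB SSDs as a second mirror vdev:
zpool add boot-pool mirror /dev/sdc /dev/sdd

# Evacuate and remove the original mirror (requires feature@device_removal):
zpool remove boot-pool mirror-0

# Check the evacuation/removal progress:
zpool status boot-pool
```

Again, ZFS itself would accept this; setting up Grub/EFI on the new disks by hand is exactly the part the installer normally handles for you.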

Other than weirdness with the boot loader, I think the answer is “because there’s no reason for it to have more than one vdev.”[1] In odd situations like yours, do a clean install, upload a saved config file, and you’re good to go.


  1. Obviously yours is a situation where it’d be helpful, but I’m sure you realize it’s a niche case. ↩︎


Thanks Dan. I figured all that as the way forward, but it does involve taking a system offline, which I would have thought was a shortcoming in enterprise software.

Fair, but again, it’s a pretty niche situation.

I certainly wouldn’t be opposed to the system including more robust boot pool management, though. But whether grub can boot from a pool that’s had a vdev removed is something I don’t know.

I’m sorry, I should have acknowledged: yes this is niche. I was just wondering if this was one of those ideas for the devs that was asked about the other month in relation to managing drive costs. Less of a moan, more of an ask for confirmation and suggestion of how to improve it.

If Grub can’t handle it then fair enough.

If the system can never go offline, then you need an HA system. Otherwise, downloading the config file, doing a reinstall on the new drives, and then uploading the config is the best way.

Perhaps you could do this as part of a planned update when you next decide to update your system?

FWIW I’ve already done this (installed TrueNAS on the new disks in an old PC, installed them in the server, and uploaded the new config). Took 30 minutes, whilst doing other stuff.

Turning the server off wasn’t an issue for me, but thanks for the advice 🙂

I played around with a mirrored boot pool some while ago to evaluate the benefit of having one.

From the point of view of ZFS, it is not different from any other pool in the system.

However, the challenges I had were related to the BIOS/UEFI implementation.

By default, you need to select which device to boot from, and for that you need to designate one or both of the disks as boot devices. However, because the BIOS/UEFI isn’t aware of ZFS, and because you can’t use hardware RAID1 for the boot disks without interfering with ZFS, the boot can (and will) fail as soon as the faulted drive is the primary boot device.

If the primary boot drive is removed prior to booting and assuming the second drive is assigned as the second boot drive, then booting will not be an issue.

If the boot drives are on SATA ports with hot-swap capability enabled, then you should be able to swap a drive while under power, and ZFS should take care of resilvering the pool. I don’t know whether the BIOS/UEFI needs to be adjusted on the next reboot, but if the new drive is the primary boot device, you also won’t be able to boot from it until the resilvering has completed.
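If the resilver doesn’t kick off automatically after the hot-swap, a manual replace would look roughly like the sketch below. The partition names are hypothetical, and `boot-pool` is assumed to be the TrueNAS default boot pool name.

```shell
# Hypothetical names for the failed member and its replacement.
zpool replace boot-pool /dev/sda3 /dev/sdb3

# Watch the resilver; the pool is healthy again once it reports completion:
zpool status boot-pool
```

Worth noting: ZFS only rebuilds the ZFS partition. The EFI/boot partition on the replacement disk still has to be recreated separately, which is one more reason the new drive isn’t immediately bootable.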


There is a possible solution:

There are a few wrinkles with temporarily having 2 mirror vDevs in the boot-pool and then removing the old one.

The ZFS feature feature@device_removal needs to be enabled. In general, enabling extra ZFS features on the boot-pool is discouraged, because Grub MUST support every enabled feature or it will prevent booting.

Further, even if Grub does support feature@device_removal, the mechanism used to remove a stripe or mirror vDev is clumsy: it leaves behind an indirect vDev that maps the removed devices’ data to its new location. That mapping table is held in memory, so there is a lasting memory cost on top of what your data pools already use.
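For anyone curious, the feature state can be inspected before committing to anything. Enabling a pool feature is one-way, so checking first is cheap (pool name assumed to be the TrueNAS default):

```shell
# Check whether device_removal is already enabled on the boot pool:
zpool get feature@device_removal boot-pool

# Enabling it is irreversible and may break Grub's ability to read the pool,
# so this line is deliberately left commented out:
# zpool set feature@device_removal=enabled boot-pool

# After a removal, the leftover indirect vdev shows up in the pool layout:
zpool status boot-pool
```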

All in all, far better to backup configuration, reload TrueNAS and restore configuration.

You will have to deal with the boot loader, but that seems to be straightforward. I’ve played with this myself, as I currently have a single partitioned SSD holding both the boot-pool and an ssd-pool, and I want to get that mirrored.

How to install TrueNAS SCALE 25.10 on a partition instead of the full disk + mirror boot and data partition at a later stage

I encourage you to perform a backup before testing with production data.