Boot Drives not using all of SSD disk space

Hi,

I was using mirrored 16GB USB sticks as my boot pool for a decade (don’t judge - lol), and a couple of weeks ago, before upgrading to Electric Eel, I did the following (a quick CLI sanity check is sketched right after the list):

  1. turn off NAS
  2. install 2 SSD
  3. NAS on
  4. used GUI to detach one USB from boot pool
  5. add one new SSD to boot pool
  6. reboot
  7. detach second USB from boot pool
  8. add second SSD to boot pool
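
For anyone repeating this, the sanity check mentioned above is just to confirm each newly added SSD has finished resilvering before detaching anything else. Assuming the standard boot pool name on SCALE (boot-pool), something like:

sudo zpool status boot-pool   # wait until the resilver is done and everything shows ONLINE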

Since that initial OS upgrade, I’m now current with 25.04.1.
NAS boots and works fine.

The problem: I just got a notification that the boot pool was >80% of drive capacity. The new SSDs for the boot pool are 256GB. I’d read that if there are two drives of different sizes, the boot pool will only reflect the size of the smallest drive. The warning message would make sense if there were still a 16GB boot pool.

Is there a setting to change or clear so that TrueNAS recognizes the full size of the boot pool?

TIA

We need to start by confirming the drive partitioning by doing the following.

  1. Determine which devices are in the boot zpool (System → Boot → Boot Pool Status, expand the disk to see the device)
  2. Open Shell (either System → Shell or SSH in)
  3. sudo /sbin/fdisk /dev/<boot device>
  4. p (print partitions)
  5. F (list free space)
  6. q (quit without saving)
  7. Copy and paste the results, formatted as Preformatted text (enclose with 3 backquotes on their own lines)

Results should look like this:

admin@KrausHaus[~]$ sudo /sbin/fdisk  /dev/sdw

Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.


Command (m for help): p

Disk /dev/sdw: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Samsung SSD 850 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 16776704 bytes
Disklabel type: gpt
Disk identifier: 2F979580-F54C-4EE9-9813-2CEA0607EA63

Device        Start       End   Sectors   Size Type
/dev/sdw1      4096      6143      2048     1M BIOS boot
/dev/sdw2      6144   1054719   1048576   512M EFI System
/dev/sdw3  34609152 976773134 942163983 449.3G Solaris /usr & Apple ZFS
/dev/sdw4   1054720  34609151  33554432    16G Linux swap

Partition table entries are not in disk order.

Command (m for help): F
Unpartitioned space /dev/sdw: 0 B, 0 bytes, 0 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Command (m for help): q

admin@KrausHaus[~]$

Not going to lie… finding the path to the boot pool is definitely beyond my pay grade. Found the following though…

Disks in the boot pool
sdb2
sdc2

zpool list provides

NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool  14.4G  7.36G  7.02G        -      224G      -    51%  1.00x    ONLINE  -
tank-tank  8.16T  4.20T  3.95T        -         -    29%    51%  1.01x    ONLINE  /mnt

Key takeaway for me is there’s 224G of expandable space in the boot pool.

Would I be correct in taking this as a next step?
zpool set autoexpand=on boot-pool

Using “zpool get autoexpand boot-pool” gives:

NAME        PROPERTY    VALUE   SOURCE
boot-pool   autoexpand  off     default

It also looks like I need to offline and online each device in the boot pool, one at a time, using “zpool offline” and “zpool online”?

Thanks for the help so far

Don’t rely on the autoexpand feature; it doesn’t work.
Press the Expand button in the UI instead.

To clarify, in the GUI the Expand button is on the Storage Dashboard and looks like it might be connected to the datasets rather than the boot pool… When I click the Expand button, the popup box says “Expand pool to fit all available disk space.”

This doesn’t appear to be focused on expanding the boot pool.

It isn’t “clarifying” if you’re giving incorrect information. You’re right that it isn’t connected to the boot pool (who cares?), but it also isn’t connected to any datasets. It’s instead connected to every other storage pool, since iX foolishly decided to break pool auto-expansion.

Almost right. Once you set autoexpand=on, you just need to run zpool online -e <zpool> <device>; there’s no need to offline the device first. Once all devices in a mirror have been expanded, the space will be available in the zpool.

And remember to set autoexpand=off afterwards so that the zpool configuration matches what the TrueNAS software is expecting.
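
For example, a minimal sketch of the whole sequence, assuming the boot pool is named boot-pool and the devices are the sdb2 / sdc2 listed above (substitute whatever Boot Pool Status actually shows):

sudo zpool set autoexpand=on boot-pool
sudo zpool online -e boot-pool sdb2
sudo zpool online -e boot-pool sdc2
zpool list boot-pool                      # SIZE should grow to the full partition; EXPANDSZ drops back to "-"
sudo zpool set autoexpand=off boot-pool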

AFAIK autoexpand works fine; it is just disabled by default.

The EXPAND buttons on the STORAGE page are for individual data zpools.

The boot-pool is no longer managed like the data zpools. You get to it via System → Boot → Boot Pool Status, but you cannot Expand from there.

Not quite. The autoexpand property still defaults to on, but for whatever nonsensical reason, iX has decided to partition replacement drives the same size as the drives they’re replacing, regardless of how big the replacements may be.
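
If you want to see which layer is undersized, a quick check along these lines works (the device name is just an example; expandsize is the same figure as the EXPANDSZ column in zpool list):

sudo fdisk -l /dev/sdb                       # compare the ZFS partition size to the raw disk size
zpool get autoexpand,expandsize boot-pool    # a non-dash expandsize means the pool isn't using all of its partition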

Thanks all, fixed.

Ran this in the shell:
zpool get autoexpand boot-pool

Then for good measure ran
zpool online -e boot-pool <DEVICE_NAME>
for both of the SSDs in the boot pool.

A quick “zpool list” confirms the full disk size for the boot pool now.

Thank you all for your assistance

Just my personal opinion on the matter: yes, but I can think of good reasons. There are a whole host of considerations; to name a few:

  • Block sizes
  • TiB vs TB definitions (see the quick arithmetic after the list)
  • Flash storage, whether thumb drives or SSDs, is all a “bit different”. The manufacturers can claim they are 480G, but every vendor ships a slightly different number of blocks because they all measure differently.
  • These types of issues do seem to be getting somewhat better over time, but it’s still a real issue IMHO.
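
To put a number on the TiB vs TB point, a quick bit of shell arithmetic (nothing TrueNAS-specific, just the decimal-vs-binary conversion):

# A "256 GB" drive is sold in decimal gigabytes; fdisk and zpool report binary GiB.
echo $(( 256 * 1000**3 / 1024**3 ))   # prints 238, i.e. a nominal "256 GB" SSD is only about 238 GiB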

Unexpected - had the same issue & same steps resolved it for me. Thanks folks!