Boot pool block size

Hi all,
I have a boot pool consisting of two mirrored 128 GB SSDs. One died and was replaced under warranty with a 500 GB SSD. I “replaced” the drive and it resilvered without issue, but now I am getting a performance degradation warning: “One or more devices are configured to use a non-native block size. Expect reduced performance.” Does that mean I need to get a different-brand 128 GB drive (PNY does not make them anymore), or is it going to be OK?

  pool: freenas-boot
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: resilvered 51.1G in 00:04:08 with 0 errors on Fri Aug 30 10:01:34 2024
config:

NAME          STATE     READ WRITE CKSUM
freenas-boot  ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    sdb2      ONLINE       0     0     0
    sdg3      ONLINE       0     0     0  block size: 512B configured, 4096B native
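
To see where the mismatch comes from, the logical vs. physical sector sizes the disks report can be checked like this (device names taken from the output above; adjust for your system):

# Sector sizes as seen by the kernel
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdb /dev/sdg

# Drive-reported sizes, e.g. "512 bytes logical, 4096 bytes physical"
smartctl -i /dev/sdg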

I think your original pool was configured with 512B blocks. Not sure if there is any way to fix it.
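
If you want to confirm, the pool’s ashift (the block size ZFS created the vdev with) can be checked directly; pool name taken from the output above, and note that on TrueNAS zdb may need to be pointed at the right cachefile:

# ashift=9 means 512B blocks, ashift=12 means 4096B blocks
zpool get ashift freenas-boot
zdb -C freenas-boot | grep ashift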

Perhaps don’t mirror the boot pool and just be ready to restore with a saved config.


That config goes back to a USB stick from early FreeNAS days; it got moved to SSD later on…

Removed the 500 GB drive and put in a 120 GB drive (same as the original); the mirror resilvered without any complaints.

Spoke too soon on the problem being resolved… I had to reboot the system, and on bootup got:

error: symbol 'grub_disk_native_sectors' not found.
Entering rescue mode.

I was able to boot up by switching the order of the disks (swapping cables), but this is not a solution and defeats the purpose of mirroring. How do I update GRUB so the system would boot from either one of my boot disks?
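
On a generic Debian-style system (not TrueNAS-specific advice; TrueNAS manages its own boot environments and may overwrite manual changes on upgrade), the usual way to make both mirror members bootable is to install GRUB onto each disk. A sketch, assuming legacy BIOS boot and that sdb and sdg are the two members:

# Write GRUB to the boot area of each mirror member
grub-install --target=i386-pc /dev/sdb
grub-install --target=i386-pc /dev/sdg
update-grub

# UEFI systems instead need an ESP on each disk, e.g.:
# grub-install --target=x86_64-efi --efi-directory=/boot/efi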

As suggested earlier, the old disk is probably formatted 512B native while the new one is 4Kn. It isn’t “optimal”, but for the purposes of a boot pool it isn’t something to worry about. My recommendation is to just ignore it.


Either ignore it or nuke it and do a clean reinstall.

As long as it works reliably, the boot pool is fine.

Btw, you should be able to check with zfs get recordsize.
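
For reference (pool name taken from the output above):

zfs get recordsize freenas-boot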


So I should make a clean backup of the config, make a USB boot drive of the latest SCALE, boot from it, install to my current SSD boot disk, make it a mirror pool, reboot, restore the old config tarball, and that should do the trick?
Currently I can boot from just one of the mirrored disks…


You should always have an up-to-date config backup. This can easily be achieved by running @joeschmuck’s Multi-Report.
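
If you’d rather script it yourself, the config is a single SQLite database. A minimal sketch, assuming the stock /data/freenas-v1.db location and a hypothetical configbackup dataset on the data pool:

# Copy the config DB to the data pool, stamped with the date
cp /data/freenas-v1.db "/mnt/tank/configbackup/config-$(date +%Y%m%d).db"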

The message is related to the physical sector size. Older SSDs were 512B native, and newer ones are 512e/4Kn. Right now the OS is using 512 native on the old SSD and 512 emulated on the new disk. The message is just telling you that this isn’t optimal for the new SSD.

Record size is different and typically spans many blocks. I believe the default is 128 KiB; you definitely wouldn’t want to set it to 512 B, and you’d still get the same message anyway.

A clean reinstall won’t make the message go away. You would need to figure out how to format the new SSD to 512B or how to flash the old one to 4Kn. The boot pool is not a pool where performance really matters, and the message is more of a notice than a problem.
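
Whether the sector size can be switched at all depends on the drive; many consumer SATA SSDs can’t do it. For drives that do support it, the usual tools look like this (destructive, wipes the drive; device names are placeholders):

# SAS/SCSI drives (sg3_utils): low-level format to a new logical sector size
sg_format --format --size=4096 /dev/sdX

# NVMe drives (nvme-cli): list supported LBA formats, then reformat
nvme id-ns -H /dev/nvme0n1 | grep 'LBA Format'
nvme format /dev/nvme0n1 --lbaf=1    # index of the 4096-byte format varies per drive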


This may be a BIOS issue even before it becomes a GRUB issue. See here:


This is probably the way to go. I am running on a SAS2008 card, so I will switch the boot SSDs to a hardware mirror and keep the data drives as ZFS mirrors. This will force me to reinstall, but that is OK. I will also need to make sure I am in UEFI boot mode.
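
Quick way to check which mode the current install booted in (standard Linux sysfs check, nothing TrueNAS-specific):

# This directory only exists when the kernel was booted via UEFI
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot"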
