One of my two mirrored boot-pool SSDs failed on me. After replacing the failed drive, I started seeing the error below. I searched the forum, but there aren’t many mentions of this and even fewer explanations of the cause. I have never attempted to “upgrade” ZFS on the boot volume, as I’ve read the cautions against doing so. It doesn’t appear to be causing any issues, but is there a way to address this presumed error? Or do I need to plan a reinstall sometime in the future?
That “error” has nothing to do with your disk replacement. More likely, a newer version of TrueNAS SCALE enabled an OpenZFS mechanism that deliberately restricts the pool’s feature set for compatibility reasons.
For example, GRUB can read only a subset of ZFS features. If a feature GRUB does not support ever gets used on the boot-pool, your TrueNAS SCALE server may no longer boot.
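If you want to see what is going on for yourself, here is a minimal sketch of the read-only commands I would run from a root shell. The pool name and the idea that SCALE pins the boot pool to a GRUB-compatible feature set are assumptions on my part, so check the output against your own system:

```
# Show pool health plus any feature-flag / compatibility notice.
# The "some supported features are not enabled" text is informational,
# not a fault in the mirror itself.
zpool status boot-pool

# Show whether a compatibility restriction is set on the boot pool.
# (Assumption: recent SCALE releases restrict this to a GRUB-safe
# feature set; "off" would mean no restriction at all.)
zpool get compatibility boot-pool
```

Neither command changes anything on the pool, so they are safe to run just to gather information.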
I don’t know why it only started showing up now, though if it was not present before your disk replacement, my guess is that a SCALE update you applied introduced it.
I would:
Make a copy of your TrueNAS SCALE configuration and keep it off the server
Prepare to re-install the same version (i.e. make a bootable flash drive)
Reboot
If everything is fine after reboot, then you are good. If not, you are ready to re-install.
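For the first two steps, the GUI configuration export is the supported route, but a rough CLI sketch of what I mean would look like the following. The hostname, the ISO filename, and the /dev/sdX target are placeholders, and the /data paths are the long-standing defaults rather than something I can guarantee for every install:

```
# Copy the config database (and the secret seed it needs) off the box.
scp root@truenas.local:/data/freenas-v1.db .
scp root@truenas.local:/data/pwenc_secret .

# From another Linux machine, write the installer to a flash drive.
# Double-check the target device before running dd - it is destructive.
dd if=TrueNAS-SCALE-VERSION.iso of=/dev/sdX bs=4M status=progress conv=fsync
```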
Thanks, the system continues to boot just fine but this message concerns me.
I have no issue reinstalling, which I hope will eliminate this message. I believe ZFS on the boot-pool can only be upgraded from the command line, and I can say with confidence that I never did that, which is why this message troubles me.
As I said, it is a new OpenZFS feature that tries to protect pools from being upgraded beyond what the user (or the bootloader) can handle. It is also possible that a certain feature is enabled but UNUSED, in which case GRUB is still fine.
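You can check that distinction yourself: each feature flag reports disabled, enabled (available but never used), or active (actually written to disk), and only active features matter to GRUB. A minimal sketch, assuming the default boot-pool name:

```
# List every feature flag and its state on the boot pool.
# "enabled" means available but unused; "active" means data using that
# feature has been written, which is what the bootloader cares about.
zpool get all boot-pool | grep 'feature@'
```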
As many here will tell you, don’t mess with the boot-pool’s attributes (such as upgrading it) or with the boot-pool dataset attributes. TrueNAS is an appliance-style package of software, meant to be used as is.
I’m in the exact same boat. TrueNAS SCALE 24.10, all was working fine until a few days ago, when I got a message that one drive in the boot pool was bad.
I detached the drive using SSH (there is no way to do it from the GUI), powered off, swapped the bad drive (since I do not have spare SATA ports), booted up, and attached the new drive to the boot pool.
Resilvering was fast with no error messages.
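For anyone who lands here later, the commands involved are roughly the ones below. The device names are placeholders from my own notes, and this sketch deliberately skips the partitioning and EFI boot-loader copy that a boot disk also needs, so treat it as an outline rather than a recipe:

```
# Remove the failed member from the boot-pool mirror (placeholder device).
zpool detach boot-pool sdb3

# After physically swapping the disk, mirror the surviving member onto
# the new device; resilvering starts automatically.
zpool attach boot-pool sda3 sdb3

# Watch the resilver progress and final state.
zpool status boot-pool
```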
Then I guess I’ll just leave it like that. It would be nice, though, to have some sort of explanation as to why that happened when changing a drive in the boot pool.