TrueNAS Scale boot pool warning 'compatibility' property

Recently upgraded from TrueNAS 13 to TrueNAS Scale Dragonfish, proceeded to upgrade my pools, and am now receiving the following when performing zpool status:

  pool: freenas-boot
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Tue Jun 11 03:45:04 2024
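If you want to see what the pool is currently set to, and which predefined feature-set files the warning is referring to, something like the following should work (the pool name is taken from the output above; the exact file names under compatibility.d vary by OpenZFS version):

```shell
# Show the current value of the 'compatibility' property on the boot pool
zpool get compatibility freenas-boot

# List the predefined feature-set files the warning message points at
ls /usr/share/zfs/compatibility.d

# One way to make the warning go away is to stop constraining the pool
# at all -- only do this if you accept that the pool now permanently
# uses the newer feature flags:
# zpool set compatibility=off freenas-boot
```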

I have not rebooted the system yet, but I have a backup of the latest configuration. Just curious if this is anything to be wary about, or something I need to resolve.

In general, you never want to upgrade your boot pool’s feature set.

Now that you have, and have potentially made it un-bootable, make a backup of your configuration, make a new install boot device, and reboot. If the NAS does not come back up, simply re-install from the new install boot device and restore your configuration. Then don’t upgrade your boot pool features again.

Even updating data pool features can be problematic. My philosophy is to not do so until all my boot environments allow those features. For example, a 2-year-old boot environment may not support ZSTD compression, so I would not enable the ZSTD pool feature.
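You can check the state of a given feature flag before relying on it; a sketch, with "tank" as a placeholder pool name:

```shell
# Query the ZSTD feature flag on a pool. The value will be one of:
# disabled (not enabled), enabled (upgradeable), or active (in use,
# which may make the pool unreadable to older boot environments)
zpool get feature@zstd_compress tank
```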

However, if I no longer needed those old boot environments (and still had enough for recovery), then I might remove them. In that case, I could enable the ZSTD pool feature and use it on a ZFS dataset, or on the entire pool.


I updated both my pools to the Dragonfish feature set in the belief that such an upgrade was necessary for interrupted replications to resume (as opposed to always starting from scratch, as with CORE).

Since then, I’ve had yet another interrupted replication, which left a busy snapshot on the remote NAS and prevented further replication (instant error). I cleared the busy status with a restart of the remote machine, but that also led the two NASes to discard 1 TB of progress and restart the snapshot replication from scratch.

Presumably I either did the wrong thing by restarting the remote machine, or the replication resume feature in SCALE is not fully baked yet.
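For what it’s worth, resumable replication at the ZFS level depends on the receive side having been started with zfs recv -s; after an interruption, the partial state can be resumed manually from a saved token. A sketch with placeholder dataset names, assuming the middleware hasn’t already cleaned up the partial receive:

```shell
# On the receiving NAS: read the resume token left behind by an
# interrupted receive that was started with `zfs recv -s`
token=$(zfs get -H -o value receive_resume_token tank/backup/mydata)

# On the sending side: restart the stream from where it stopped
zfs send -t "$token" | ssh remote-nas zfs recv -s tank/backup/mydata
```

Note that explicitly abandoning the partial receive (zfs recv -A) discards this token, at which point the replication has to start over.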