My system was originally TrueNAS Core 13, which I then upgraded to SCALE (version 22, I think) and then from 22 to 24.10. Due to a bug on my boot pool, where the system thought it had an exported pool on it, the upgrade to 25.04 failed. The JIRA ticket for that issue said the best approach was to back up my config, do a fresh install of 25.04.2.4, and restore the config, which worked.
However, today I came to replace a failed drive and ran into trouble: the replacement drive appeared in the list, but when I went to do the replace, it said the drive was already in use in the “data” pool.
I powered down the NAS, started it up without the new drive, checked the drive in my desktop PC to make sure it was clean, then powered down the NAS again and plugged the new drive back in. This time I was able to do the replace, and it is now resilvering.
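For reference, when I checked the drive on the desktop PC the goal was just to make sure no old partition data or signatures were left on it; something like the following would be the equivalent from a Linux shell (sdX is a placeholder for whatever the replacement drive enumerated as, and I'm not claiming this is the officially recommended procedure):

# Identify the disk by model/serial first so the wrong one doesn't get wiped
lsblk -o NAME,MODEL,SERIAL,SIZE

# Remove any leftover filesystem/RAID signatures
wipefs -a /dev/sdX

# Zap the GPT and protective MBR as well
sgdisk --zap-all /dev/sdX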
However, now I notice that TrueNAS thinks one drive from my “Archive” pool is in my “Data” pool. The common thread here seems to be that TrueNAS SCALE addresses my drives as sda through sdm instead of by GUID like TrueNAS Core did.
Originally I had 4x WD Red Plus/Pro 8TB drives in a RAIDZ1 as the “Data” pool, and 6x Seagate Archive 8TB drives (I know, SMR is bad, but I bought them before the issues were widely known and have never had to resilver them) in a RAIDZ2 as the “Archive” pool.
I also have two WD Red 2TB SATA SSDs (one M.2 SATA and one 2.5" SATA) in a mirror as the “Fast” pool, which I use for Apps, VMs, and the System Dataset.
The boot-pool is an SK hynix SSD connected via USB; it is not a normal flash drive.
The HBA is a genuine LSI 9207-8i, and since I have more than eight disks, some are on the Intel Xeon D-1541’s SATA controller, which is in AHCI mode.
I’ve had two failures of the WD Red 8TB drives in the “Data” pool. I replaced the first one successfully with a Seagate IronWolf 12TB drive (serial ZR909A6L), and this issue has appeared after the second failure, while replacing with another IronWolf 12TB (serial ZR909A41).
The drives with serials beginning Z840E are the 6x Seagate Archive drives, and they should all be members of the “Archive” pool. However, Z840EXX5 is now showing as being in the “Data” pool, which is not correct.
The “Data” pool currently consists of the 2x WD Red Pro/Plus 8TB drives with serials beginning VYJ, and the 2x IronWolf 12TB drives with serials beginning ZR909A.
For some reason the TrueNAS UI thinks there are now 5 drives in “Data” and 5 drives in “Archive”. However, the ZFS CLI appears to know the true state:
root@freenas[~]# zpool status
  pool: archive
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 13:59:34 with 0 errors on Wed Oct  1 14:00:17 2025
config:

        NAME                                          STATE     READ WRITE CKSUM
        archive                                       ONLINE       0     0     0
          raidz2-0                                    ONLINE       0     0     0
            ata-ST8000AS0002-1NA17Z_Z840EZ8J-part2    ONLINE       0     0     0
            ata-ST8000AS0002-1NA17Z_Z840EXKY-part2    ONLINE       0     0     0
            ata-ST8000AS0002-1NA17Z_Z840EMQD-part2    ONLINE       0     0     0
            ata-ST8000AS0002-1NA17Z_Z840EX45-part2    ONLINE       0     0     0
            ata-ST8000AS0002-1NA17Z_Z840EXX5-part2    ONLINE       0     0     0
            ata-ST8000AS0002-1NA17Z_Z840EX5F-part2    ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:02:47 with 0 errors on Fri Oct 17 03:47:49 2025
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdm3      ONLINE       0     0     0

errors: No known data errors

  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Oct 20 11:32:29 2025
        24.7T / 25.4T scanned at 788M/s, 21.2T / 25.4T issued at 675M/s
        5.07T resilvered, 83.58% done, 01:47:49 to go
config:

        NAME                                          STATE     READ WRITE CKSUM
        data                                          DEGRADED     0     0     0
          raidz1-0                                    DEGRADED     0     0     0
            sdi2                                      ONLINE       0     0     0
            replacing-1                               DEGRADED     0     0     0
              14030011023509234989                    FAULTED      0     0     0  was /dev/sde2
              f3634186-4f97-46af-8a40-9903c847684b    ONLINE       0     0     0  (resilvering)
            da00085c-ee41-424a-bdb2-b79a533f0d3e      ONLINE       0     0     0
            sdl2                                      ONLINE       0     0     0

errors: No known data errors

  pool: fast
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:14:32 with 0 errors on Sun Oct  5 00:14:34 2025
config:

        NAME        STATE     READ WRITE CKSUM
        fast        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdk2    ONLINE       0     0     0
            sdj2    ONLINE       0     0     0

errors: No known data errors
root@freenas[~]#
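In case it helps, I'm assuming I can cross-check which physical drive each of those names actually refers to with something like this (sdi2 is just one of the “data” members from the output above; the same commands should work for any of them):

# Show pool members by full path instead of the short names above
zpool status -P data

# Map kernel names to model, serial and partition UUID
lsblk -o NAME,MODEL,SERIAL,PARTUUID

# Read the ZFS label straight off a partition to see which pool it claims to belong to
zdb -l /dev/sdi2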
One thing I noticed is that in TrueNAS Core the drives were listed by GUID, while in TrueNAS SCALE they’ve changed to sd* device names for the “data” pool and to an ID made up of model and serial number for the “Archive” pool.
I have the drives set to power down after 30 minutes of inactivity (yes, I know this is frowned upon), but most of my drives have lasted the best part of a decade doing so, and they only spin up a couple of times per day. I’m fairly sure ZFS is resilvering the correct drives: feeling them in the case, which has good airflow, the 6x Seagate Archive drives are cold and not vibrating, while the 2x Seagate IronWolf and 2x WD Red drives are warm and vibrating.
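Rather than just going by touch, I’m also planning to watch per-disk activity and temperatures while the resilver runs, with something like this (sdX is a placeholder for one of the Archive drives):

# Per-vdev I/O stats every 5 seconds – the Archive disks should stay idle
zpool iostat -v data 5

# Spot-check an Archive drive's temperature and power state
smartctl -a /dev/sdX | grep -i temperature
hdparm -C /dev/sdX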
Maybe this will sort itself out once the resilver finishes, but if it doesn’t, do I have to do something like what’s described in “Cannot replace disk due to TrueNAS Scale using sd# names for disks”?
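My (possibly wrong) understanding from that thread is that the usual fix is to export the pool and re-import it against the persistent partition IDs instead of the sd* names, roughly like this, and only after the resilver completes and with backups current. Please correct me if that’s off base:

# With the resilver finished and anything using the pool stopped:
zpool export data

# Re-import using the stable partition UUIDs rather than sd* device names
zpool import -d /dev/disk/by-partuuid data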
Also, I have backups of both pools on an old HP N36L Gen7 MicroServer, running 3x 24TB Exos drives as a backup of “Data” and a single 28TB Exos as a backup of “Archive”, but I’d prefer a fix that doesn’t require rebuilding either pool.
Any ideas?




