When I started SCALE, disk sdg showed as in this image, but it ALSO showed as online in a different pool. I detached it from the other pool (so no screenshot of that), but now I see no unassigned disks, nor any status for this disk in this pool.
colin2000@freenas:~$ sudo zpool status -v
[sudo] password for colin2000:
  pool: backup1
 state: ONLINE
config:

	NAME                                    STATE     READ WRITE CKSUM
	backup1                                 ONLINE       0     0     0
	  7b632670-a767-43ae-a9c0-a5181e05f6fc  ONLINE       0     0     0

errors: No known data errors

  pool: backup2
 state: ONLINE
config:

	NAME                                    STATE     READ WRITE CKSUM
	backup2                                 ONLINE       0     0     0
	  35acc9f1-f6cc-4105-b803-b6b3a4b0f40a  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
	requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
	adding needed features to the relevant file in
	/etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:07 with 0 errors on Sat Jun 21 03:45:09 2025
config:

	NAME         STATE     READ WRITE CKSUM
	boot-pool    ONLINE       0     0     0
	  nvme0n1p2  ONLINE       0     0     0

errors: No known data errors

  pool: earth
 state: ONLINE
  scan: scrub repaired 0B in 01:56:28 with 0 errors on Sun May 25 01:56:30 2025
config:

	NAME        STATE     READ WRITE CKSUM
	earth       ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    sdq2    ONLINE       0     0     0
	    sdp2    ONLINE       0     0     0
	    sdg2    ONLINE       0     0     0
	    sdh2    ONLINE       0     0     0
	    sdi2    ONLINE       0     0     0
	    sdf2    ONLINE       0     0     0

errors: No known data errors

  pool: kobol
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid. Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Tue Jul 8 21:41:09 2025
	3.99T / 58.9T scanned at 91.9M/s, 802G / 58.9T issued at 18.0M/s
	0B repaired, 1.33% done, 39 days 01:21:31 to go
config:

	NAME                      STATE     READ WRITE CKSUM
	kobol                     DEGRADED     0     0     0
	  raidz3-0                DEGRADED     0     0     0
	    sdd2                  ONLINE       0     0     0
	    sdc2                  ONLINE       0     0     0
	    sda2                  ONLINE       0     0     0
	    sde2                  ONLINE       0     0     0
	    sdb2                  ONLINE       0     0     0
	    sde2                  OFFLINE      0     0     0
	    16083278097396954957  FAULTED      0     0     0  was /dev/sde2
	    sdm2                  ONLINE       0     0     0
	    sdk2                  ONLINE       0     0     0
	    sdj2                  ONLINE       0     0     0
	    sdn2                  ONLINE       0     0     0

errors: No known data errors
PS: I know I have a degraded pool. I moved recently and lost two disks, swapped in my backups, then two more failed (lucky timing!). I'm still waiting on the replacements to arrive while I sweat bullets hoping another disk doesn't fail. Needless to say, I'll keep more spare disks on hand from now on.
Based on that information, I’d say the issue for “sdg” is in the GUI. Nothing to be concerned about for the “earth” pool.
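Worth noting: on SCALE, the kernel's sdX names can shuffle between reboots (which is why ZFS itself tracks pool members by GUID, like the `16083278097396954957` entry above). If you want to confirm which physical drive is currently "sdg", one way is to match the kernel name to the drive's model and serial, a quick sketch:

```shell
# List whole disks (-d skips partitions) with model and serial number,
# so a kernel name like "sdg" can be matched to a physical drive
# regardless of sdX reshuffling across reboots.
lsblk -d -o NAME,MODEL,SERIAL,SIZE
```

The serial printed there should match the label on the drive itself, or the serial shown on the Storage → Disks page in the GUI.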
As a side note, I'd run the scrubs a bit more often, at least once a month; others run scrubs every two weeks. Pool "earth" had its last scrub on May 25th.
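If you want to run one manually between scheduled tasks, a scrub can be started from the shell (using pool name `earth` from your output):

```shell
# Kick off a scrub on the pool; it runs in the background
sudo zpool scrub earth

# Check progress at any time
sudo zpool status earth
```

The recurring schedule itself can be adjusted in the GUI, under Data Protection → Scrub Tasks on recent SCALE versions.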