Does this VDEV include this disk or not?

Can anyone help me understand/fix this? I recently migrated a running system from Core to Scale. I had this 6-disk RAIDZ2 VDEV in Core.


When I started Scale, disk sdg showed as in this image, but it ALSO showed as online in a different pool. I detached it from the other pool, so no screenshot of that, but I don't see any unassigned disks, nor do I see any status for this disk in this pool.

I have NO IDEA what is going on.

It is helpful to list miscellaneous information about the hardware and software. Others have an excellent list of commands for this, but I don't have it handy.

Could you supply the output, in CODE tags, from these commands?

zpool list -v
zpool status -v

Then also supply the hardware ports used for disk connections, and the disk make & models.
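If it helps, something like the following should gather most of that in one go (just a sketch; the lsblk columns assume a reasonably current util-linux, and the grep is only a rough filter for disk controllers):

sudo lsblk -d -o NAME,MODEL,SERIAL,SIZE,TRAN   # one line per physical disk, with model and serial
lspci | grep -iE 'sata|sas|raid'               # the controller(s)/HBA the disks hang off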

Make it
sudo zpool list -v
or
/usr/sbin/zpool list -v
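If the problem is the shell simply not finding the binary (rather than permissions), this quick check shows whether zpool is on your PATH at all; if nothing prints, use the full path or sudo as above:

command -v zpool   # prints the path if the shell can find it, nothing otherwise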

colin2000@freenas:~$ sudo zpool status -v
[sudo] password for colin2000: 
  pool: backup1
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        backup1                                 ONLINE       0     0     0
          7b632670-a767-43ae-a9c0-a5181e05f6fc  ONLINE       0     0     0

errors: No known data errors

  pool: backup2
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        backup2                                 ONLINE       0     0     0
          35acc9f1-f6cc-4105-b803-b6b3a4b0f40a  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:07 with 0 errors on Sat Jun 21 03:45:09 2025
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p2  ONLINE       0     0     0

errors: No known data errors

  pool: earth
 state: ONLINE
  scan: scrub repaired 0B in 01:56:28 with 0 errors on Sun May 25 01:56:30 2025
config:

        NAME        STATE     READ WRITE CKSUM
        earth       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdq2    ONLINE       0     0     0
            sdp2    ONLINE       0     0     0
            sdg2    ONLINE       0     0     0
            sdh2    ONLINE       0     0     0
            sdi2    ONLINE       0     0     0
            sdf2    ONLINE       0     0     0

errors: No known data errors

  pool: kobol
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Tue Jul  8 21:41:09 2025
        3.99T / 58.9T scanned at 91.9M/s, 802G / 58.9T issued at 18.0M/s
        0B repaired, 1.33% done, 39 days 01:21:31 to go
config:

        NAME                      STATE     READ WRITE CKSUM
        kobol                     DEGRADED     0     0     0
          raidz3-0                DEGRADED     0     0     0
            sdd2                  ONLINE       0     0     0
            sdc2                  ONLINE       0     0     0
            sda2                  ONLINE       0     0     0
            sde2                  ONLINE       0     0     0
            sdb2                  ONLINE       0     0     0
            sde2                  OFFLINE      0     0     0
            16083278097396954957  FAULTED      0     0     0  was /dev/sde2
            sdm2                  ONLINE       0     0     0
            sdk2                  ONLINE       0     0     0
            sdj2                  ONLINE       0     0     0
            sdn2                  ONLINE       0     0     0

errors: No known data errors
BOTTOM CAGE (port – serial – Seagate model – /dev)
1) SAS 1-P2 – 0HZ49 – Exos X18 14TB – Offline/failed
2) SAS 1-P3 – 0MC5E ZTM0MC5E – Exos X18 14TB – Offline/failed
3) SAS 1-P4 – 0M495 – Exos X18 14TB – Offline/failed
4) SAS 2-P1 – 0MD2P ZTM0MD2P – Exos X18 14TB – sdb
5) SAS 2-P2 – 0M4LX ZTM0M4LX – Exos X18 14TB – sdd
MIDDLE CAGE
1) SAS 3-P4 – HGN9 ZL2CHGN9 – Exos X18 14TB – sdj
2) SAS 4-P1 – YQ2D WSD0YQ2D – IronWolf 6TB – sdf
3) SAS 4-P2 – SHT8 ZA1ESHT8 – IronWolf 6TB – sdi
4) SAS 4-P3 – YG05 WSD0YG05 – IronWolf 6TB – sdh
5) SAS 4-P4 – Z3JB WSD0Z3JB – IronWolf 6TB – sdg
TOP CAGE
1) SAS 2-P3 – 2HDT5 ZLW2HDT5 – Exos X18 14TB – sdc
2) SAS 2-P4 – ZHDSQ ZLW2HDSQ – Exos X18 14TB – sda
3) SAS 3-P1 – KK61 ZL23KK61 – Exos X18 14TB – sdn
4) SAS 3-P2 – 0NDYR ZTM0NDYR – Exos X18 14TB – sdk
5) SAS 3-P3 – N7T91 ZL2N7T91 – Exos X18 14TB – sdm
UNDERSIDE
SAS 1-P1 – 2HE7L ZLW2HE7L – Exos X18 14TB – sde
Mobo SATA 1 – WSD0TYL6 – IronWolf 6TB – sdp
Mobo SATA 2 – WSD2NP05 – IronWolf 6TB – sdq

PS: I know I have a degraded pool. I moved recently and lost two disks, swapped in my backups, then two more failed (lucky timing!), and I am still waiting on the replacements to arrive while I sweat bullets waiting for another disk to fail. Needless to say, I'll keep more spare disks on hand from now on.
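When the replacements arrive, the rough plan (a CLI sketch only, with a placeholder for the new disk's by-id path; I'll most likely do the actual replacement through the SCALE UI) is:

sudo zpool replace kobol 16083278097396954957 /dev/disk/by-id/NEW_DISK   # NEW_DISK is a placeholder for the replacement drive
sudo zpool status kobol                                                   # then watch the resilver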

Based on that information, I’d say the issue for “sdg” is in the GUI. Nothing to be concerned about for the “earth” pool.
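If you want to double-check it outside the GUI, a couple of quick commands (just a sanity-check sketch) will confirm which physical disk actually backs that vdev member:

sudo zpool status -P earth                  # prints the full /dev path of every pool member
sudo zdb -l /dev/sdg2 | grep -m1 "name:"    # the ZFS label on sdg2 should show the pool it belongs to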

As a side note, I’d run the scrubs a bit more often. At least once a month. Others run scrubs every 2 weeks. Pool “earth” had its last scrub on May 25th.
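A one-off scrub can always be started from the shell in the meantime (in SCALE the recurring ones are normally set up as scrub tasks under Data Protection); a quick sketch:

sudo zpool scrub earth                   # start a scrub now
sudo zpool status earth | grep -A2 scan  # check progress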
