Need clarity on whether I have a spare issue or the spare disk is failing along with a primary, and what the next steps are

Please see attached image. (Edit: it looks like I am not allowed to embed images or links to screenshots yet…)

It looks like my da5 disk is failing soon: it is ONLINE but reporting bad sectors, and it is listed under a spare entry. The spare disk da6 is listed in that same section, but I also have bad-sector alerts for da6, so the spare is potentially failing soon too. I'm not sure whether da5 was ever a regular member of the pool or was a spare at one point… it's been a while. It's a 10-disk pool. If da5 does need replacing, how do I get my new spare da11 in there to replace both da5 and da6?

thanks!

Version:

TrueNAS-13.0-U6.2


You should have received an introductory email. Work through the TrueNAS bot tutorial and view a few other posts, and you should get rights to post images.

Posting the results of zpool status -v will let us see your current pool status and layout. Please use Preformatted text (</>) or Ctrl+E for the results.


# zpool status -v
  pool: Datastore2
 state: ONLINE
  scan: resilvered 6.34M in 00:00:02 with 0 errors on Mon Mar  9 15:32:45 2026
config:

        NAME                                              STATE     READ WRITE CKSUM
        Datastore2                                        ONLINE       0     0   0
          raidz1-0                                        ONLINE       0     0   0
            gptid/44166c95-6f24-11ef-9874-000c29560afb    ONLINE       0     0   0
            gptid/48d982ab-6f24-11ef-9874-000c29560afb    ONLINE       0     0   0
            gptid/4fb803d6-6f24-11ef-9874-000c29560afb    ONLINE       0     0   0
            gptid/45f22c39-6f24-11ef-9874-000c29560afb    ONLINE       0     0   0
            spare-4                                       ONLINE       0     0   0
              gptid/5d79e033-6f24-11ef-9874-000c29560afb  ONLINE       0     0   0
              gptid/67ea28a5-6f24-11ef-9874-000c29560afb  ONLINE       0     0   0
            gptid/58e24a2b-6f24-11ef-9874-000c29560afb    ONLINE       0     0   0
            gptid/5e740d31-6f24-11ef-9874-000c29560afb    ONLINE       0     0   0
            gptid/6940707e-6f24-11ef-9874-000c29560afb    ONLINE       0     0   0
        spares
          gptid/67ea28a5-6f24-11ef-9874-000c29560afb      INUSE     currently in use
          gptid/4de23261-1bfd-11f1-80e8-000c29560afb      AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE

  scan: scrub repaired 0B in 00:00:09 with 0 errors on Fri Mar  6 03:45:09 2026
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors

  pool: tst_16TB_dskA1
 state: ONLINE
...
CRITICAL
Device: /dev/da6 [SAT], 608 Currently unreadable (pending) sectors.
2026-03-09 15:32:55 (America/New_York)

CRITICAL
Device: /dev/da6 [SAT], 608 Offline uncorrectable sectors.
2026-03-09 15:32:55 (America/New_York)

CRITICAL
Device: /dev/da5 [SAT], 400 Currently unreadable (pending) sectors.
2026-03-09 15:32:57 (America/New_York)

CRITICAL
Device: /dev/da5 [SAT], 400 Offline uncorrectable sectors.
2026-03-09 15:32:57 (America/New_York)

camcontrol devlist may help make sense of what's what.
But basically, the current da5 and da6 are both due for RMA or replacement: too many bad sectors. smartctl -x /dev/daX should confirm the diagnosis.
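To tie this together, here is a rough sketch of the identification and replacement steps on TrueNAS CORE 13 (FreeBSD). The gptid values are copied from your zpool status output above; the /dev/da11 path for the new disk is an assumption based on your post. Note that on TrueNAS the GUI replace flow (Storage → Pools → Status → Replace) is generally preferred over the raw CLI, because it partitions the new disk and creates the gptid label for you:

```shell
# Map the gptid labels shown in 'zpool status' to physical daX devices:
glabel status | grep gptid

# List controllers and attached disks to identify da5, da6, and the new da11:
camcontrol devlist

# Confirm the SMART diagnosis on each suspect disk:
smartctl -x /dev/da5
smartctl -x /dev/da6

# CLI replacement sketch (GUI is safer; /dev/da11 path is assumed).
# The gptid below is da5's label from the spare-4 section of your output:
zpool replace Datastore2 gptid/5d79e033-6f24-11ef-9874-000c29560afb /dev/da11
# After the resilver completes, the INUSE spare (da6) detaches back to the
# spares list automatically; then remove the failing spare from the pool:
zpool remove Datastore2 gptid/67ea28a5-6f24-11ef-9874-000c29560afb
```

Since da6 is already resilvered in as the active spare for da5, replacing da5 and then removing da6 from the spares list leaves the pool on healthy disks, with da11 either in the data vdev or available as the new spare depending on how you assign it.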
