Spare replaces bad drive, but still remains as a spare

I’m running TrueNAS Scale (Dragonfish-24.04.2.3).
I created a RAIDZ1 pool (3 wide, 1 spare) and it had been running fine. One of the drives went bad, and TrueNAS emailed me (so the email config worked). By the time I logged in to the TNS dashboard, the spare had already taken over for the bad drive and was 1.4% into resilvering.

Now that resilvering is complete, I noticed the spare drive is still listed as a spare.

Q: Is this normal?

Thanks all!

Yes. ZFS does not make decisions about replacing drives; that is left to the administrator. The spare stands in for the failed drive, but the pool stays in this state until you act.

Thanks etorix!
If this is “normal” for ZFS, then I’m fine with it :grinning:

You have 3 choices now:

  1. Replace the faulty drive; once the new drive resilvers, the spare goes back to being a spare.
  2. Detach the faulty drive, so that the current spare becomes a permanent member of the vdev. You then have no more spare (though you can add another one later).
  3. Ignore the problem until something else goes wrong :slight_smile:

That is one nice thing about ZFS. If you decide that the currently active spare is the best drive to remain in the pool, just detach the faulty drive.

I don’t have the GUI or command-line instructions handy to make this happen. But if you want to go that route, someone here can likely assist you.
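For the command-line route, the two options above might look something like this. This is a hedged sketch: the pool name `tank` and the device names `old-disk` / `new-disk` are placeholders, so check `zpool status` first to see the real names on your system.

```shell
# Confirm the actual pool and device names before running anything.
# The faulty disk and the in-use spare will both appear in this output.
zpool status tank

# Option 1: replace the faulty drive with a new one.
# After resilvering onto new-disk completes, the hot spare
# automatically returns to its spare role.
zpool replace tank old-disk new-disk

# Option 2: detach the faulty drive instead.
# The spare is promoted to a permanent member of the RAIDZ1 vdev,
# and the pool is left without a spare (one can be added back later).
zpool detach tank old-disk
```

In TrueNAS SCALE the equivalent actions are also available from the pool's device management screen in the web UI, which is generally the safer path since it resolves the device names for you.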

Thanks for the step-by-step Arwen! :+1: