Replacing failed device in boot mirror

So I replaced a failed device in a boot mirror on my SCALE system and noticed something.

The boot process took a lot longer because it failed to initialise the swap space, so I looked at the pool status and got:

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:35 with 0 errors on Mon Apr 22 20:34:10 2024
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdk3    ONLINE       0     0     0

Notice how the original drive has the partition number on it, while the replaced drive doesn’t.

Running fdisk shows that the replaced drive doesn’t have any partitions, which I presume is why the swap failed to initialise. Is this a bug?
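A quick way to sanity-check this is to count partition rows in `lsblk` output. A minimal sketch — the helper name is mine, and `sdj`/`sdk` are the device names from the pool status above:

```shell
# count_partitions: count "part" rows in `lsblk -ln -o TYPE <dev>` output.
# Hypothetical helper for illustration; on the real box you'd feed it lsblk:
#   lsblk -ln -o TYPE /dev/sdj | count_partitions   # 0 on the bare disk
#   lsblk -ln -o TYPE /dev/sdk | count_partitions   # >0 on the original disk
count_partitions() {
    grep -c '^part$' || true
}
```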

And how did you do that? Did you manually run zpool replace, by any chance?


A bit off-topic, but why would you need a mirrored boot pool?
Save your config from time to time and you can always replace the whole boot disk without any issues.
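For what it’s worth, that habit can be scripted. A minimal sketch, assuming the config database lives at `/data/freenas-v1.db` (verify the path on your version) and a hypothetical backup dataset:

```shell
# backup_config <src-db> <dest-dir>: copy the config DB with a dated name.
# Illustrative helper; both example paths below are assumptions.
backup_config() {
    mkdir -p "$2"
    cp "$1" "$2/truenas-config-$(date +%Y%m%d).db"
}

# On a real system (adjust paths to your setup):
#   backup_config /data/freenas-v1.db /mnt/tank/backups
```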

To be able to recover from an inconvenient boot device failure in 5 minutes instead of two hours; and to be able to recover with zero downtime from a more conveniently-timed boot device failure.


It is also in the recommendations - I’m just sorting out my kit for a first install - and the instructions say very clearly:

We do not recommend installing TrueNAS on a single disk or striped pool unless you have a good reason to do so.


users often boot TrueNAS systems from 2.5" SSDs and HDDs (often mirrored for added redundancy).

That’s why I’ve planned to do it.

I did! The GUI kept giving me an error about an empty device, so that was the only way.

Great if you’re always in the same place as your NAS and can afford any amount of downtime. Not great for minimising downtime and recovery time.

Good point on downtime, but it takes only five minutes to recover a failed boot drive if you have a config file, at least for me. It’s just a simple reinstall plus uploading the config. OK, the physical replacement will take maybe another 5-10 minutes tops, nowhere near two hours.

Not a bug then. You have to use the GUI for these things to work seamlessly - or you can manually replicate what the GUI does.

The immediately available solution is to replace the manually-replaced disk again, this time via the GUI. Of course, this is contingent on the specifics of that error you mentioned…

Hmm okay, I’ll give that a try and see if it resolves the issue. Thanks 🙂

Yup, that solved it. I had to detach the drive via the command line with zpool detach, then was able to re-attach it via the GUI. The partitions now look correct.
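For anyone landing here later, the sequence was roughly this — pool and device names are from the pool status earlier in the thread, so double-check yours with `zpool status` before detaching anything:

```shell
# 1. Detach the bare, manually-replaced member from the boot mirror.
#    (boot-pool / sdj are from the pool status above; verify on your system.)
zpool detach boot-pool sdj

# 2. Re-attach the same disk through the GUI, which partitions it
#    properly (swap included) before adding it back to the mirror.
```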


TrueNAS does a lot more than just attaching a device to a pool when it mirrors the boot device.
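As a rough illustration of the kind of extra work involved — the partition sizes, type codes, and exact steps below are my assumptions, not the actual middleware code, so treat this as a sketch and don’t run it as-is:

```shell
# ILLUSTRATIVE ONLY -- do not run as-is; sizes and type codes are guesses.
DISK=/dev/sdX                            # hypothetical new boot disk

sgdisk --zap-all "$DISK"                 # wipe any old partition table
sgdisk -n1:0:+512M -t1:EF00 "$DISK"      # EFI system partition
sgdisk -n2:0:+16G  -t2:8200 "$DISK"      # swap partition
sgdisk -n3:0:0     -t3:BF01 "$DISK"      # remainder for ZFS

mkswap "${DISK}2"                        # initialise the swap space
zpool attach boot-pool sdk3 "${DISK}3"   # mirror the ZFS *partition*, not the disk
# ...plus installing the bootloader on the EFI partition, and so on.
```

This is why attaching the whole disk with a bare `zpool replace` leaves you with no partitions and no swap, as seen at the top of the thread.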
