One of the two Z2 vdevs that form my main storage pool is shown as degraded. I believe this is because I accidentally restarted the server with a drive disconnected while I was performing some maintenance.
My question is: what do I need to do to fix the issue?

When I expand the array, `zpool status` yields:
```
config:

        NAME                            STATE     READ WRITE CKSUM
        tank                            DEGRADED     0     0     0
          raidz2-0                      ONLINE       0     0     0
            sdk2                        ONLINE       0     0     0
            sda2                        ONLINE       0     0     0
            sdi2                        ONLINE       0     0     0
            sdh2                        ONLINE       0     0     0
            sdg2                        ONLINE       0     0     0
            sde2                        ONLINE       0     0     0
          raidz2-1                      DEGRADED     0     0     0
            sdj2                        ONLINE       0     0     0
            replacing-1                 DEGRADED     0     0     0
              sdb2                      ONLINE       0     0     0
              14853449693688436450      UNAVAIL      0     0     0  was /dev/disk/by-partuuid/0678952a-bc37-40f4-a8dc-49ff3fd1b4bc
            sdn2                        ONLINE       0     0     0
            sdd2                        ONLINE       0     0     0
            sdc2                        ONLINE       0     0     0
            sdf2                        ONLINE       0     0     0

errors: No known data errors
```
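If I'm reading this correctly, the `replacing-1` entry means ZFS is still mid-replacement: `sdb2` is the new member and `14853449693688436450` is the GUID of the old disk that is no longer present. From what I've read, if the resilver onto `sdb2` completed, the stale half can be detached by GUID, roughly like this (a sketch of what I think the fix is; I haven't run it because I'd like confirmation first):

```
# What I believe the fix is, assuming the resilver onto sdb2 finished:
# detach the missing old half of replacing-1 by its GUID...
sudo zpool detach tank 14853449693688436450

# ...then check that replacing-1 has collapsed back to a plain raidz2 member
zpool status tank
```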
Disk `sdb` shows options `Extend`, `Detach` and `Offline`.
Does this mean that at some point I may have swapped a fresh drive in before offlining the faulted one, or otherwise done something to convince the array there should be something there that isn't? I see 12 drives in total (2 × 6), as expected.
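For what it's worth, this is the sanity check I used to count the physical drives (assuming all the data disks enumerate as `sdX`; the pool sits on the `*2` partitions shown above):

```
# List whole disks only (no partitions) to confirm 12 data drives are attached
lsblk -d -o NAME,SIZE,MODEL | grep '^sd'
```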
The numbered drive shows the option `Detach` in the `ZFS Info` section and `Replace` in the `Disk Info` section of the Manage Devices UI. All the other devices seem fine.
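Before I click anything, I'd like to confirm which member the UI's `Detach` would act on. If it helps anyone advising, I believe `zpool status` can print GUIDs in place of device names so I can match the UI entry against the unavailable member:

```
# Show vdev GUIDs instead of device names to identify the stale
# 14853449693688436450 member unambiguously
zpool status -g tank
```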
Can someone advise the correct course of action here?