Can't offline a disk

Earlier today I noticed that my CCTV pool is degraded: one disk has numerous read and write errors. I selected the option to offline the disk but got this:

[EZFS_NOREPLICAS] cannot offline /dev/disk/by-partuuid/451b5352-854a-4035-8655-bb4571508556: no valid replicas.

And in Details:

Error Name: EZFS_NOREPLICAS
Error Code: 2019
Reason: [EZFS_NOREPLICAS] cannot offline /dev/disk/by-partuuid/451b5352-854a-4035-8655-bb4571508556: no valid replicas
Error Class: CallError

I need to remove the disk, but how do I do so now? The pool is 2 disks: one is a recently replaced 2 TB, and the other is an older 1 TB.

Thanks

You need to replace the disk. Not remove it.

What's the topology of that pool? You say 2x disks… mirror or stripe?

The error makes it sound like you have a stripe pool.

With such a pool, you would need to replace in place. Meaning you add in the new disk without off-lining the old disk. Then you initiate the replacement command. Afterward, the old disk is automatically removed from the pool and can be physically removed. This assumes the original / bad disk is still mostly functional. If not, your pool is likely toast.
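As a sketch of that replace-in-place procedure, assuming the pool name and failing partuuid shown in the error messages above (`NEW_DISK` is a placeholder for whatever device name the new drive gets):

```shell
# Physically attach the new disk first, then replace in place.
# NEW_DISK is a placeholder -- substitute the actual device or partuuid.
zpool replace CCTV 451b5352-854a-4035-8655-bb4571508556 NEW_DISK

# Watch the resilver; the old disk is detached automatically when it finishes.
zpool status CCTV
```

On TrueNAS you would normally do this through the GUI's Replace action rather than the shell, but the underlying operation is the same.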

While ZFS stripe pools have their uses, users need to clearly understand the difference: a stripe has no redundancy, so losing any one disk loses the whole pool.

The topology? Storage says it is 2 x DISK | 1 wide | Mixed Capacity. From memory, mirrored I think.

When I choose to Replace the disk, which is ‘sdg’, I am asked for a member disk in the drop-down box, but there isn’t one, obviously.

I thought you had to offline a disk before replacing or removing.

I don’t have another disk to replace it with anyway, and as the other disk is 2 TB, that is big enough for my needs now.

What does the command zpool status say?

root@freenas[~]# zpool status
  pool: CCTV
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 00:10:58 with 0 errors on Sun Feb 15 11:12:40 2026
remove: Removal of /dev/disk/by-partuuid/451b5352-854a-4035-8655-bb4571508556 canceled on Sun Feb 15 11:34:11 2026
config:

        NAME                                    STATE     READ WRITE CKSUM
        CCTV                                    DEGRADED     0     0     0
          2d15b709-d0ee-4133-b0f0-f3bd39647e2e  ONLINE       0     0     0
          451b5352-854a-4035-8655-bb4571508556  DEGRADED   269     0   268  too many errors

errors: No known data errors

That is a ZFS stripe pool. You can NEVER offline either device, because that would take the entire pool offline.

However, you can remove the unwanted device. But you may need to clear the errors first, then re-attempt the removal.
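As a sketch, using the pool name and partuuid from the `zpool status` output above, that sequence would look roughly like this:

```shell
# Forget the previously logged read/write/checksum errors on the pool.
zpool clear CCTV

# Ask ZFS to evacuate and remove the 1 TB top-level vdev from the stripe.
# This only succeeds if the remaining disk can hold all of the pool's data.
zpool remove CCTV 451b5352-854a-4035-8655-bb4571508556

# Removal runs in the background; check its progress here.
zpool status CCTV
```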

I just ran zpool clear CCTV; nothing appeared to happen, but looking at Storage and the VDEVs for that pool, all seems to be OK now, i.e. no errors!

Transient errors do occasionally occur without indicating a permanent fault. That may be what happened here, because your pool reported:

errors: No known data errors

However, if that storage device is going bad, replacing or removing it sooner rather than later is advised, because with a stripe pool one completely failed disk means loss of the entire pool.

Thanks. I have just tried to remove it after shutting down the CCTV program that writes data to the pool, but at 70% it said [Errno 16] Device or resource busy: ‘/dev/sdg’.

Nothing is being written to the disk now, so why is it busy?

You can only remove a disk from a stripe if all the data fits on the remaining drive. Is that the case?
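A quick way to check that from a shell, assuming the pool name used earlier in this thread:

```shell
# ALLOC must be comfortably smaller than the remaining 2 TB disk
# for a device removal from a stripe to succeed.
zpool list -o name,size,alloc,free CCTV
```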

Well, there are 2 disks: one is 2 TB and the one I want to remove is 1 TB, and to be honest I have no idea if all the data fits on one drive. I suppose as a last resort I could delete all the data and remove the datasets, then start afresh. But that is a last resort.

You need to figure that out.

Fair enough, I’ll just leave it as is then

Sure, or you could just check the dashboard to see whether you have more or less than 2 TB of data…

When you run the clear command you essentially tell the system to forget any previously reported ZFS errors. If you’re not sure you have actually solved whatever caused them, the typical next step is to run a new scrub on the pool and see whether they reappear.
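A sketch of that check, using the pool name from this thread:

```shell
# Reset the error counters, then re-verify every block in the pool.
zpool clear CCTV
zpool scrub CCTV

# Re-check after the scrub completes; nonzero READ/WRITE/CKSUM counts
# mean the underlying problem is still present.
zpool status -v CCTV
```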

OK, so after checking Storage, there is 43 GB in use out of 2.63 TB, which is just videos and pictures. If I deleted the data and then removed the 1 TB disk, I would assume the datasets would disappear.

I just want to know what my options are, and I could use that 1 TB disk for other storage.

After another scrub, there are still errors and the pool is degraded, or rather the same 1 TB disk is.

Then there is something wrong with the disk, the cables, the controller, or even the PSU.

Run a long SMART test and post the results here.
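If the GUI doesn't expose it, a long SMART self-test can be started from a shell with smartctl; /dev/sdg is the device named earlier in the thread:

```shell
# Kick off the long (extended) self-test; it runs inside the drive's firmware
# and smartctl reports an estimated completion time.
smartctl -t long /dev/sdg

# After that time has elapsed, view the attributes, self-test results and error logs.
smartctl -a /dev/sdg
```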

Sorry about this, a bit of a silly question: as I am running TrueNAS 25.10.1 - Goldeye, where do I find the SMART test? I’ve googled it, but I can’t see where it is or find it.

Google says go to Data Protection / SMART Tests, but the SMART test isn’t there. I’ve also done Storage / View Disks, selected the disk, and looked for the SMART test button at the top, but there isn’t one.

25.10 removed the SMART GUI…