Remove, not replace, drives in a pool

I had a bunch of 500GB hard drives kicking around, so I set up my ZFS pool as 3 mirrored sets of 500GB drives (6 in total). I am using about 700GB at the moment.

A while ago I got my hands on a pair of 2TB drives, so I replaced one of the drives in one of the mirrored sets with a 2TB and waited for it to resilver. Then I replaced the other drive in the mirror and waited for the resilver. So currently I have 3 mirrored pairs: one 2TB pair and two 500GB pairs.

I am wondering if there is a way, since the data will all fit on the 2TB pair, of removing the two 500GB mirrored pairs without losing data.

Sure. You can remove the other two vdevs through the GUI.
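For reference, the CLI equivalent is zpool remove, which evacuates a top-level vdev's data onto the remaining vdevs before dropping it from the pool. A minimal sketch, assuming the pool is named tank and the 500GB mirrors appear as mirror-1 and mirror-2 in the topology:

# confirm the vdev names first (they show up as mirror-0, mirror-1, ...)
zpool status tank

# evacuate and remove one of the 500GB mirrors (vdev name is an assumption)
zpool remove tank mirror-1

# watch the evacuation progress; repeat for the other mirror once it finishes
zpool status tank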


Really, that is it? It will resilver the data to the remaining drives?

Indeed it will.

Ok, I tried it and I got a Permission Denied response.

The pool had originally been on a TrueNAS CORE install and then imported into TrueNAS SCALE. Could that be an issue?

Shouldn’t be, but if the system is notifying you about a pool upgrade being available you may need to do that.
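For reference, the CLI equivalent would be something like this (pool name assumed):

# list pools with feature upgrades available, then upgrade the pool
zpool upgrade
zpool upgrade MY_POOL

The upgrade option in the GUI alert should do the same thing.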

That sounds very odd. Can you share the exact error message, and what you did to get there?

I am not an expert but as far as I can tell from your explanation you currently have:

  • Vdevs

    1. 2x 2TB mirrored
    2. 2x 500GB mirrored
    3. 2x 500GB mirrored
  • Pools

    1. All the above vdevs in a single pool.

If I have got this wrong, then please ignore everything else that follows as it is likely not going to be right for you.

Now if I understand ZFS correctly (and I may have got this completely wrong, so others please chime in here) the problem is that all your data is somewhere in the single pool spread across the 3 vdevs.

Your previous pool size was 1.5TB (3x mirrored pairs of 500GB drives) and, as you say, the data will fit onto a single 2TB drive. Equally, it will fit onto the 4x 500GB drives if they are configured as Z1.

So, it seems to me that the first step is to decide what your final layout should be.

In the absence of additional information, my (rank amateur) advice would be to end up with 2 pools: the first a mirrored pair of 2TB drives, the second a Z1 set of 4x 500GB drives. Having two pools is not going to be quite as flexible as a single pool from the perspective of free-space management, but it is more flexible in terms of your later management of the disks.

The 2nd step is to determine a strategy for freeing up one or both 2TB drives. Here is my rank amateur straw-man suggestion on how to do this (a rough command sketch follows the steps):

  1. Mirrored pools should be able to be split into two identical non-redundant pools using the zpool split command.

  2. Destroy one of these pools: you now have 1x 2TB and 2x 500GB disks which are unused.

  3. Create a new non-redundant pool using the 2TB disk.

  4. Move the data from the old pool to the new pool. This may involve copies, deletes, renames, pool exports/imports etc. all done from the command line. (Make sure you have SSH access for this rather than using the GUI Console web page.)

  5. Once all your data has been moved to the 2TB pool and it has taken over from the existing pool, you can destroy the old pool, use the 4x 500GB disks to create a new Z1 pool, and add the 2nd 2TB drive as a mirror to the 2TB pool.

Job done.
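To make this a little more concrete, here is a very rough command sketch of the plan above. Every pool, dataset, and device name below is an assumption on my part, so treat it as an outline to adapt rather than a script to run:

# 1. split the mirrored pool; one disk from each mirror goes into the new pool
zpool split tank tank-split

# 2. import and destroy the split-off copy to free 1x 2TB and 2x 500GB disks
zpool import tank-split
zpool destroy tank-split

# 3. create a non-redundant pool on the freed 2TB disk (device name assumed)
zpool create newpool 2TB-DISK-A

# 4. copy everything over with a recursive snapshot and a send/receive stream
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -uF newpool

# 5. destroy the old pool, build the Z1 from the 4x 500GB disks, and attach
#    the 2nd 2TB disk as a mirror of the 1st
zpool destroy tank
zpool create z1pool raidz1 500GB-DISK-A 500GB-DISK-B 500GB-DISK-C 500GB-DISK-D
zpool attach newpool 2TB-DISK-A 2TB-DISK-B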

All this needs to be done using the command line as root. The potential for making a mistake and losing your data is real. (You will also have a period without redundancy, so you need to accept the risk of a disk failing during this period; check the SMART data and do a scrub before you start to ensure that the disks all look sound.)

So, my opinion (as a project and programme manager for 30 years) is that you need to make a detailed plan of how you are going to achieve this, and have that plan checked by someone who is a ZFS expert before you start, ideally with that expert willing and available to help if you run into any problems.

Good luck.

The ZFS pool feature for disk removal needs to be enabled. Run the following against the pool you want to remove the disks from to see whether the feature is enabled:
zpool get feature@device_removal MY_POOL
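On a pool where it is enabled, that should print something along these lines (pool name assumed):

NAME     PROPERTY                VALUE    SOURCE
MY_POOL  feature@device_removal  enabled  local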

If the feature is disabled, you need to:
zpool set feature@device_removal=enabled MY_POOL

Then you should be able to perform the task via the GUI.

Note that once you both enable and use this disk removal feature, you can’t use the pool on an older version of ZFS. That should not be a problem, as the feature has been available for several years now, and both TrueNAS CORE and SCALE have the underlying ZFS support for disk removal (OpenZFS 2.2.2 at the time of writing).


As I said, I am not an expert, but just to clarify what Arwen has said: as far as I can determine from the documentation, zpool remove allows you to remove vdevs from a pool, with ZFS automatically moving the data on the removed vdevs to the remaining vdevs before removing them.

So this should be the information you need to remove the 2TB mirror pair from the existing pool and use those drives to create a second pool. But this will only work if the existing data will fit onto the 2x 500GB mirrored vdevs. If it won’t, you need to remove redundancy from the 2TB mirror and use the single drive to create a new pool, by either:

  1. copy all the data to the single 2TB drive, destroy the existing pool, mirror the 2TB drive, and create a 2nd Z1 pool from the 4x 500GB drives; or

  2. move enough (ideally non-critical) data to the non-redundant 2TB pool so that the remaining data fits on the 2x 500GB mirrored vdevs, then do the zpool remove to free up the 2nd 2TB drive and mirror it with the 1st one. Then you can move the rest of the data and destroy the old pool and recreate it as a Z1.

Of the two, and in the absence of any better recommendations from anyone more knowledgeable than me, I would probably do option 2, because it leaves the least critical data on non-redundant disks for the shortest amount of time.
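A minimal sketch of option 2, where every pool, vdev, and device name is an assumption on my part:

# detach one 2TB disk from its mirror and build a new, non-redundant pool on it
zpool detach tank 2TB-DISK-B
zpool create pool2 2TB-DISK-B

# after moving enough data to pool2, evacuate the remaining (now single-disk)
# 2TB vdev out of the old pool
zpool remove tank 2TB-DISK-A

# once the evacuation finishes, re-mirror the new pool with the freed disk
zpool attach pool2 2TB-DISK-B 2TB-DISK-A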

I don’t think that’s what OP is asking for. I think his objective is instead to remove the 4x 500 GB disks from the pool entirely, leaving all the data on the single pair of 2 TB disks. And the answer to that with modern versions of TrueNAS should be simple: remove the vdevs, wait for the data evacuation to complete, then remove the 500 GB disks.
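If you want to confirm the result from the shell afterwards, something like this (pool name assumed) should show the 500 GB mirrors gone and only the 2 TB pair remaining:

zpool status tank   # topology should now list just the single 2TB mirror
zpool list tank     # SIZE should drop to roughly the 2TB pair's capacity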


@dan Of course that is by far the simplest solution. D’oh!!!

Yes, that is the goal.

I just have to figure out the permission error.

I checked, and the pool feature device_removal is enabled.

I am still getting the permission error though.

You still haven’t answered: what is the exact error message, and what did you do to get there?


I could be off base, but I believe if you have the apps service active, any shares active, or any VMs active, you will get the permission denied error. You would need to stop all services (like SMB), remove apps (App Settings → Unset Pool; they can be added back later), or stop any VMs using the pool first, so that the system can remove the vdevs and move the data onto the 2TB drives.

I won’t say you’re wrong, but I think that would be a bug: you’re not offlining the pool; you’re just removing vdevs from it. This should be transparent to any applications.

But in almost two months, OP has yet to provide the exact error message, or what he did to get it.


Sorry

I go to the storage menu and, under my pool, I go to manage storage.

I then select the mirror I want to remove from the pool and select remove.

A window pops up with a progress bar; it gets to about 20%, then I get an error saying:

Error: [EZFS_PERM] cannot remove XXXXXXXXXXXXXXXXXXXX: permission denied

I just tried turning off all SMB, NFS, S.M.A.R.T., and SSH services, rebooted, and tried again.

Same error.

There should be a “more info” link on that error; the results when you click that would be helpful, as would a screenshot of the pool status.

[screenshot attached]