Replaced disk, does the data get destroyed?

TN core 13.0.6U1
Experimental MB and a few disks, striped.

From memory, no option to destroy anything was selected, or even presented.

I selected and replaced a disk with a larger one, and after the resilver all is good.
I then tried to attach the old disk, and it’s only available as a new disk.
I detached the newly resilvered disk to see if I could then reattach the old replaced disk, but again, it’s only available as a new disk.

So is the data destroyed? I was hoping to create a test Scale version with the replaced disks and data.

Striped - Each disk is used to store data. Requires at least one disk and has no data redundancy.

You should be looking for MIRROR or RAIDZ1,2,3

Note: you might get away with it, if no data is stored on that disk.

Thank you, but your response has nothing to do with my question. For further information, please read the thread title and the post itself, which would probably also apply to any other RAID level.

When you successfully replace a disk, it’s no longer part of the array.

If you instead extend the vdev (if it’s a mirror), then you can offline the disk… and re-online it, etc…

You can even split the array and have two copies of the pool.

But replacing will result in the replaced disk no longer being part of the array… and thus… not being usable.
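
A rough shell sketch of the distinction (pool and disk names here are hypothetical; the GUI is wrapping equivalent ZFS operations):

```sh
# 'replace' swaps the member out: after the resilver completes,
# ada0 is no longer part of the pool.
zpool replace tank ada0 ada2

# 'attach' instead extends the disk into a mirror: both disks stay
# members, so either one can be offlined and re-onlined later.
zpool attach tank ada0 ada2
zpool offline tank ada2
zpool online tank ada2
```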

I know in Cobia (at least) there is an issue that I have reported where TN will format the disk being replaced BEFORE it’s replaced.

(Guess its time to figure out if that behaviour still exists in Dragonfish. EDIT: yes it does)

Unless explicitly invoked, nothing is “wiped” on the detached disks. (It just erases the partition table / ZFS metadata, from what I understand.) There’s a separate menu for “wiping” (under Disks), or you can do it on any spare system with a Linux live ISO and run a pattern of zeros (or random data) across the entire drive.
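
To illustrate the difference between the two (a sketch with a hypothetical /dev/sdb; labelclear is the quick metadata erase, dd is the full wipe you’d run from a live ISO):

```sh
# Quick erase: clears only the ZFS labels, so the disk no longer
# identifies as a pool member; the file data itself remains on the
# platters until overwritten.
zpool labelclear -f /dev/sdb

# Full wipe: overwrite the entire drive with zeros (slow, destroys
# everything for good).
dd if=/dev/zero of=/dev/sdb bs=1M status=progress
```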

Although, most people who get their hands on such a drive won’t know to do the forensics or data recovery to attempt to retrieve any contiguous files.

I get that, but I thought it would just either offline and/or detach it. Nothing more, especially since my disks are encrypted.

If I read that right, it would require 4 disks: the original mirror and new larger-sized mirror disks, to take full advantage. I don’t do mirrors, for I keep backups.

This is the issue: why is it no longer usable? It was just replaced, and should simply be detached when the resilver is finished. The disk should then be re-attachable elsewhere if wanted, unless you want the data destroyed.

You missed the whole point of my question; it has nothing to do with manually detaching disks and selecting the options.

If I’m right, your bracketed comment is wrong. Detaching a disk, unless you select an option, does nothing. You can re-attach that disk on the same device or another. For example: you want to move your disks to a new device.

My question is about replacing a disk, which will automatically resilver onto the new disk. It’s what happens automatically after that which is in question. Why is the data destroyed on the replaced disk, or is it at all?

In Cobia and Dragonfish, TrueNAS attempts to quick-wipe the to-be-replaced disk. If it’s not busy, it will succeed.

I don’t agree with this, as offlining a disk should mean it won’t get erased and would in effect be a copy of the mirror.

You need to physically remove to protect the disk from TrueNAS middleware.

Maybe you’re thinking of “offline”? “Detaching” does not preserve the member device. If you try to later re-add it to your vdev, it will have to resilver as if it was a new drive. (Meaning a full resilver from scratch.)

The “offline” function allows the drive to be re-added to the vdev again. [1]


  1. With the caveat of a “catch-up” resilver which is usually quick, if you hadn’t done much with the active pool in the meantime. ↩︎
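
In zpool terms (hypothetical pool/disk names), the asymmetry looks like this:

```sh
zpool offline tank ada1       # still a vdev member; config preserved
zpool online tank ada1        # catch-up resilver of changed blocks only

zpool detach tank ada1        # removed from the vdev entirely;
zpool attach tank ada0 ada1   # re-adding it means a full resilver
```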

Stux, you’re referring to Scale. At the moment I’m on Core, but to answer:

That doesn’t make sense: “wipe the to-be-replaced disk”. You can’t first destroy the disk before you’ve resilvered it, unless it assumes you are replacing it because of corruption and will rebuild from a mirror or other redundancy. But then stripe still exists as an option, which people use, and that would not work with stripe: destroy first, replace later?

Agreed, as there can be other reasons for replacing a disk, such as going to a larger disk and possibly using the old disk on a different device whilst still keeping the old data.

I don’t know, nor do I think you can, for once you start to replace and it begins to resilver, you have no control.

Yes, because I haven’t tested the behaviour on Core yet, so I can’t say, but on Cobia and Dragonfish…

Nevertheless, that is what it does.

As per the link I provided: NAS-128448.

As I said before, I don’t know exactly what the current behaviour is in Core, but in Scale (Cobia and Dragonfish at least), the very first thing the TNS middleware does is attempt to quick-erase the to-be-replaced disk.

If you offline it first, but do not remove from the system physically, then it will succeed. If you don’t offline it, it will fail.

If you’ve started replacing a disk… but you didn’t offline it… you can yank it… or power off… and well… it might still work as a copy of a single vdev pool. You would perhaps need to force import it.
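
If you did try the yank, the attempt on another box would look something like this (untested; pool name is hypothetical):

```sh
# See which pools are visible on the attached disks...
zpool import
# ...then force the import, since the pool was never cleanly exported.
zpool import -f tank
```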

I’d have to test this, and haven’t done that sort of test in a few years.

Ok, I’m going to have to can this, for my simple question has gone away. I only use striped disks with backups; I have no use for redundancy.

I think part of the problem is the different options available and what they do or may do; we are all on different pages, as well as forgetting that I’m talking about striped disks:

In - Storage / Pools / Pools Status > Offline / Replace / Remove
In - Storage / Pools > Export/Disconnect

That’s four different options.

My original question was, of sorts (and note you also cannot replace > resilver an Offline striped disk):

-Striped disk replaced with larger disk.
-Does replaced disk data get automatically destroyed when resilver is finished?

If so, can I prevent it from happening by having it detach/export the replaced disk?
Can it be done via the shell using a switch?

I’m actually kind of confused by your layout here. It sounds like you have a striped vdev (i.e. two disks in a stripe) for your pool. This means that (in theory) your data is split (striped) between the disks, meaning no single disk contains the whole of your data.

You then stated that you replaced one of them (presumably by introducing a third while keeping the existing assumed two, which would be the only way that should work at all). This should have resulted in the disk-that-was-replaced being marked in a manner which would prevent it from being put back into the pool (automatically or otherwise), so that it could be moved into another vdev (for example) or otherwise used elsewhere.

Then you wanted to re-attach the old disk (how and to what? the pool has its two striped disks so where would it go in the topology?) which logically failed as it was not part of the pool anymore.

Then you detached the new disk (I’m surprised that was an option, as that would break the pool? Remember that striped means your data is spread across many disks, so this is an inherently dangerous action) and were astonished that the old disk (which again, was marked as “not part of any pool anymore”) could not be “reattached” to the pool it was removed from, despite the previous indication that it was being treated as a “new” disk.

I think we should step back a little and ponder on this little XY problem you’ve found yourself in.
What are you trying to do, and is there a better way to do it?


If you are trying to expand your striped disk with a new larger one, the Replace function on the individual disk is what you’re looking for. It will resilver the half of the data that is present on the old disk onto the new one, and make the old one available for use elsewhere or for disconnection from the system.
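
In shell terms that expansion is roughly the following (hypothetical names; autoexpand lets the pool grow into the larger disk once the replace completes):

```sh
zpool set autoexpand=on tank
zpool replace tank ada0 ada1   # resilver onto the larger disk
zpool online -e tank ada1      # expand manually if autoexpand was off
```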

If you are trying to make some kind of backup system (a la ye olde Raid1 mirror yank-and-replace modality)… this is not possible when using a striped disk as, again, a full copy of the data is not present on a single disk, and trying to reintroduce a detached disk that was explicitly removed from the pool is likely the wrong method.

Let’s take it from the top and start with what you’re trying to do first. You’ve explained what actions you’ve taken, but not the goal those actions are supposed to accomplish.

Test setup:

1- ONE disk, striped. That’s it, one disk; striped is the only option offered when you add just one disk. That’s it, one disk, encrypted, with two datasets, fully working.

2- I then replace that one disk with a larger disk; it gets Replaced and Resilvered. That’s it: no add, no mirror, nothing, just replace with a larger disk.

3- I then want to use that replaced disk on another system with the existing data (reasons are my own), but I cannot. I only get the option to add it as a new disk, not re-attach it and continue its use.

My question:

Does the data get destroyed after resilver, or can I prevent it so I can import the old disk elsewhere?

All that should happen once complete is Offline, remove, or export the disk, and not destroy the data.

I would appreciated you not modifying a quote to indicate I said a rather silly thing.

Answered above. The disk gets marked in such a way as to make it available for use elsewise. Technically it doesn’t all get destroyed, just the parts that identify it as part of the pool. In theory if you grab a copy of the relevant portions of the disk (I think it’s like the beginning and ending few megabytes of the drive) and restore that after the drive has been transplanted to another system, it might be possible to reuse it, but I have no idea other than theoretical knowledge of how the replace function works.
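
If anyone wants to test that theory, the copy could be taken with dd before the replace is triggered. This is an entirely untested sketch, assuming Linux, a hypothetical /dev/sdb, and that the ZFS labels really do live in the first and last few MiB:

```sh
SIZE_MIB=$(( $(blockdev --getsize64 /dev/sdb) / 1048576 ))

# Save the first and last 10 MiB, which should bracket the ZFS labels.
dd if=/dev/sdb of=head.bin bs=1M count=10
dd if=/dev/sdb of=tail.bin bs=1M skip=$(( SIZE_MIB - 10 ))

# After the disk is moved to the other system, restore them:
dd if=head.bin of=/dev/sdb bs=1M conv=notrunc
dd if=tail.bin of=/dev/sdb bs=1M seek=$(( SIZE_MIB - 10 )) conv=notrunc
```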

IMO your best bet is likely setting up the new disk as a mirror on that single vdev (I believe the option will be labeled “Extend” on that disk), offlining it so it can be yanked from the system, and in theory it can then be imported into another system without hassle. Naturally, both systems will complain that one half of the mirror is gone, but you can simply remove the UNAVAIL disk and it should work fine (for various definitions of fine).

I will run through this idea momentarily to gather screenshots.

From a pool with this configuration, assuming ada0 is the original disk that will remain in the system and ada1 is the new one (supposedly larger, but it isn’t in this system because I’m playing with spares on my actual system).

Three dots, Extend:

Select new drive to add as mirror and Extend:

Once resilver is done and all is Healthy, take new disk offline:

Yoink the disk and move it to the new system. Go to Import an existing pool:

Make sure the new disk indeed does have the pool on it:

New system and old system should now look like this in their status pages, the GUID and positions should be different:

Detach the missing disk:

In theory this should accomplish your goal. Hope that helps, your mileage may vary, results not guaranteed or warrantied, etc. etc.
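
For reference, a rough shell equivalent of the walkthrough above (hypothetical names; the GUI should be doing something close to this):

```sh
zpool attach tank ada0 ada1   # extend the single disk into a mirror
zpool status tank             # wait until the resilver completes
zpool offline tank ada1       # then pull the disk

# On the new system:
zpool import -f tank          # -f because it was never exported there
zpool detach tank ada0        # drop the now-missing half of the mirror
```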

Also, I very much do not recommend this whatsoever.


Actually, as long as you make the pool mirrored, i.e. extend each stripe vdev with a new disk, you can split the pool (probably shell only) and end up with two pools.
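
zpool split is indeed that shell-only route (hypothetical names; it only works when every top-level vdev is a mirror):

```sh
# Detaches the last member of each mirror vdev and turns those disks
# into a new, exported pool, ready to import on another machine.
zpool split tank tank2
zpool import tank2
```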


Right, at the core of things this is the absolute requirement. I’ve never tried to mitosis a pool before, but the more complicated it gets, the more headache you’ll have, as you’ll probably need double-ish (depending on how things are laid out) the disks completely synced before the split. I wonder if it would even work properly in a RaidZ2 or higher layout. :thinking:


This should work too.

And actually, you can just yoink the disk, then remove.

Or not :wink:

Better to offline it first though :wink:

I do this fairly often with boot disks.