Is sequential rebuild (device_rebuild) enabled by default?

I have a zpool of mirror vdevs and I was preparing to replace some older drives in the array. I’m a little out of practice and, instead of offlining a disk, I detached it. I realized my mistake too late, but I reattached the drive and intended to let it rebuild to completion.

I was under the impression that TrueNAS supported sequential rebuild, but it doesn’t look like it’s passing the -s flag to zpool attach:

tank  feature@device_rebuild         enabled                        local
2024-05-22.11:50:52  zpool detach tank /dev/gptid/8ad82a0a-c2c4-11ea-9756-00155d01b900
2024-05-22.11:53:38  zpool attach tank /dev/gptid/d5ae8a4b-c333-11ea-95c9-00155d01b900 /dev/gptid/7c7bd777-186c-11ef-93dc-00155d01b900

Did I do something wrong? Is there a checkbox I should check somewhere?

You’d have to use the command-line to invoke the -s flag.

Understand that -s will trigger a full pool scrub after the resilver completes, so you’ll have to cancel that scrub (once the resilver finishes) if you prefer to save time over assurance.
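For reference, this is roughly what that looks like from the shell, reusing the GPTIDs from the pool history above (adjust the paths for your own devices):

```shell
# Sequential rebuild of the re-attached disk; device paths are taken from
# the zpool history above and will differ on other systems.
zpool attach -s tank \
    /dev/gptid/d5ae8a4b-c333-11ea-95c9-00155d01b900 \
    /dev/gptid/7c7bd777-186c-11ef-93dc-00155d01b900

zpool status tank    # watch the rebuild progress

# The verification scrub starts automatically once the rebuild finishes;
# stop it with:
zpool scrub -s tank
```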

I’m fine with the pool scrub afterwards (which is sequential!).

TrueNAS does some voodoo for partitioning under the hood and I’d generally prefer to keep its layout to retain consistency: are there any tools to do so or do I have to guess its GPT layout?

Why guess anything? When you detach the drive, TrueNAS leaves the partitioning alone.[1] If you use the same identifier (in this case the GPTID of the partition), it’s no different than using the GUI to attach a new device.

In other words, the drive will have a 2 GiB (default) “swap space” partition, followed by the remainder of the drive’s capacity as the second partition. Even if you detach it from the mirror vdev. (Technically, you’re detaching the partition from the vdev.)

  1. Unless the TrueNAS GUI goes a step further and wipes the GPT layout when you use it to detach? (This extra action wouldn’t be captured by the pool’s “history”.) It’s been too long since I did that with the GUI. ↩︎
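If you want to double-check that the layout survived the detach, something like this should show it (the device names are placeholders for whichever disk you detached):

```shell
# CORE (FreeBSD): show the GPT layout and the partitions' rawuuids (GPTIDs).
gpart show da1
gpart list da1 | grep rawuuid

# SCALE (Linux): the equivalent view by partition type and PARTUUID.
lsblk -o NAME,SIZE,PARTTYPENAME,PARTUUID /dev/sdb
```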

I’m pretty sure it does something: if you look at the zpool history above, the GPTIDs differ despite belonging to the same physical disk.

Also, I intend to replace the disks with bigger ones, so while I could create a 2 GiB swap partition by hand and then use the rest as a storage partition, doing that manually is honestly annoying. I’ve done it in the past when I had to, but I had hoped there might be a GUI option for zpool replace -s or zpool attach -s, especially if I start replacing multiple devices at once.

Oh, I see that now. I didn’t look at the entire output, since I assumed they were the same.

If you want to leverage the speed of using -s, then it appears your only option is to do it all manually with the command-line, including partitioning the drive(s) with a 2 GiB swap space (“p1”) + remaining capacity for zfs_member (“p2”).
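On SCALE that could look roughly like the sketch below. The device name is a placeholder, and the sgdisk type codes (8200 for Linux swap, BF01 for a ZFS member) are my assumption about what matches the middleware’s layout:

```shell
# Replicate TrueNAS's default layout on a new disk, then attach it with a
# sequential rebuild. /dev/sdd is a placeholder for the new drive.
sgdisk --zap-all /dev/sdd                  # wipe any old GPT
sgdisk -n 1:0:+2GiB -t 1:8200 /dev/sdd     # p1: 2 GiB swap
sgdisk -n 2:0:0     -t 2:BF01 /dev/sdd     # p2: remainder, zfs_member
partprobe /dev/sdd

# Attach by the new partition's PARTUUID, the way the middleware would.
# EXISTING is whichever current mirror member you're pairing it with.
NEWPART=$(lsblk -rno PARTUUID /dev/sdd2)
zpool attach -s tank "$EXISTING" /dev/disk/by-partuuid/"$NEWPART"
```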

Looks like you’re right. TrueNAS does “extra” stuff when detaching a member from a vdev.

I just did a test in Arch Linux, and zpool detach + zpool labelclear doesn’t touch the partitioning or GPT table. In fact, I re-used the same PARTUUID when re-attaching the device to the vdev. (Same PARTUUID that existed before the detach + labelclear.)
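Roughly, the test looked like this (both device paths are placeholders):

```shell
# Linux: detach + labelclear leave the GPT table (and PARTUUID) intact.
MEMBER=/dev/disk/by-partuuid/<detached-partition>
OTHER=/dev/disk/by-partuuid/<surviving-partition>

zpool detach tank "$MEMBER"      # remove it from the mirror vdev
zpool labelclear -f "$MEMBER"    # clear the ZFS label; partitioning untouched
zpool attach -s tank "$OTHER" "$MEMBER"   # re-attach under the same PARTUUID
```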

A cursory look at the source code (their Python bindings for libzfs) shows that they invoke zpool attach and zpool replace without any flags, when such actions are triggered by the middleware/GUI.

I’m pretty sure it doesn’t do anything funny on zpool detach, but the GUI wipes and recreates the partition structure when you use it to attach a new drive.

Guess it’s just time to use the CLI.


(I would like to know if CORE behaves the same as SCALE in regard to the above.)