Is sequential rebuild (device_rebuild) enabled by default?

I have a zpool of mirror vdevs and I was preparing to replace some older drives in the array. I’m a little out of practice, and instead of offlining a disk I detached it. I realized my mistake too late, but I reattached the drive and intended to let it rebuild to completion.

I was under the impression that TrueNAS supported sequential rebuild, but it doesn’t look like it’s passing the -s flag to zpool attach:

tank  feature@device_rebuild         enabled                        local
2024-05-22.11:50:52  zpool detach tank /dev/gptid/8ad82a0a-c2c4-11ea-9756-00155d01b900
2024-05-22.11:53:38  zpool attach tank /dev/gptid/d5ae8a4b-c333-11ea-95c9-00155d01b900 /dev/gptid/7c7bd777-186c-11ef-93dc-00155d01b900

Did I do something wrong? Is there a checkbox I should check somewhere?

You’d have to use the command-line to invoke the -s flag.

Understand that invoking the -s flag will trigger a full pool scrub after the resilver completes, so you’ll have to cancel that scrub if you prefer to save time over assurance.
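
For reference, it’s roughly this (a sketch only; the angle-bracket gptids are placeholders for your own, taken from zpool status):

zpool attach -s tank /dev/gptid/<existing-member> /dev/gptid/<new-device>
zpool status tank    # sequential resilver runs first, then the verification scrub kicks off
zpool scrub -s tank  # stops that scrub, if you decide to skip it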

I’m fine with the pool scrub afterwards (which is sequential!).

TrueNAS does some voodoo for partitioning under the hood, and I’d generally prefer to keep its layout for consistency. Are there any tools to do so, or do I have to guess its GPT layout?

Why guess anything? When you detach the drive, the partitioning is left alone.[1] If you use the same identifier (in this case the GPTID of the partition), it’s no different from using the GUI to attach a new device.

In other words, the drive will have a 2 GiB (default) “swap space” partition, followed by the remainder of the drive’s capacity as the second partition, even after you detach it from the mirror vdev. (Technically, you’re detaching the partition from the vdev.)
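
If you want to double-check before re-attaching, the partition table is easy to inspect (da0 and sdX below are placeholders for your disk):

gpart show da0                          # CORE (FreeBSD)
lsblk -o NAME,SIZE,PARTUUID /dev/sdX    # SCALE (Linux)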


  1. Unless the TrueNAS GUI goes a step further and wipes the GPT layout when you use it to detach? (This extra action wouldn’t be captured by the pool’s “history”.) It’s been too long since I did that with the GUI. ↩︎

I’m pretty sure it does something: if you look at the zpool history above, the GPTIDs differ despite being the same physical disk.

Also, I intend to replace the disks with bigger ones, so while I could make a 2 GiB swap partition and use the rest as the storage partition, doing that by hand is honestly annoying. I’ve done it in the past when I had to, but I had hoped there might be a GUI option for zpool replace -s or zpool attach -s, especially if I start replacing multiple devices at once.

Oh, I see that now. I didn’t look at the entire output, since I assumed they were the same.


If you want to leverage the speed of using -s, then it appears your only option is to do it all manually with the command-line, including partitioning the drive(s) with a 2 GiB swap space (“p1”) + remaining capacity for zfs_member (“p2”).
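
Something like this, as a rough sketch (sdX, the device names, and the type codes are placeholders/assumptions for a SCALE system, not what the middleware literally runs):

sgdisk -n1:0:+2G -t1:8200 /dev/sdX    # p1: 2 GiB swap-style partition
sgdisk -n2:0:0   -t2:BF01 /dev/sdX    # p2: remainder of the disk as zfs_member
partprobe /dev/sdX
zpool replace -s tank <old-device-or-guid> /dev/disk/by-partuuid/<new-p2-partuuid>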

Looks like you’re right. TrueNAS does “extra” stuff when detaching a member from a vdev.

I just did a test in Arch Linux, and zpool detach + zpool labelclear doesn’t touch the partitioning or GPT table. In fact, I re-used the same PARTUUID when re-attaching the device to the vdev (the same PARTUUID that existed before the detach + labelclear).
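
For the record, the test boiled down to this (pool name and partuuids are placeholders here):

zpool detach testpool /dev/disk/by-partuuid/<member>
zpool labelclear -f /dev/disk/by-partuuid/<member>
# GPT table and PARTUUID survive both commands, so the same path re-attaches cleanly
zpool attach -s testpool /dev/disk/by-partuuid/<remaining-member> /dev/disk/by-partuuid/<member>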

A cursory look at the source code (their Python bindings for libzfs) shows that they invoke zpool attach and zpool replace without any flags, when such actions are triggered by the middleware/GUI.

I’m pretty sure it doesn’t do anything funny on zpool detach, but it wipes and recreates the partition structure when you use the GUI to attach a new drive.

Guess it’s just time to use the CLI.

Related?

https://ixsystems.atlassian.net/browse/NAS-128448

(I would like to know if CORE behaves the same as SCALE in this regard.)

Only tangentially related to the topic, but do you know if TrueNAS still does this custom partitioning? Looking at my partition map through lsblk, I only see extra partitions on my boot drive, not my data drives:

NAME        FSTYPE     FSVER LABEL             UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                                                
└─sda1      zfs_member 5000  Smol              13769124202084177828                                
sdb                                                                                                
├─sdb1                                                                                             
├─sdb2      vfat       FAT32 EFI               C282-D0F0                                           
└─sdb3      zfs_member 5000  boot-pool         2507208359843399910                                 
sdc         zfs_member 5000  XL                13795326135035765220
sde                                                                                                
└─sde1      zfs_member 5000  XL                13795326135035765220                                
nvme1n1                                                                                            
└─nvme1n1p1 zfs_member 5000  PoolyPoolersonSSD 6959374796867253056                                 
nvme0n1                                                                                            
└─nvme0n1p1 zfs_member 5000  SSD               1240229572386050698    

I’m just trying to figure out if using zpool replace -s from the command line requires some additional steps for TrueNAS consistency when the drive is a non-boot drive.

(CC @phongn who mentioned “voodoo for partitioning under the hood.”)

EDIT: This user says that the 2GB swap partition is no longer created on initial setup? Need to hunt down the update where this is referenced: https://forums.truenas.com/t/how-to-create-a-swap-partiton/

EDIT2: Looks like maybe the 2GB swap partition was removed in 24.04.1: [Accepted] Create 2 GiB buffer space when adding a disk

EDIT3: “Remove creation/reporting/management of swap on TrueNAS (NAS-12887).”
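
If the swap partition really is gone, then I assume the manual route on a current SCALE install is just a single partition (leaving a buffer at the end, like the GUI does) plus the replace. A rough sketch with placeholder names and a type code I’m guessing at:

sgdisk -n1:0:-2G -t1:BF01 /dev/sdX    # one zfs_member partition, leaving ~2 GiB free at the end
partprobe /dev/sdX
zpool replace -s tank <old-device-or-guid> /dev/disk/by-partuuid/<new-partition>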

I’ve since reconstructed my array; lsblk shows no extra partitions for me either. The max(2 GiB, 1%) buffer is also in place:

user@truenas ~ ❯❯❯ lsblk -b

NAME        MAJ:MIN RM           SIZE RO TYPE MOUNTPOINTS
sda           8:0    0    21474836480  0 disk
├─sda1        8:1    0         524288  0 part
└─sda2        8:2    0    21458059264  0 part
sdb           8:16   0 14000519643136  0 disk
└─sdb1        8:17   0 13998372159488  0 part
sdc           8:32   0 14000519643136  0 disk
└─sdc1        8:33   0 13998372159488  0 part
sdd           8:48   0 14000519643136  0 disk
└─sdd1        8:49   0 13998372159488  0 part
...
nvme0n1     259:0    0   118410444800  0 disk
└─nvme0n1p1 259:1    0   117225553920  0 part