Change raidz1-0 disk reference from device name to UUID

I expanded my 3x4 TB raidz1 with an extra 4 TB disk (/dev/sdd), using the command

sudo zpool attach Pool_raidz raidz1-0 sdd

and it resulted in the following status output:

kauedg@truenas:/$ sudo zpool status Pool_raidz
  pool: Pool_raidz
 state: ONLINE
  scan: scrub repaired 0B in 12:22:32 with 0 errors on Sun Jun 29 12:22:34 2025
expand: expansion of raidz1-0 in progress since Wed Jul  2 13:40:35 2025
        154G / 14.1T copied at 286M/s, 1.07% done, 14:09:44 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        Pool_raidz                                ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            4c5866a3-0f46-11eb-adda-448a5bbaa63d  ONLINE       0     0     0
            4e4b5bf5-0f46-11eb-adda-448a5bbaa63d  ONLINE       0     0     0
            4e5abd39-0f46-11eb-adda-448a5bbaa63d  ONLINE       0     0     0
            sdd                                   ONLINE       0     0     0

The devices’ blkid output is:

kauedg@truenas:/$ sudo blkid /dev/sd*
/dev/sda: PTUUID="9e1676aa-bebd-4e3e-b679-6c3f95a3129a" PTTYPE="gpt"
/dev/sda1: PARTUUID="a469481f-b173-449d-bd21-78fa2fb1afbe"
/dev/sda2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="EBCE-2E5F" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b862614c-6bf3-4eec-8f7d-5a54f8951f10"
/dev/sda3: LABEL="boot-pool" UUID="4651888855667083951" UUID_SUB="5893104428331296677" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="d0c1de3d-efb3-4e08-8252-6f723e4c8844"


/dev/sdb: PTUUID="86103395-55c7-4fa6-ac28-80c7727cd02a" PTTYPE="gpt"
/dev/sdb1: PARTUUID="c4122ef1-e716-428d-82cd-ff39835bee8f"
/dev/sdb2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="EBE9-3F71" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="0c97b71b-f243-4329-9762-a87aeec34c31"
/dev/sdb3: LABEL="boot-pool" UUID="4651888855667083951" UUID_SUB="9572762783737562134" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="96985d0d-e6f0-4d9d-b92d-1c129a0361e6"


/dev/sdc: PTUUID="4e1ea49f-0f46-11eb-adda-448a5bbaa63d" PTTYPE="gpt"
/dev/sdc1: PARTUUID="4e395883-0f46-11eb-adda-448a5bbaa63d"
/dev/sdc2: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="14840563665585257997" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="4e5abd39-0f46-11eb-adda-448a5bbaa63d"


/dev/sdd: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="5711433251881008015" BLOCK_SIZE="4096" TYPE="zfs_member"


/dev/sde: PTUUID="4e171a94-0f46-11eb-adda-448a5bbaa63d" PTTYPE="gpt"
/dev/sde1: PARTUUID="4e2b3c2f-0f46-11eb-adda-448a5bbaa63d"
/dev/sde2: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="3505380738761381403" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="4e4b5bf5-0f46-11eb-adda-448a5bbaa63d"


/dev/sdf: PTUUID="4c1e08b2-0f46-11eb-adda-448a5bbaa63d" PTTYPE="gpt"
/dev/sdf1: PARTUUID="4c3f2c57-0f46-11eb-adda-448a5bbaa63d"
/dev/sdf2: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="14013723510213771531" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="4c5866a3-0f46-11eb-adda-448a5bbaa63d"

The pool’s disks are attached to an HBA in passthrough mode. My problem is that ESXi sometimes shuffles the disks’ device names (/dev/sdX), and I’m afraid that if I reboot, TrueNAS won’t be able to find the /dev/sdd device.

  1. The pool was created on a TrueNAS 13 system (the pool has already been upgraded) and the 3 original pool disks have 2 partitions, like this:
Device       Start         End     Sectors  Size Type
/dev/sde1      128     4194431     4194304    2G FreeBSD swap
/dev/sde2  4194432 11721045127 11716850696  5.5T FreeBSD ZFS

but the newly attached disk has no partitions, so I can’t reference /dev/sdd by its PARTUUID attribute like the other disks. Is it possible to change every device reference in raidz1-0 to its UUID attribute? Or, alternatively, can I change only the “sdd” reference to the disk’s UUID?
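
From what I understand, the names shown in zpool status are just whatever paths were used at import time, so the usual way to remap them is to export the pool and re-import it while pointing -d at a directory of stable links. Since /dev/sdd has no partitions it will never get a by-partuuid link, but it should have a by-id link, so I imagine something like this might work (untested sketch, device directories assumed):

sudo zpool export Pool_raidz
sudo zpool import -d /dev/disk/by-id Pool_raidz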

I’ve tried importing the pool via the CLI, but I get an error, and there’s nothing in /mnt/Pool_raidz except some basic user dirs.

$ sudo zpool import -d /dev/disk/by-partuuid/ Pool_raidz
cannot mount '/Pool_raidz': failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets
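
If I’m reading that error right, the import itself works but the mount fails because the dataset’s mountpoint points at /Pool_raidz on the read-only root filesystem, while the middleware normally imports pools under an altroot of /mnt. A retry with an altroot would presumably look like this (sketch only, not verified on my system yet):

sudo zpool export Pool_raidz
sudo zpool import -d /dev/disk/by-partuuid -R /mnt Pool_raidz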

System version

$ uname -a
Linux truenas 6.12.15-production+truenas #1 SMP PREEMPT_DYNAMIC Mon May 26 13:44:31 UTC 2025 x86_64 GNU/Linux

$ cat /etc/version 
25.04.1

Try reading through this other thread. It was similar to what you describe.
You should use the TrueNAS GUI, if possible, to do tasks. Using the CLI can get things out of sync with the GUI and middleware. How does your GUI look right now? Does it show the correct information for these disks and the pool? Just curious.

Got it all done, but it was interesting. Exported via the GUI, then tried to import through the GUI, but there was no pool to import. Went to the CLI and did an import; that worked, showing UUIDs. Went back into the GUI, but no pool loaded. Back to the CLI for another export, back into the GUI, and then the pool showed as available for import. Went through this on both systems, but now I’m all PARTUUID. Thanks to everyone again!!

Thanks, that post did not come up in my searches. Right now I’m trying to export the pool, hoping it will behave like rgranger’s pool did in the other thread. Unfortunately, due to my meddling with the CLI, every time the system is restarted the pool is re-imported. I’ve tried combinations of CLI/GUI import/export, but it always comes back after boot, when it shouldn’t. As soon as I figure it out I’ll reply back.

Kids, don’t mess with pools and CLI, trust the GUI.

Removed every SMB shared folder, every app, and even the users’ home dirs that were set to the pool, and I was finally able to export it. But regarding the device names, nothing changed after importing/exporting many times, rebooting, and so on.

That said, the VM is booting and its disks aren’t getting shuffled anymore, so I’ll just leave it how it is.

It’s still a mystery why /dev/sdd has no partitions at all.

What version of SCALE? The partitions that used to be created when adding a drive went away with the removal of swap. A small buffer area is now created when adding a drive, to cover the case of a replacement drive being slightly smaller. This was seen on both SSDs and HDDs.

I’ve dealt with this in TrueNAS after swapping a disk. ZFS tracks devices by GUID, not by name like ada0, so changing device names won’t affect pool integrity. If you’re seeing old references, zpool status usually shows the mapping. To clear legacy labels or remap, use zpool labelclear cautiously—but always back up first. GUI will update after a scrub or reboot.
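
If it helps, zpool status can print that mapping directly; something along these lines (flags per the zpool-status man page, output will obviously differ) should show the GUID and full path behind the sdd entry:

sudo zpool status -g Pool_raidz    # -g prints vdev GUIDs instead of device names
sudo zpool status -P Pool_raidz    # -P prints full device paths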

It’s at the bottom of my first post: 25.04.1

I’m leaving it as it is, for now. To run labelclear I’d have to remove the disk from the pool, run the command and add it back. It would take a long time and I don’t have enough free space. When I reboot, the pool is imported without problems.

Maybe that “sdd” shown in the status output is just that, a label, and under the hood TrueNAS is using the GUID or some other unique identifier.
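
One way I could probably check that (just a sketch; it assumes the whole disk still shows up as /dev/sdd right now) is to dump the ZFS label on the disk and look at the GUIDs it carries:

sudo zdb -l /dev/sdd | grep -E 'guid|path'    # the on-disk label keeps pool_guid/guid no matter what the sdX name is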

I feel your pain; I was sweating buckets when my pool didn’t show up in the import drop-down. Is your pool a straight raidz2, or is it multiple vdevs?

You must not add raw disk devices to a zpool with TrueNAS. You must create a GPT partition table and refer to the partition by UUID.

You can probably offline the disk (losing your redundancy), properly partition it, then perform a replace operation.
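
A rough sketch of that sequence, assuming /dev/sdd is still the raw-disk member and that you’re OK running degraded during the resilver (the sgdisk type code is the usual Solaris/ZFS one, and the PARTUUID at the end is a placeholder, not taken from your system):

sudo zpool offline Pool_raidz sdd
sudo sgdisk --zap-all /dev/sdd             # wipes the old whole-disk ZFS label and GPT
sudo sgdisk -n 1:0:0 -t 1:BF01 /dev/sdd    # one partition spanning the disk, Solaris/ZFS type
sudo blkid /dev/sdd1                       # note the new PARTUUID
sudo zpool replace Pool_raidz sdd /dev/disk/by-partuuid/<new-partuuid>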
