I expanded my 3x4TB raidz1 with an extra 4TB disk (/dev/sdd) using the command

sudo zpool attach Pool_raidz raidz1-0 sdd

and it resulted in the following output:
kauedg@truenas:/$ sudo zpool status Pool_raidz
  pool: Pool_raidz
 state: ONLINE
  scan: scrub repaired 0B in 12:22:32 with 0 errors on Sun Jun 29 12:22:34 2025
expand: expansion of raidz1-0 in progress since Wed Jul  2 13:40:35 2025
        154G / 14.1T copied at 286M/s, 1.07% done, 14:09:44 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        Pool_raidz                                ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            4c5866a3-0f46-11eb-adda-448a5bbaa63d  ONLINE       0     0     0
            4e4b5bf5-0f46-11eb-adda-448a5bbaa63d  ONLINE       0     0     0
            4e5abd39-0f46-11eb-adda-448a5bbaa63d  ONLINE       0     0     0
            sdd                                   ONLINE       0     0     0
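In hindsight, I suspect I should have attached the disk by a stable name so the pool wouldn't record a bare sdX path. Something like the following is what I have in mind (the by-id name is a made-up placeholder, not my actual disk):

# Attach via a persistent udev symlink instead of /dev/sdd;
# ata-EXAMPLE_MODEL_SERIAL is a hypothetical name:
sudo zpool attach Pool_raidz raidz1-0 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL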
The devices' blkid output is:
kauedg@truenas:/$ sudo blkid /dev/sd*
/dev/sda: PTUUID="9e1676aa-bebd-4e3e-b679-6c3f95a3129a" PTTYPE="gpt"
/dev/sda1: PARTUUID="a469481f-b173-449d-bd21-78fa2fb1afbe"
/dev/sda2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="EBCE-2E5F" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b862614c-6bf3-4eec-8f7d-5a54f8951f10"
/dev/sda3: LABEL="boot-pool" UUID="4651888855667083951" UUID_SUB="5893104428331296677" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="d0c1de3d-efb3-4e08-8252-6f723e4c8844"
/dev/sdb: PTUUID="86103395-55c7-4fa6-ac28-80c7727cd02a" PTTYPE="gpt"
/dev/sdb1: PARTUUID="c4122ef1-e716-428d-82cd-ff39835bee8f"
/dev/sdb2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="EBE9-3F71" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="0c97b71b-f243-4329-9762-a87aeec34c31"
/dev/sdb3: LABEL="boot-pool" UUID="4651888855667083951" UUID_SUB="9572762783737562134" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="96985d0d-e6f0-4d9d-b92d-1c129a0361e6"
/dev/sdc: PTUUID="4e1ea49f-0f46-11eb-adda-448a5bbaa63d" PTTYPE="gpt"
/dev/sdc1: PARTUUID="4e395883-0f46-11eb-adda-448a5bbaa63d"
/dev/sdc2: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="14840563665585257997" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="4e5abd39-0f46-11eb-adda-448a5bbaa63d"
/dev/sdd: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="5711433251881008015" BLOCK_SIZE="4096" TYPE="zfs_member"
/dev/sde: PTUUID="4e171a94-0f46-11eb-adda-448a5bbaa63d" PTTYPE="gpt"
/dev/sde1: PARTUUID="4e2b3c2f-0f46-11eb-adda-448a5bbaa63d"
/dev/sde2: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="3505380738761381403" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="4e4b5bf5-0f46-11eb-adda-448a5bbaa63d"
/dev/sdf: PTUUID="4c1e08b2-0f46-11eb-adda-448a5bbaa63d" PTTYPE="gpt"
/dev/sdf1: PARTUUID="4c3f2c57-0f46-11eb-adda-448a5bbaa63d"
/dev/sdf2: LABEL="Pool_raidz" UUID="14721573430902728666" UUID_SUB="14013723510213771531" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="4c5866a3-0f46-11eb-adda-448a5bbaa63d"
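Since the new disk is unpartitioned, the only stable handles I can find for it are the whole-disk udev symlinks and the hardware identifiers; this is how I'm checking (commands only, output omitted):

# udev creates by-id symlinks even for unpartitioned disks:
ls -l /dev/disk/by-id/ | grep -w sdd
# Serial and WWN are persistent identifiers too; PARTUUID will be empty here:
lsblk -o NAME,SERIAL,WWN,PARTUUID /dev/sdd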
The pool's disks are attached to an HBA in passthrough mode. My problem is that ESXi sometimes shuffles the disks' device names (/dev/sdX), and I'm afraid that after a reboot TrueNAS won't be able to find the /dev/sdd device.
- The pool was created on a TrueNAS 13 system (the pool has already been upgraded), and the 3 original pool disks have 2 partitions each, like this:
Device       Start         End     Sectors  Size Type
/dev/sde1      128     4194431     4194304    2G FreeBSD swap
/dev/sde2  4194432 11721045127 11716850696  5.5T FreeBSD ZFS
but the newly attached disk has no partitions, so I can't reference /dev/sdd by a PARTUUID attribute like the other disks. Is it possible to change every device reference in raidz1-0 to a stable UUID attribute? Or, alternatively, can I change only the “sdd” reference to the disk's UUID?
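The generic fix I've seen suggested is exporting the pool and re-importing it while scanning a directory of stable device names, so that ZFS records those paths instead of sdX. A rough sketch of what I mean (untested on my system, and I'm aware TrueNAS normally manages imports through its middleware rather than the raw CLI):

# Re-import the pool scanning persistent udev symlinks,
# so the vdev paths are recorded as by-id names instead of sdX:
sudo zpool export Pool_raidz
sudo zpool import -d /dev/disk/by-id Pool_raidz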
I've tried a re-import along those lines via the CLI, but I get an error, and there's nothing in /mnt/Pool_raidz except some basic user dirs.
$ sudo zpool import -d /dev/disk/by-partuuid/ Pool_raidz
cannot mount '/Pool_raidz': failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets
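My guess on the mount error is that the datasets are trying to mount at the root filesystem instead of under /mnt (TrueNAS normally imports pools with an altroot of /mnt, and the root filesystem is read-only on SCALE, which would match the error). These are the checks I intend to run next, shown without output:

# Where do the datasets think their mountpoints are?
zfs get -r -o name,value mountpoint Pool_raidz
# Was the pool imported with the /mnt altroot TrueNAS normally uses?
zpool get altroot Pool_raidz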
System version:
$ uname -a
Linux truenas 6.12.15-production+truenas #1 SMP PREEMPT_DYNAMIC Mon May 26 13:44:31 UTC 2025 x86_64 GNU/Linux
$ cat /etc/version
25.04.1