Disk Names in zpool status (lack of /etc/zfs/vdev_id.conf support)?

My disk names look like this:

  pool: fast
 state: ONLINE
  scan: resilvered 1.79M in 00:00:00 with 0 errors on Sun May 11 22:41:04 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        fast                                      ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            1edb3237-5414-4b26-9332-5a74b9d2d13c  ONLINE       0     0     0
            8e8826fa-44a5-4814-956a-ee01401226f6  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            a48011cc-aca2-401f-8294-37bc8e9e9d6f  ONLINE       0     0     0
            2678bfb0-e91b-4510-9bc9-a703fedd48c6  ONLINE       0     0     0

errors: No known data errors

  pool: rust
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        rust                                      ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            1b3cd88f-4a12-4ae6-8b04-05318c8519cf  ONLINE       0     0     0
            8f4ce5b1-4ca5-4eab-b3b5-9b800cf782ab  ONLINE       0     0     0
            eba7bd0f-4b75-4d42-84fb-eb75eee16a88  ONLINE       0     0     0
            fc870b89-2592-4066-96c6-5e4897f1a168  ONLINE       0     0     0
            12404dd4-4d87-4038-838c-c60796116136  ONLINE       0     0     0
            931437b1-01d1-4ea6-aea0-9993b465fa36  ONLINE       0     0     0
        cache
          d4cb4922-db0e-4298-8f87-5a5e0eef9d45    ONLINE       0     0     0

I would prefer them to reflect the drives (to make it easier when I have an issue and am running around with my hair on fire).

Yes, I know I can export the pool and re-import it by-id; the issue with that is there are multiple symlinks per drive in /dev/disk/by-id and it sometimes picks silly, non-useful ones.

I went down the /dev/disk/by-vdev route, with an /etc/zfs/vdev_id.conf,

but it seems that TrueNAS doesn't ship the required tools or create the required symlinks.
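
For reference, this is roughly what I was hoping to end up with - a minimal sketch with made-up aliases and placeholder device IDs, assuming the vdev_id udev helper and its 69-vdev.rules were present to turn the config into /dev/disk/by-vdev/* symlinks:

# /etc/zfs/vdev_id.conf (example only; aliases and by-id paths are placeholders)
alias rust-bay1  /dev/disk/by-id/wwn-0x5000c500xxxxxxx1
alias rust-bay2  /dev/disk/by-id/wwn-0x5000c500xxxxxxx2
alias fast-nvme1 /dev/disk/by-id/nvme-Seagate_ZP4000GM30063_XXXXXXXX

With the symlinks in place the pools could then be re-imported against the friendly names, e.g. zpool export fast && zpool import -d /dev/disk/by-vdev fast.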

Is there a good reason why TrueNAS doesn't support this?
Is there a good alternative workaround?

Try sudo zpool status -LP (-L resolves symlinks to the real device nodes, -P prints full paths).

And yes, there is a (good) reason why TrueNAS uses partuuids when defining vDev members.

Cool, I would love to learn what those are.

Well, I did that before I posted; it's no better, as these names are dynamic and can change on each boot on Linux. This is why a hard mapping (using the file) between uuid/guid and a friendly name is preferred and recommended by the upstream ZFS folks.

I will see if there is already a feature request, or add one.

  pool: fast
 state: ONLINE
  scan: resilvered 1.79M in 00:00:00 with 0 errors on Sun May 11 22:41:04 2025
config:

        NAME                 STATE     READ WRITE CKSUM
        fast                 ONLINE       0     0     0
          mirror-0           ONLINE       0     0     0
            /dev/sdh1        ONLINE       0     0     0
            /dev/sdi1        ONLINE       0     0     0
          mirror-1           ONLINE       0     0     0
            /dev/nvme10n1p1  ONLINE       0     0     0
            /dev/nvme8n1p1   ONLINE       0     0     0

errors: No known data errors

  pool: rust
 state: ONLINE
config:

        NAME              STATE     READ WRITE CKSUM
        rust              ONLINE       0     0     0
          raidz2-0        ONLINE       0     0     0
            /dev/sdb1     ONLINE       0     0     0
            /dev/sdc1     ONLINE       0     0     0
            /dev/sdd1     ONLINE       0     0     0
            /dev/sde1     ONLINE       0     0     0
            /dev/sdf1     ONLINE       0     0     0
            /dev/sdj1     ONLINE       0     0     0
        cache
          /dev/nvme2n1p1  ONLINE       0     0     0

Looks like you answered your own question… :tada:

Not really, as neither of these has been answered:

Is there a good reason why TrueNAS doesn't support this?
Is there a good alternative workaround?

(except maybe the answer to the last one is ‘no’?)

Instead of trying to change the default behaviour of the system, why not use something like this to make better sense of the system when it's disk-identification time:

lsblk -o NAME,model,tran,size,serial,label,partuuid,path | sed -e "`ls -1cd /sys/class/enclosure/*/*/device/block/*|sed "s+.*enclosure/\(.*\)/device/block/\(.*\)+s-\2\\$-\2 \1-+"`" | sort -t ':' -k 3
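
Spelled out, the same idea is roughly this (a rough, untested paraphrase of the one-liner, not a drop-in replacement - it just tags each whole-disk line of the lsblk output with its enclosure/slot path):

#!/bin/bash
# append "<enclosure>/<slot>" to the matching disk line of the lsblk output
lsblk -o NAME,MODEL,TRAN,SIZE,SERIAL,LABEL,PARTUUID,PATH > /tmp/disks.txt
for p in /sys/class/enclosure/*/*/device/block/*; do
    [ -e "$p" ] || continue                          # no enclosures present
    dev=${p##*/}                                     # e.g. sda
    slot=${p#*/enclosure/}; slot=${slot%%/device/*}  # e.g. 6:0:23:0/Slot01
    sed -i "s|^\(${dev} .*\)\$|\1  ${slot}|" /tmp/disks.txt
done
cat /tmp/disks.txt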

Thanks for the useful suggestion; that gives me some paths to explore!

Because I would have no idea how to write an awk statement, or do any coding… and I had no idea the enclosure path existed.

But you made a nice suggestion, and with some ChatGPT-assisted coding I now have a script that outputs this (blank ZFS details mean the disk is not yet used in ZFS) - so many thanks!

NAME       MODEL                        TRAN   SIZE     SERIAL               PATH               POOL       TYPE       GUID                                
sdb        ST24000NT002-3N1101          sata   21.8T    xxxxxxxx             /dev/sdb           rust       'disk'     15092755422561716895                
sdc        ST24000NT002-3N1101          sata   21.8T    xxxxxxxx             /dev/sdc           rust       'disk'     4022156489591843608                 
sdd        ST24000NT002-3N1101          sata   21.8T    xxxxxxxx             /dev/sdd           rust       'disk'     16272000233204181760                
sde        ST24000NT002-3N1101          sata   21.8T    xxxxxxxx             /dev/sde           rust       'disk'     2296675024925190545                 
sdf        ST24000NT002-3N1101          sata   21.8T    xxxxxxxx             /dev/sdf           rust       'disk'     1043860037485403250                 
sdg        ST4000VN000-1H4168           sata   3.6T     xxxxxxxx             /dev/sdg                                                                     
sdh        ST4000VN000-1H4168           sata   3.6T     xxxxxxxx             /dev/sdh                          
sdi        ST4000VN000-1H4168           sata   3.6T     xxxxxxxx             /dev/sdi                          
sdj        ST24000NT002-3N1101          sata   21.8T    xxxxxxxx             /dev/sdj           rust       'disk'     10764108644732859737                
nvme4n1    INTEL SSDPE21D015TA          nvme   1.4T     xxxxxxxx             /dev/nvme4n1                                                                 
nvme2n1    ADATA SX8200PNP              nvme   1.9T     xxxxxxxx             /dev/nvme2n1                                                                 
nvme6n1    INTEL SSDPE21D015TA          nvme   1.4T     xxxxxxxx             /dev/nvme6n1                                                                 
nvme0n1    KINGSTON SEDC2000BM8960G     nvme   894.3G   xxxxxxxx             /dev/nvme0n1                                                                 
nvme5n1    INTEL SSDPE21D960GA          nvme   894.3G   xxxxxxxx             /dev/nvme5n1                                                                 
nvme7n1    Seagate ZP4000GM30063        nvme   3.6T     xxxxxxxx             /dev/nvme7n1                                                                 
nvme3n1    INTEL SSDPE21D960GA          nvme   894.3G   xxxxxxxx             /dev/nvme3n1                                                                 
nvme10n1   Seagate ZP4000GM30063        nvme   3.6T     xxxxxxxx             /dev/nvme10n1      fast       'disk'     11559379796082310176              
nvme9n1    Seagate ZP4000GM30063        nvme   3.6T     xxxxxxxx             /dev/nvme9n1                                                                 
nvme1n1    KINGSTON SEDC2000BM8960G     nvme   894.3G   xxxxxxxx             /dev/nvme1n1                                                                 
nvme8n1    Seagate ZP4000GM30063        nvme   3.6T     xxxxxxxx             /dev/nvme8n1       fast       'disk'    10845657097477000304                  
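
For anyone curious, the gist of it is along these lines - a stripped-down sketch (my actual script adds the TYPE and GUID columns); it joins per-disk lsblk info with zpool status by PARTUUID, which works because TrueNAS names vdev members by partuuid:

#!/bin/bash
# join per-disk lsblk info with zpool status by PARTUUID so each
# physical disk shows the pool it belongs to (blank = not used in ZFS)

printf '%-10s %-28s %-6s %-8s %-20s %-18s %s\n' \
    NAME MODEL TRAN SIZE SERIAL PATH POOL

zstatus=$(zpool status 2>/dev/null)    # query once, reuse for every disk

for dev in $(lsblk -dn -o PATH); do
    name=$(lsblk -dn -o NAME "$dev")
    model=$(lsblk -dn -o MODEL "$dev")
    tran=$(lsblk -dn -o TRAN "$dev")
    size=$(lsblk -dn -o SIZE "$dev")
    serial=$(lsblk -dn -o SERIAL "$dev")

    pool=""
    # check each partition's PARTUUID against zpool status; the nearest
    # preceding "pool:" line names the owning pool
    for puuid in $(lsblk -nr -o PARTUUID "$dev"); do
        match=$(awk -v id="$puuid" '/pool:/ {p = $2} $1 == id {print p}' <<<"$zstatus")
        if [ -n "$match" ]; then pool=$match; break; fi
    done

    printf '%-10s %-28s %-6s %-8s %-20s %-18s %s\n' \
        "$name" "$model" "$tran" "$size" "$serial" "$dev" "$pool"
done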

As to why I would ask for a standard ZFS feature to be on a system using standard ZFS - that would seem an obvious, simple ask.