Why do some of my drives show as sdX while others show as scsi-XXX?

Looking at zpool status, I see that some drives are shown as sdX and some are shown as scsi-XXX-part2.

All drives are SAS drives of the same manufacturer / model … why would they show up differently?

root@zfs[~]# zpool status
  pool: freenas-boot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:17 with 0 errors on Sat Jan 25 03:46:18 2025
config:

	NAME                                                 STATE     READ WRITE CKSUM
	freenas-boot                                         ONLINE       0     0     0
	  mirror-0                                           ONLINE       0     0     0
	    ata-Micron_5100_MTFDDAV240TCB_MSA224803R7-part2  ONLINE       0     0     0
	    sda2                                             ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 06:47:02 with 0 errors on Sun Jan 19 06:47:05 2025
config:

	NAME                              STATE     READ WRITE CKSUM
	tank                              ONLINE       0     0     0
	  raidz2-0                        ONLINE       0     0     0
	    sdc2                          ONLINE       0     0     0
	    scsi-35000cca0992d9bec-part2  ONLINE       0     0     0
	    sdg2                          ONLINE       0     0     0
	    scsi-35000cca099325cf8-part2  ONLINE       0     0     0
	    sdf2                          ONLINE       0     0     0
	    sdd2                          ONLINE       0     0     0
	    scsi-35000cca099325d30-part2  ONLINE       0     0     0
	    sdm2                          ONLINE       0     0     0
	    sdh2                          ONLINE       0     0     0
	    sde2                          ONLINE       0     0     0
	    scsi-35000cca099325c80-part2  ONLINE       0     0     0
	    sdn2                          ONLINE       0     0     0
	logs
	  nvme0n1p1                       ONLINE       0     0     0

errors: No known data errors

When were these originally added?

Were there ever any replacements in the RAIDZ2 vdev?

Did you ever replace/add drives using the command-line?

These were in a FreeNAS 13 system that has since been upgraded to TrueNAS. I may have replaced just 1 drive, but it could well have been 4 of them.

Somehow I cannot edit my response, but the “System” is the same … I just upgraded from FreeNAS 13 to TrueNAS.

The other thing that is weird is that this was a RAIDZ2 setup … the way I am interpreting the above is that this is a full-on pool with no hot-standby (spare) redundancy … is my interpretation correct?

Your pool has no hot spare, but it has two disks’ worth of redundancy; that’s what RAIDZ2 means.
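
Not suggesting you change anything, but for completeness: if you ever did want a hot spare on top of the RAIDZ2 parity, it is a one-liner from the shell (on TrueNAS you would normally do this through the GUI instead). The by-id path below is a placeholder, not one of your actual drives:

    # hypothetical: attach an unused disk to "tank" as a hot spare
    zpool add tank spare /dev/disk/by-id/scsi-35000cca0XXXXXXXX
    # it should then appear under a "spares" section
    zpool status tank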


I think the mixed labels are a result of history: various versions of OpenZFS and FreeNAS/TrueNAS CORE/SCALE have created labels for new pools and drive replacements differently.

I suspect that there may be a way to get ZFS to relabel the drives using partuuids (as is now standard), but I don’t know what it would be.
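
In the meantime you can confirm, read-only, that the sdX members and the scsi-…-part2 names are just different aliases for the same partitions. A quick sketch, assuming you are on SCALE / Linux (which the sdX and nvme names suggest):

    # show persistent identifiers for every block device
    lsblk -o NAME,MODEL,SERIAL,WWN,PARTUUID

    # or resolve one pool member (sdc2 here, as in your output) to its symlinks
    ls -l /dev/disk/by-id/ /dev/disk/by-partuuid/ | grep 'sdc2$'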

Below is a zpool status from my backup system, which is still on FreeNAS 13 … it shows consistent names, albeit in GUID (gptid) format.

	NAME                                            STATE     READ WRITE CKSUM
	tank                                            ONLINE       0     0     0
	  raidz2-0                                      ONLINE       0     0     0
	    gptid/5b49758a-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/5fc400e2-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/64ecbbe5-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/694c91fe-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/6dc7aeb9-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/72589755-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/76c69f8f-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/7b4db4ee-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/7fda7dc8-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/8451244a-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/8a3fddd5-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
	    gptid/8ec113ca-8bd2-11e9-82c2-000c297ea340  ONLINE       0     0     0
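
For reference, on that FreeBSD-based box the gptid labels can be mapped back to physical disks if needed, roughly like this:

    # FreeBSD / FreeNAS 13: list gptid labels and the da/ada providers behind them
    glabel status
    # cross-reference device models and serial numbers if needed
    camcontrol devlist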

It might be possible to do this, though it requires stopping any shares/services that use the pool.

  1. Stop all services and shares that require or use the pool
  2. Export the pool from the GUI[1]
  3. Import the pool using the command-line (see the sketch after these steps):
    zpool import -R /mnt -d /dev/disk/by-partuuid tank
  4. Export the pool with the GUI again for good measure[1]
  5. Import the pool normally, with the GUI
  6. Check the results
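
For step 3 (and the check in step 6, after the GUI round-trip in steps 4–5), the shell side would look roughly like this; only the import line changes anything, the rest is verification, and it assumes the pool was already exported via the GUI in step 2:

    # step 3: re-import, resolving vdev members via the by-partuuid symlinks
    zpool import -R /mnt -d /dev/disk/by-partuuid tank

    # step 6: confirm the member names changed
    zpool status tank
    zpool status -P tank   # -P prints full device paths for double-checking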

This might not be worth it to you.


  1. Do not choose the option to “Delete configs and shares that use this pool”. Uncheck it.


Are you trying to give @Davvo a heart attack?


heh … is there an inside joke on that one? 🙂


Maybe.
