SSDs show under Disks but not available for pool creation

Hi all,

I’m trying to add two brand-new SSD drives to my TrueNAS Core 13.0 system. The drives show up in the web UI under Storage / Disks, but when I try to create a pool with them, they aren’t listed as available and I get an error message.

  • The SSDs are Verbatim Vi550 S3 128 GB.
  • The disks are connected with SATA cables straight to the motherboard’s SATA ports.
  • One of the SATA cables and motherboard ports was used by another disk (a WD Gold) without problems just before.
  • Running a long SMART test on both disks from the UI completes successfully.

The error I’m getting:

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 139, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1240, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/disk_/availability.py", line 21, in get_unused
    reserved = await self.middleware.call('disk.get_reserved')
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1283, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1240, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/disk_/availability.py", line 44, in get_reserved
    reserved += [i async for i in await self.middleware.call('pool.get_disks')]
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/disk_/availability.py", line 44, in <listcomp>
    reserved += [i async for i in await self.middleware.call('pool.get_disks')]
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1059, in get_disks
    disk_path = os.path.join('/dev', d['devname'])
KeyError: 'devname'
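Reading the traceback, the last frame builds a /dev path from each pool member’s devname key, so some pool entry apparently has no such key. A minimal sketch of that failure mode (hypothetical data, not the actual middleware code):

```python
import os

# Hypothetical pool member list as the middleware might see it:
# the second entry has no 'devname' key (e.g. an unavailable device).
disks = [
    {"devname": "ada0"},  # healthy, attached disk
    {"type": "DISK"},     # member with no 'devname' key
]

def get_disk_paths(disks):
    # Mirrors the failing line: os.path.join('/dev', d['devname'])
    return [os.path.join("/dev", d["devname"]) for d in disks]

try:
    get_disk_paths(disks)
except KeyError as e:
    print(f"KeyError: {e}")  # same final line as in the traceback
```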

All clues of what I can try next are much appreciated!

Interesting. Maybe there’s a problem with a “residual” device name left over from drives previously connected to those ports?

What is the output of:

zpool status -v

And:

camcontrol devlist

You might have to prefix the second command with sudo if you’re not logged in as root.

Thanks for trying to help! I’m starting to wonder if there’s a hardware incompatibility with the SSDs somehow.

root@freenas:~ # zpool status -v
  pool: freenas-boot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:10:43 with 0 errors on Sat Apr 20 03:55:43 2024
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            da0p2     ONLINE       0     0     0
            da1p2     ONLINE       0     0     0

errors: No known data errors

and

root@freenas:~ # camcontrol devlist
<AHCI SGPIO Enclosure 2.00 0001>   at scbus4 target 0 lun 0 (ses0,pass0)
<WDC WD60EFRX-68MYMN1 82.00A82>    at scbus5 target 0 lun 0 (ada0,pass1)
<Verbatim Vi550 S3 V1027A0>        at scbus6 target 0 lun 0 (ada1,pass2)
<WDC WD60EFRX-68MYMN1 82.00A82>    at scbus7 target 0 lun 0 (ada2,pass3)
<Verbatim Vi550 S3 V1027A0>        at scbus8 target 0 lun 0 (ada3,pass4)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus9 target 0 lun 0 (ses1,pass5)
<SanDisk Ultra Fit 1.00>           at scbus10 target 0 lun 0 (da0,pass6)
<USB SanDisk 3.2Gen1 1.00>         at scbus11 target 0 lun 0 (da1,pass7)

I’m in the process of moving from USB flash drives to SSDs for my boot pool (since this is recommended, and also because I want a larger boot pool in order to fit the System Dataset there). I’m also migrating from a smaller pair of disks with GELI encryption to a larger pair with native ZFS encryption. Since I’ve been rebooting a lot while trying to get the SSDs working, I haven’t decrypted the GELI storage pool, and I think that’s why it doesn’t show among the pools.

What action triggers this error message to pop up?

Simply visiting the pool creation page?

The first error I posted appears every time I do the following:

  1. Go to Storage / Pools.
  2. Click Add.
  3. Click Create Pool.
  4. Error pops up.

There’s also another error when I try to use a disk to replace a boot pool disk:

  1. Go to System / Boot.
  2. Actions → Boot Pool Status.
  3. Click … for one of the disks → Replace.
  4. A page appears with a Member Disk drop-down, but the drop-down is empty; nothing can be selected.
  5. Click Submit.
  6. The following error pops up:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 139, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1240, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 980, in nf
    args, kwargs = clean_and_validate_args(args, kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 938, in clean_and_validate_args
    value = attr.clean(args[args_index + i])
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 170, in clean
    value = super(Str, self).clean(value)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 42, in clean
    value = super().clean(value)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 79, in clean
    raise Error(self.name, 'null not allowed')
middlewared.schema.Error: [dev] null not allowed
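If I read the traceback right, the empty Member Disk drop-down ends up submitting a null device name, which the schema layer rejects. A rough sketch of that validation (my simplification, not the real middlewared code):

```python
# Hypothetical sketch of a required string attribute rejecting
# a None value, producing "[dev] null not allowed".

class SchemaError(Exception):
    pass

class Str:
    def __init__(self, name, null=False):
        self.name = name
        self.null = null  # whether None is an accepted value

    def clean(self, value):
        if value is None and not self.null:
            raise SchemaError(f"[{self.name}] null not allowed")
        return value

attr = Str("dev")
try:
    attr.clean(None)  # empty drop-down -> null device name
except SchemaError as e:
    print(e)  # [dev] null not allowed
```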

Very likely a bug.

Not sure if it’s a “known” bug with an obvious fix that someone might know off the top of their head, or if it requires a bug report ticket.

@kris @Captain_Morgan Does this look familiar?


The KeyError: 'devname' part of the error message seems peculiar, as if there’s a conflict with the device names?


I found the same error in a bug report (though there it happened when someone tried to remove a pool): [NAS-122618] - iXsystems TrueNAS Jira

The fix seemed related to encryption, so I tried decrypting my GELI storage pool, and now both of the errors above (create pool / replace boot pool disk) are gone.

It’s very possible that this bug is fixed in 13.1 (where the bug fix is headed), but it could also be a bug that only shows up with GELI, which I know should be avoided anyway (and which I’m trying to get away from :slight_smile: ).
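For what it’s worth, I’d guess the fix makes the disk enumeration tolerant of pool members that expose no device name (like a locked GELI pool). A defensive variant of the failing line might look like this (my guess, not the actual patch):

```python
import os

def get_disk_paths(disks):
    # Skip members that expose no device name (e.g. a locked
    # GELI pool member) instead of raising KeyError.
    return [os.path.join("/dev", d["devname"]) for d in disks if "devname" in d]

disks = [{"devname": "ada1"}, {}]  # second entry: locked member
print(get_disk_paths(disks))  # ['/dev/ada1']
```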

Thanks for the pointer about bugs. It got me looking in the right direction!
