SAS9300-8i Detected in BIOS, but Disks Not Showing in TrueNAS GUI

Hi everyone,

I’m having some trouble getting my TrueNAS SCALE server (Electric Eel) to detect drives connected via a SAS9300-8i HBA (Avago/Broadcom). Here’s a breakdown of my setup and what I’ve tried so far:
System Setup:

  • Motherboard: ASUS Maximus IX Formula
  • HBA Card: SAS9300-8i (in IT Mode)
  • TrueNAS Version: SCALE (Electric Eel)
  • Connection Type: SAS HBA connected to a disk array using DAC cables
  • Drives: Western Digital (WDC) drives in a 12-bay Silverstone disk array.

Issue:

All the drives are detected in the BIOS and the Avago MPT Boot ROM successfully shows the connected drives.
However, when I log into TrueNAS, the drives are not visible in the Disks section of the web UI.
I checked the system logs and saw that the mpt3sas driver version 43.100.00.00 loaded successfully, but no additional errors or drive-related issues are showing up.

Any insight on getting this array added to an existing pool would be great.
Thanks!

It works for me: Disconnect pool → Reboot (You may have to restart several times) → Import pool.
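For reference, the raw ZFS equivalent of that disconnect/reboot/import cycle looks roughly like the sketch below (the pool name "tank" is just a placeholder). On TrueNAS the Storage UI export/import is the supported path since it goes through the middleware, so treat this as a shell-level illustration only:

    zpool export tank    # cleanly detach the pool from the OS ("tank" is a placeholder name)
    zpool import         # list pools the OS can see but hasn't imported
    zpool import tank    # bring the pool back in by name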

How is your disk array connected, out of curiosity?

Intel P4610 connected to a U.2-to-PCIe card (I forgot the model :face_in_clouds:).

The UI can get out of step with the OS, and the best way to see whether this is the case is to run some shell commands that bypass the UI layer.

Please run the following commands and copy and paste the results:

  • lsblk -bo NAME,MODEL,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
  • lspci
  • sas2flash -list
  • sas3flash -list

root@truenas[~]# lsblk -bo NAME,MODEL,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME MODEL PTTYPE TYPE START SIZE PARTTYPENAME PARTUUID
sda WDC WD121KFBX-68EF5N0 gpt disk 12000138625024
├─sda1 gpt part 2048 2147484160 Linux swap a41f2b6b-3a64-4918-a6ad-1d81d76582ae
└─sda2 gpt part 4198400 11997989027328 Solaris /usr & Apple ZFS d6410dff-1d88-4e01-8c22-96f3afd6c5a0
sdb WDC WD120EFBX-68B0EN0 gpt disk 12000138625024
├─sdb1 gpt part 2048 2147484160 Linux swap d9a4ca88-b8f3-4cee-a479-84e8c300f6a2
└─sdb2 gpt part 4198400 11997989027328 Solaris /usr & Apple ZFS 40930ae2-a0e3-4d09-b83b-0ba7dffa2867
sdc WDC WD101EFBX-68B0AN0 gpt disk 10000831348736
├─sdc1 gpt part 2048 2147484160 Linux swap 28393d9e-5b2e-46a6-a5e4-7715b0e523a6
└─sdc2 gpt part 4198400 9998680719872 Solaris /usr & Apple ZFS 9bd10937-5bc5-4ce0-8274-a395e90ea5fc
sdd WDC WD121KFBX-68EF5N0 gpt disk 12000138625024
├─sdd1 gpt part 2048 2147484160 Linux swap 3754b971-7203-4d71-b94a-df50c6d2e42a
└─sdd2 gpt part 4198400 11997989027328 Solaris /usr & Apple ZFS 43b4913e-593c-42c5-bc8c-453d9e8fa4e1
sde WDC WD101EFBX-68B0AN0 gpt disk 10000831348736
├─sde1 gpt part 2048 2147484160 Linux swap bea2136f-cde1-4c66-8d01-8f4eb6c187b4
└─sde2 gpt part 4198400 9998680719872 Solaris /usr & Apple ZFS d88ce92e-fda4-424d-a7db-1d4f1e79be89
sdf WDC WD121KFBX-68EF5N0 gpt disk 12000138625024
├─sdf1 gpt part 2048 2147484160 Linux swap 2b906965-46c8-4ea8-8cee-abc075e83478
└─sdf2 gpt part 4198400 11997989027328 Solaris /usr & Apple ZFS 29a1b875-996a-413f-b50a-2c890c1785e2
nvme0n1 WD Red SN700 1000GB gpt disk 1000204886016
├─nvme0n1p1 gpt part 4096 1048576 BIOS boot 3bfe703a-a643-48f7-9def-a71db87ead08
├─nvme0n1p2 gpt part 6144 536870912 EFI System 5c4a0662-e077-481e-8b54-48bd4a356dec
└─nvme0n1p3 gpt part 1054720 999664852480 Solaris /usr & Apple ZFS fdf9da84-2af4-4469-becb-6e837754cc06

root@truenas[~]# lspci
00:00.0 Host bridge: Intel Corporation Device a704 (rev 01)
00:01.0 PCI bridge: Intel Corporation Raptor Lake PCI Express 5.0 Graphics Port (PEG010) (rev 01)
00:06.0 PCI bridge: Intel Corporation Raptor Lake PCIe 4.0 Graphics Port (rev 01)
00:0a.0 Signal processing controller: Intel Corporation Raptor Lake Crashlog and Telemetry (rev 01)
00:0e.0 RAID bus controller: Intel Corporation Volume Management Device NVMe RAID Controller Intel Corporation
00:14.0 USB controller: Intel Corporation Alder Lake-S PCH USB 3.2 Gen 2x2 XHCI Controller (rev 11)
00:14.2 RAM memory: Intel Corporation Alder Lake-S PCH Shared SRAM (rev 11)
00:14.3 Network controller: Intel Corporation Alder Lake-S PCH CNVi WiFi (rev 11)
00:15.0 Serial bus controller: Intel Corporation Alder Lake-S PCH Serial IO I2C Controller #0 (rev 11)
00:15.1 Serial bus controller: Intel Corporation Alder Lake-S PCH Serial IO I2C Controller #1 (rev 11)
00:15.2 Serial bus controller: Intel Corporation Alder Lake-S PCH Serial IO I2C Controller #2 (rev 11)
00:16.0 Communication controller: Intel Corporation Alder Lake-S PCH HECI Controller #1 (rev 11)
00:17.0 SATA controller: Intel Corporation Alder Lake-S PCH SATA Controller [AHCI Mode] (rev 11)
00:1a.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #25 (rev 11)
00:1b.0 PCI bridge: Intel Corporation Device 7ac0 (rev 11)
00:1c.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #1 (rev 11)
00:1c.1 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #2 (rev 11)
00:1c.2 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port (rev 11)
00:1c.4 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #5 (rev 11)
00:1d.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #9 (rev 11)
00:1f.0 ISA bridge: Intel Corporation Z690 Chipset LPC/eSPI Controller (rev 11)
00:1f.3 Audio device: Intel Corporation Alder Lake-S HD Audio Controller (rev 11)
00:1f.4 SMBus: Intel Corporation Alder Lake-S PCH SMBus Controller (rev 11)
00:1f.5 Serial bus controller: Intel Corporation Alder Lake-S PCH SPI Controller (rev 11)
01:00.0 VGA compatible controller: NVIDIA Corporation GP104GL [Quadro P5000] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
02:00.0 Non-Volatile memory controller: Sandisk Corp WD Black SN750 / PC SN730 NVMe SSD
03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
06:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
07:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)

root@truenas[~]# sas2flash -list
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

    No LSI SAS adapters found! Limited Command Set Available!
    ERROR: Command Not allowed without an adapter!
    ERROR: Couldn't Create Command -list
    Exiting Program.

root@truenas[~]# sas3flash -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02)
Copyright 2008-2017 Avago Technologies. All rights reserved.

    No Avago SAS adapters found! Limited Command Set Available!
    ERROR: Command Not allowed without an adapter!
    ERROR: Couldn't Create Command -list
    Exiting Program.

Do you see your U.2 PCIe card in the lspci output?

If not, that is likely to be the problem: either there is a hardware incompatibility or TrueNAS does not have the driver you need for it.

I find it odd that your 3008-series card doesn’t show up when trying to list the firmware.
Is there any error related to the card in dmesg?

I would have expected something akin to this in your lspci:

01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)

Maybe the full output of sudo dmesg | grep mpt3sas will be helpful.
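If that grep only shows the "version … loaded" line and nothing about an adapter actually attaching, a few generic checks (a sketch, not TrueNAS-specific) can confirm whether the kernel sees the HBA on the PCI bus at all:

    lspci -nn | grep -iE 'LSI|Broadcom|SAS'   # is a SAS3008 enumerated on the bus?
    lsmod | grep mpt3sas                      # is the driver module actually loaded?
    ls /sys/bus/pci/drivers/mpt3sas/          # which PCI devices, if any, are bound to it

If lspci shows nothing, the card either isn’t in this box or isn’t being detected at the hardware level, and no driver work will change that.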


There might be a little confusion on this: my TrueNAS server is separate from my disk array and is connected to it via DAC. I don’t know if that makes a difference in terms of visibility on the TrueNAS side of it. Also, nothing is displayed for the dmesg grep mpt3sas.

Yeah, doesn’t look like it’s there. There wouldn’t happen to be a list of compatible hardware somewhere, or a way to install the driver manually?

root@truenas[~]# sudo dmesg | grep mpt3sas
[56030.994535] mpt3sas version 43.100.00.00 loaded

Yes, some confusion.
Can you please clearly explain how you cabled this up?

I wasn’t aware you could use a DAC to connect a disk shelf like that. Typically I would expect something like an SFF-8088 or SFF-8644, for external use. I admit it’s confusing that you also mention the SAS9300-8i, which is a model with internal ports only, but that can be worked around with some nifty cabling or an SFF-8087 to SFF-8088 style adapter card.

If you have a bare minimum disk array with only a network cable (the DAC) going to it from your main TrueNAS-box, besides power, I don’t see how this would work at all.


The TrueNAS SCALE box has a dual-port SFP+ card in it and the disk array has a single-port one. The disk array currently has the LSI Broadcom SAS 9300-8i 8-port 12Gb/s SATA+SAS PCI-Express 3.0 Low Profile Host Bus Adapter in it. I was under the impression I could use a DAC to connect them; if that’s not the case, then that would explain a lot.

Correct, that’s not going to work.

I think everyone here assumed that the HBA was in the TrueNAS box and that you had something HBA-like in the disk shelf with an SFF-cable going between the two.

The disk shelf is not going to be able to share any data over the DAC like that. Not sure how to use the disk shelf because I am not sure what model you actually have.

You might be a single HBA + SFF-cable away from getting this to work. It’s also possible all you’re missing is the SFF-cable, if you already have an HBA in your TN-box as well.


OK, well, I appreciate you clearing that up for me. Next question: since the current card is all internal, is it pretty much null and void then? Should I just grab two SFF-8643 to SFF-8088 adapter cards, one for the TrueNAS box and one for the disk array?

Before I answer that I’d rather know more about the components in your systems.

Is it correct to say that you have two systems?

The one with the ASUS Maximus IX Formula MB has TrueNAS installed.
Does this system have the aforementioned HBA?

The other system is the Silverstone disk array. Does this have any OS installed that you know of or is it indeed a “dumb disk array”? Is it essentially a SAS expander backplane, a small controller board to handle how the PSU powers the drives and ideally some kind of SFF-connector/connectors on the back that internally go to the backplane for easy access?

If the above is correct, you have two realistic cabling options: you either use an adapter card that fits in a PCIe slot (but doesn’t need to be powered, AFAIK) plus the appropriate SFF cable, or you pull an appropriate “adapter” SFF cable through an empty PCIe slot and throw neatness out the window.

The kind of cable really depends on what ports your Silverstone box has; the only known part right now is the HBA card, which takes SFF-8643.

So the disk array has the ASUS Maximus in it; it is indeed a dumb disk array.
https://www.silverstonetek.com/en/product/info/server-nas/RM22-312/ The backplane is currently connected to the LSI Broadcom SAS 9300-8i 8-port 12Gb/s SATA+SAS PCI-Express 3.0 Low Profile Host Bus Adapter. Both units have an SFP+ connection.

Okay, unfortunately I am more and more confused by your posts.

If the Silverstone chassis has a full motherboard in it, it’s not a dumb disk array, it’s a whole separate computer that needs its own CPU, RAM and OS to be functional. At that point just install TrueNAS on that system and be done with it.

If the Silverstone box is the one with the ASUS MB, I know exactly nothing about your other box that you want to connect to the Silverstone one.

Please start over.
Clearly and thoroughly describe the hardware you have: whether there’s more than one system, and all the components in each chassis.
Also explain what your end goal with the system(s) is.

Maybe I will come back tomorrow and look at this again. Maybe.

Two separate systems: one is my TrueNAS box and one is my disk array. I didn’t realize a dumb disk array means no internal hardware… apologies. The disk array is meant to feed into the TrueNAS box, providing extra space, that’s all. No OS, just space. Yes, the disk array has everything a normal server has, but nothing more. I’m just looking to use it as additional disk space for my pool.

As neofusion said, please list all your hardware in the two computers: case, motherboard, RAM, CPU, drives, controllers, GPUs, OS. Maybe then someone can help you set up an operational server.