SAS Hardware Sanity Check Please

Hi all, I’m a TrueNAS Scale newbie, and also new to SAS technology, but I have 30+ years’ experience in networking and Linux. I’ve built a TrueNAS Scale server to experiment with, using recycled hardware from our datacentre (the guy who manages our SAN was very kind and helpful!). I’m not sure if this setup is optimal or even correct, so I would appreciate any comments or advice.

The primary purpose of the system is to run a Plex media server and provide storage, but it will also run a few other apps such as a qBittorrent client, Sonarr, Tautulli and Overseerr.

The chassis is an HP DL380 Gen6 with 70GB of RAM and 2x Xeon E5540 @ 2.53GHz with a 2x1GbE LACP bonded interface as its primary network connection.
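
(If it’s useful to anyone, the LACP state can be checked from a shell; bond0 here is just whatever name SCALE gives the bond:)

    cat /proc/net/bonding/bond0   # bonding mode, LACP partner details and per-slave state
    ip -d link show bond0         # detailed bond settings, including the 802.3ad mode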

As the onboard SAS controller does not support IT mode, it is disconnected from the pre-installed SAS expander card. Instead there is a Dell PERC H310 connected to the HP 6Gb SAS expander card using two internal SAS ports.

The Dell HBA is in an x4 PCIe slot. I understand the card supports x8, and I may get better performance if I move it to the x8 slot (which currently has the SAS expander card installed).

(There is only one PCIe riser installed, and it has a single x8 slot and two x4 slots. I intend to install a GPU in one of the x4 slots to provide hardware Plex transcoding at a later date.)

The SAS expander card is connected to the server’s two internal drive cages, which contain eight 1.6TB SSDs each. Its external port is connected to a twenty-five-slot HP disk shelf, fully populated with 1.6TB SSDs.

(The SSDs came out of our NetApp SAN and were due to be shredded. It took some work to figure out how to reformat them for non-NetApp use, but I have a lot of fast storage to play with!)
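
(For anyone else rescuing SAN drives: ex-NetApp disks are usually formatted with 520-byte sectors, and the standard fix is sg_format from sg3_utils, run once per drive. Double-check the device name first, because it wipes the disk:)

    sg_format --format --size=512 /dev/sdX   # sdX is a placeholder; reformats to 512-byte sectors, takes a while per disk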

In TrueNAS Scale all disks are visible, and I have created a mirrored boot pool from a pair of disks, plus Apps (for host-path app storage) and Scratch/Downloads pools, also as mirrored pairs. The main storage pool is one big 3x RAIDZ2, 11 wide (33 disks total), with 36.4TB usable capacity. Two SSDs are currently unused.
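
(For reference, the big pool’s layout is roughly equivalent to a zpool create along these lines; “tank” and the sdX names are just placeholders, not what I actually used:)

    # three 11-wide RAIDZ2 vdevs in a single pool -- illustrative only
    zpool create tank \
      raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
      raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu sdv \
      raidz2 sdw sdx sdy sdz sdaa sdab sdac sdad sdae sdaf sdag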

I gave no thought to which disks were used for these vdevs, or whether they were on the external disk shelf or the internal drive cages. Would this affect their performance? Is there any benefit to keeping the disks of a vdev on the same port of the SAS expander, or to splitting them across different ports?
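
(For mapping disks to the cages and the shelf, something like the following should show which enclosure and slot each drive hangs off; sas2ircu isn’t part of SCALE, so that last command assumes it’s been installed separately:)

    lsscsi -t -g                             # each disk with its SAS address and sg device
    ls -l /dev/disk/by-path/ | grep -v part  # by-path names encode the HBA, expander and phy for each disk
    sas2ircu 0 DISPLAY                       # enclosure and slot number for every drive behind the H310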

Basic speed tests on the big pool show 2Gb/s throughput which, to be honest, I’m happy with.

As I said, any comments, suggestions or advice are gratefully accepted. I’ve spent the last 20 years running servers on hardware/virtualisation platforms run by other people. I’m learning a lot about SAS and PCIe at the moment :slight_smile:

Definitely swap the cards! The expander is not a PCIe device and only uses the slot for power; it may even have a power connector so it can work outside of any PCIe slot.


Yeah, from what I understand HP put the expander card in that particular slot purely because it makes cable routing easier, which is pretty lame engineering imho.

I’ve actually ordered one of those simple male->female PCIe x16 expansion cables to use the empty, bare PCIe slot on the motherboard that’s intended for another riser card. If another of these machines heads for recycling soon I may see if I can recover another riser card from it.

So I moved the HBA to slot 1 in the DL380 G6; what a faff that was!

The cabling is, indeed, more convenient with the SAS expander in slot 1. I had to remove the piece of plastic that provides cable management from the SAS expander card as there was no room for it in its new slot, but I think I actually have the cabling neater than before :slight_smile:

For some reason the onboard P410i also started throwing errors and locking up the system after I’d made these changes (it would get stuck on “Initializing” at boot). I removed its cache card and battery and disabled it in the BIOS, as I’m not using it.

I’ve confirmed that the H310 HBA is using x8 lanes now. Output from lspci -vv:

		LnkCap:	Port 0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
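
(Side note in case anyone is checking their own card: LnkCap is only what the card is capable of; the width actually negotiated shows up under LnkSta, e.g. with the HBA’s bus address filled in:)

    lspci -vv -s 05:00.0 | grep -E 'LnkCap|LnkSta'   # 05:00.0 is a placeholder; use plain lspci to find the real address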

Some testing with fio shows ~820MB/s throughput on a mirrored pair of disks and ~1400-1520MB/s on the big pool I described up top.
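
(Not the exact fio job I ran, but a sketch of the sort of sequential read test that produces these kinds of numbers; the dataset path is a placeholder, and the total file size wants to be well above RAM or the ARC will flatter the result:)

    fio --name=seqread --directory=/mnt/tank/fio-test --rw=read --bs=1M \
        --size=32G --numjobs=4 --ioengine=libaio --iodepth=8 \
        --runtime=60 --time_based --group_reporting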

As PCIe v2.0 supposedly tops out at 4GB/s in each direction for an x8 link, I suspect there is a bottleneck somewhere else. It’s probably the ageing HBA, but as this chassis is limited to PCIe v2.0 I’m not sure how I can improve on that.

Edit: the above is, of course, the limit of 6Gb/s SAS, so I’m running at or around wire speed for the big pool. Result!
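
(Rough raw numbers I’m working from, before protocol overhead, assuming the usual x4 wide SAS ports and 8b/10b encoding on both PCIe 2.0 and SAS-2:)

    # PCIe 2.0: 5GT/s with 8b/10b encoding ~= 500MB/s per lane, per direction
    echo $(( 500 * 8 ))   # x8 slot: 4000 MB/s
    echo $(( 500 * 4 ))   # x4 slot: 2000 MB/s
    # SAS-2: 6Gb/s with 8b/10b ~= 600MB/s per lane
    echo $(( 600 * 4 ))   # one x4 wide port (expander to the external shelf): 2400 MB/s
    echo $(( 600 * 8 ))   # two x4 ports (H310 to the expander): 4800 MB/s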

Also: I tried installing the GPU in an x4 slot, but the slots have closed ends and the card is x16, so it wouldn’t fit. I tried installing it directly in the empty PCIe slot 2 on the mobo, which worked, but it got in the way of installing the main card cage, so that wasn’t workable. Hopefully when I get that cable I’ll be able to fit it in somehow.