Not getting expected performance from SSDs

Hi,

I’m looking for some help and suggestions. I’ve looked all over the forums and found some topics that helped, but I am still unable to achieve the performance I would expect from my home lab NAS.

I’ll start off with some of the build details

Motherboard: MSI B550-A PRO
CPU: AMD Ryzen 5 550 6 Core AM4
RAM: 16 GB DDR4
RAID Card: LSI 9300-16i

  • BIOS: 18.00.00.00
  • Firmware: 16.00.14.00
  • Temperature: 58 °C
  • PCIe link: x8 (in a PCIe 3.0 x16 slot)
  • 6-pin auxiliary power is connected

700 W power supply

16x Samsung MZILS3T8HMLH-000G3

  • 3.8 TB, 12 Gbps SAS
  • Sequential read: 1,400 MB/s
  • Sequential write: 930 MB/s

The above SSDs are connected to the RAID controller with four of the following breakout cables:
SFF-8643 internal Mini SAS HD to 4x SFF-8482 (29-pin) connectors with 15-pin power

The OS is on a separate Samsung 500 GB SSD running TrueNAS 13.0-U6.1.

All 16 SSDs are in one pool running RAIDZ3.
I’ve set the ZFS secondary cache (the secondarycache property) to metadata.
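
For reference, the CLI equivalent of that setting would be something along these lines - the pool name "tank" is just a placeholder:

zfs set secondarycache=metadata tank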

Running the command below gives:
dd if=/dev/zero of=/mnt/data/media/data.data bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 58.569344 secs (1833282983 bytes/sec)
which works out to roughly 14.7 Gbps (about 1.83 GB/s)
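
As a sanity check on that number: zeros from /dev/zero compress to almost nothing under the default LZ4 compression, so dd can flatter the write figure. If fio is available, a run along the lines below - the path, size, and queue depth are just placeholders - pushes less-compressible data instead:

fio --name=seqwrite --directory=/mnt/data/media --rw=write --bs=1M \
    --size=20G --numjobs=1 --ioengine=posixaio --iodepth=16 --group_reporting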

However, when I copy a file via SMB to the same folder, Windows shows the speed as 1 GB/s with minimal fluctuation.

Originally I was CPU-bottlenecked, as I was running an Intel CPU/board from 2010. After upgrading to the AMD platform the speed doubled, and when performing the above-mentioned transfers the CPU sits at 50% utilization.

I do have these auxiliary parameters set for the SMB service, which got me to the speeds above (a quick way to verify they are applied is shown after the list):

  • server multi channel support = yes
  • aio max threads = 100
  • aio read size = 1
  • aio write size = 1
  • allocation roundup size = 1048576
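
To confirm the parameters are actually being picked up by Samba, I believe the effective configuration can be dumped with testparm (the grep is just for convenience):

testparm -s 2>/dev/null | grep -Ei 'multi channel|aio|allocation roundup'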

I’m new to the TrueNAS / home-built NAS space, so I’m not sure whether this performance is good for the hardware. For comparison, on a ReadyNAS 416 running 6x 8 TB HDDs, the copy-to-self performance is 14GB/s via SMB.

I would have thought that these SSDs would at least saturate the x8 PCIe link?

Am I missing something simple?

Couple of thoughts:
Is that LSI HBA / RAID card still running in RAID mode, or did you cross-flash it to IT mode? The latter is preferable, as HBA mode makes the SSDs directly addressable by TrueNAS instead of abstracting them away. I presume you simply wrote RAID even though you’re using it as an HBA?
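
If you want to double-check from the shell - assuming the Broadcom sas3flash utility is available on the box, as it handles SAS3008-family cards like the 9300-16i - something like this should report IT rather than IR in the firmware lines:

sas3flash -list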

A 16-wide Z3 VDEV may be a minor part of the problem - that is a lot of parity to spread across a lot of drives, even if they are all enterprise-grade SSDs. The usual recommended maximum is around 12 drives per VDEV.
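
Purely as an illustration - the pool name and da0..da15 device names are placeholders, and this would mean destroying and re-creating the pool - a narrower two-VDEV layout would look something like:

zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15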

I am also unsure why you’d have a secondary cache with an all-flash pool. Is the SSD you’re using for metadata caching significantly “faster” for reads than the general pool drives? If not, it won’t help. Furthermore, an L2ARC is not advisable for RAM configurations below 64 GB, since it eats into ARC memory. I’d ditch the L2ARC.
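
If you do drop it, the cache device can be removed from a live pool; roughly (pool and device names are placeholders):

zpool status tank        # the L2ARC device is listed under "cache"
zpool remove tank ada1   # substitute the actual cache device name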

But your biggest issue is the “1x Realtek® 8111H Gigabit LAN controller” on the motherboard. It is a gigabit-class Realtek part, so the most you’ll ever get out of it is 1 gigabit/s, not 1 gigabyte/s as implied in your posting (capitalization matters: lower-case b for bits, upper-case B for bytes). My suggestion is a genuine Intel 10GbE or 25GbE SFP+ PCIe card to bypass the network bottleneck imposed by the Realtek chip.
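
Before spending anything, it may also be worth confirming the wire really is the ceiling. Assuming iperf3 is available on both ends (the address below is a placeholder for the NAS IP):

iperf3 -s                # on the NAS
iperf3 -c 192.168.1.10   # on the workstation

If that tops out around 940 Mbit/s, the Realtek port is already delivering everything a gigabit link can.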

With a 10GbE card, proper network, etc. you should have no trouble getting into 1GB/s territory, depending on use case, workflow and so on. My single VDEV/sVDEV “fusion” pool consisting of HDDs and SSDs regularly runs at several hundred MB/s transfer speeds. The 25GbE card will be more expensive, ditto all the network gear associated with getting it connected to your workstation.

10GbE is a pretty sweet spot right now re: price vs. performance. I went with optical transceivers and DACs for heat and power reasons. That will require some more reading, i.e. what types of transceivers and optical fiber to buy. Good luck.


How are you cooling this card? It needs proper airflow over the heatsink.

Edit: I just saw the temperature is 58 °C. Is that at idle or under full load?

Those HBA cards / chips sure run hot. The LSI 2016 soldered onto my motherboard expects a lot of airflow.

But the issue here is going to be the network card. No point building an all-flash pool with nothing more than a 1GbE network connection unless all the activity is local and you only need the network connection to steer it from afar. That’s why the local performance is OK but network performance is sub-par.

It’s like putting a modern Porsche engine in a 1950s Porsche diesel tractor and wondering why the performance is sub-optimal. All aspects of the system (including the network it’s attached to) need to be designed around the desired performance.

Based on postings here, a Realtek Ethernet controller of any kind appears to be asking for trouble. A great deal of sweat, hair, and tears has allegedly been sacrificed to make them work at all. The main appeal to OEMs seems to be low cost, sort of like house flippers putting a “builder grade” dishwasher into a kitchen, i.e. ticking a box rather than putting in something that will serve the homeowners well.

The same issue applies here. A 1GbE Realtek NIC may work well enough for gaming and the like, but for heavy-duty applications like a NAS I’d put in something with a better track record, like Intel NICs, that “just work”. See the resources section for a primer from @jgreco on 10GbE Ethernet.


So it is running in HBA mode (IT), and under load the temperature doesn’t seem to move from 58 °C.

I agree that the 1 Gbps NIC on the board is a huge bottleneck, but right now it doesn’t make sense for me to upgrade to 10 Gbps.

The transfer rates I listed are correct (Gbps vs. GB/s).
From my workstation, duplicating a 50 GB file on the SMB share reports as transferring at 1 gigabyte/s; since it is a copy to itself, it doesn’t go over the network, so the network is not the bottleneck.

Right now, yes, it can saturate a 1 Gbps NIC, but I would expect internal transfers to be lightning fast.

There is no way you are getting more than about 120 MB/s over a single 1GbE connection, especially via SMB.
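
Rough arithmetic behind that figure:

1 Gbit/s ÷ 8 bits per byte = 125 MB/s raw line rate
125 MB/s − Ethernet/TCP/SMB overhead ≈ 110-118 MB/s in practice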

When my transfer speeds are that low or lower with the 10GbE NIC plugged into my laptop, it’s a tell that the macOS network configuration is once again not honoring the prioritization settings, ignoring the 10GbE connection, and dawdling along over wireless. Then I turn off wireless and all is well again.
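
For what it’s worth, the service priority on macOS can be inspected and re-ordered from the terminal as well; the service names below are placeholders, and -ordernetworkservices expects every service listed in the desired order:

networksetup -listnetworkserviceorder
networksetup -ordernetworkservices "10G Ethernet" "Wi-Fi"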

Anyhow, if you really want 1 GB/s transfers between your NAS and your workstation, you need a 10GbE or faster network connection.