Added SSDs - everything is slower

I have 2x ML310e Generation 8 servers of very similar configuration.
Xeon(R) CPU E3-1220 V2, 32GB RAM, 2x 22TB WD Red NAS drives, 10GbE.

Running the Blackmagic Disk Speed Test, I was seeing 300-400MB/s writes and ~700MB/s reads on an SMB share.

I added 2x old Samsung 850 Pro SSDs in a new pool and created a new share. I'm only seeing ~200MB/s writes and ~800MB/s reads. Watching "iostat -x", I see the SSDs are maxed at 100% utilization. I thought maybe I had a bad SSD or something, so I pulled one of the drives in the SSD mirror.
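For reference, that "maxed at 100%" reading can be pulled out of "iostat -x" programmatically. A minimal sketch (the sample output and threshold are illustrative; real sysstat column counts vary between versions, but the device name is always the first field and %util the last):

```python
# Parse `iostat -x` extended-device lines and flag devices whose
# %util (last column) is at or near saturation.
SAMPLE = """\
Device            r/s     w/s     rkB/s     wkB/s   await  %util
sda              0.50  120.00      25.0   48000.0    8.10  100.00
sdb              0.40    2.00      20.0     900.0    0.90    3.50
sdc              0.60  118.00      30.0   47500.0    7.90   99.80
"""

def saturated_devices(text, threshold=95.0):
    devices = []
    for line in text.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) < 2:
            continue
        name, util = fields[0], float(fields[-1])
        if util >= threshold:
            devices.append(name)
    return devices

print(saturated_devices(SAMPLE))  # → ['sda', 'sdc']
```

Worth noting that %util is a blunt instrument for SSDs, which serve many requests in parallel, so it is more a symptom than a diagnosis here.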

I added 2x new WD Red NAS 4TB SSDs to the other ML310e and I'm seeing basically the same performance: ~200MB/s writes and 100% utilization.

I just noticed some of my drives are now at 3.0 Gb/s. Pretty sure they were all 6.0Gb/s before. Not sure if it’s a result of adding drives or some other issue.

truenas1

sda WDC WD221KFGX-68B9KN0 SATA 3.5, 6.0 Gb/s (current: 6.0 Gb/s)
sdb WD Red SA500 2.5 4TB SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
sdc WDC WD221KFGX-68B9KN0 SATA 3.5, 6.0 Gb/s (current: 3.0 Gb/s)
sdd WD Red SA500 2.5 4TB SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)

truenas2

sda WDC WD221KFGX-68B9KN0 SATA 3.5, 6.0 Gb/s (current: 3.0 Gb/s)
sdb WDC WD221KFGX-68B9KN0 SATA 3.5, 6.0 Gb/s (current: 6.0 Gb/s)
sdc Samsung SSD 850 PRO 512GB SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)

Re-running the BlackMagic speed tests on the original 22TB drive mirrors, they are also down to ~200MB/s writes.

Any idea what might be going on here?

The hardware is old enough to possibly have a mix of SATA 2 and 3 ports.
These big new HDDs are obviously working hard to provide as much throughput as possible while the older 850 SSDs may not be capable of saturating a 6 Gbps link.
And you might be hitting a limit with the link between CPU and chipset.
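To put numbers on that: SATA uses 8b/10b encoding, so a 3 Gb/s link tops out around 300 MB/s of payload before protocol overhead, which lines up with the ~200MB/s observed. A quick sketch of the arithmetic (the function name is just for illustration):

```python
# SATA link ceiling: line rate (Gb/s) * 8/10 for 8b/10b encoding,
# then / 8 bits per byte, gives usable MB/s before protocol overhead.
def sata_link_mb_s(gbit_line_rate):
    return gbit_line_rate * 1000 * (8 / 10) / 8

print(sata_link_mb_s(6.0))  # → 600.0  (SATA 3 port)
print(sata_link_mb_s(3.0))  # → 300.0  (SATA 2 / downshifted port)
```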


How are your drives attached to the MB? MB SATA ports or via a HBA?

And what O/S are you running the BlackMagic test under?

Thank you for taking the time to reply.

They are supposed to all be SATA 3 / 6Gb/s ports.
https://www.hpe.com/psnow/doc/c04123183.pdf?jumpid=in_lit-psnow-getpdf

It is curious, though, that with 4 hot-swap drives installed, 2 of them negotiate 6Gb and 2 negotiate 3Gb, and which specific hot-swap drive bays are at 6Gb -vs- 3Gb differs between the 2 "identical" systems.

TrueNAS2
sda WDC WD221KFGX-68B9KN0 2GKR4N3S SATA 3.5, 6.0 Gb/s (current: 3.0 Gb/s)
sdb WDC WD221KFGX-68B9KN0 2GK36PUS SATA 3.5, 6.0 Gb/s (current: 6.0 Gb/s)
sdc Samsung SSD 850 PRO 512GB S1SXNSAFB02588H SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
sde Samsung SSD 850 PRO 512GB S1SXNSAFB02588H SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
TrueNAS1
sda WDC WD221KFGX-68B9KN0 2TG03KLP SATA 3.5, 6.0 Gb/s (current: 6.0 Gb/s)
sdb WD Red SA500 2.5 4TB 2423054A0A03 SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
sdc WDC WD221KFGX-68B9KN0 2GG0A16L SATA 3.5, 6.0 Gb/s (current: 3.0 Gb/s)
sdd WD Red SA500 2.5 4TB 2423054A0X09 SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
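A sketch of checking those negotiated speeds programmatically, parsing lines in the same shape as the listing above (rated speed, then "(current: X Gb/s)"); a real script would feed this from "smartctl -i /dev/sdX" output per drive rather than a pasted listing:

```python
import re

# Flag drives that negotiated below their rated SATA link speed.
LISTING = """\
sda WDC WD221KFGX-68B9KN0 2TG03KLP SATA 3.5, 6.0 Gb/s (current: 6.0 Gb/s)
sdb WD Red SA500 2.5 4TB 2423054A0A03 SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
sdc WDC WD221KFGX-68B9KN0 2GG0A16L SATA 3.5, 6.0 Gb/s (current: 3.0 Gb/s)
sdd WD Red SA500 2.5 4TB 2423054A0X09 SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
"""

LINK_RE = re.compile(r"([\d.]+) Gb/s \(current: ([\d.]+) Gb/s\)")

def degraded_links(listing):
    out = []
    for line in listing.splitlines():
        m = LINK_RE.search(line)
        if m and float(m.group(2)) < float(m.group(1)):
            out.append(line.split()[0])   # device name is the first field
    return out

print(degraded_links(LISTING))  # → ['sdc', 'sdd']
```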

I was thinking maybe an IRQ conflict, as there are only a few IRQ’s available and they are shared across multiple devices.

Hello Protopia,

Thank you for responding.

I’m using the built-in SATA controller on the motherboard, connected to a 4-bay LFF drive cage.

https://support.hpe.com/hpesc/public/docDisplay?docId=c03801789&docLocale=en_US

I would not be opposed to purchasing a different HBA if there were a way to cable it to the drive bay.

Running BlackMagic on a 2022 Mac Studio M1 Ultra. 10Gb.

I concur: the onboard SATA interfaces are likely what's holding you back.

It says it's mini-SAS. Can you confirm the backplane has an SFF-8087 connector?


If so, you can purchase a better HBA like this one:

But you may need a longer cable like this to reach the PCI-E slot, I’m not sure.
SAS cable SFF-8087 to SFF-8087 27-inch 0953652-02 | eBay

Other sellers will have these items for less, but the gentleman over at TheArtOfServer won't sell you fake Chinese knockoffs that claim to be OEM.


Hello NickF1227,

My initial thought was that maybe the read/write speed reported for the HDD pool dropped because TrueNAS now allocates RAM differently with 2 different pools?

When I get a chance, I'll pull the 2x Samsung 850 Pro SSDs out and see if it makes any difference.

I ran the Blackmagic disk speed tests on the HDDs not terribly long ago, but there was a little less data on the drives then.

I suppose I could have crossed some usage threshold where the HDDs just aren't able to read/write as fast as they did earlier.

For the life of me, though, I just can't come up with an explanation for why these SSDs are benchmarking so slowly. The 850 Pros were zippy a few years ago in a direct-attached RAID.

I’ll check out that SAS HBA. I’m not opposed to putting some money into these old boxes, but I’m also aware there may come a time when I just need to let them go.

I bought the ML310e’s new in 2013…

With SSDs in the mix, a 9300 would be a better fit than a 9200.
Anyway, upgrading the NAS itself is probably the best idea.

Hello @NickF1227,

I went ahead and ordered the LSI 9211-8i and the 27-inch mini-SAS cable from The Art of Server on eBay.

Best regards,

The "Genuine LSI 6Gbps SAS HBA LSI 9211-8i (=9201-8i) P20 IT Mode ZFS FreeNAS unRAID" dropped right in. I was able to simply move the mini-SAS cable from the MB to the PCIe card.

Everything just worked, and it appears my NAS performance is back to approximately what it was before I added the SSDs (apparently I didn't keep the best notes of every test in every configuration when I first set these up).

I still don't quite understand why simply installing the SSDs appears to have overwhelmed the onboard controller. Maybe it reconfigures itself based on the number of drives physically attached?

I'm going to order another identical SAS HBA for my other ML310e V2, unless anyone has other suggestions?
Currently 2x HDDs and 2x SSDs are mirrored in each ML310e V2, but I'll likely re-configure everything at some point and perhaps upgrade to something with more drive bays in the future. Maybe 4x SSDs in one and 4x HDDs in the other, etc.

Thank you very much for the help so far.


Hmm, not sure how this works, but it says here SATA ports 0/1 are 6Gb and ports 2/3 are 3Gb, yet the mainboard has only one mini-SAS and two SATA connectors?


The drives aren’t connected to the onboard SATA ports.

The MB has a built-in HPE Dynamic Smart Array B120i Controller (PCIe 2.0 x4) with a single mini-SAS cable that plugs into the 4x LFF drive bay.

I installed the LSI 9211-8i SAS HBA (PCIe 2.0 x8) and just moved the mini-SAS cable from the MB to the PCIe card.

My thinking is maybe that embedded HPE Dynamic Smart Array B120i Controller behavior is different when there are 2 SATA drives connected -vs- 4 SATA drives connected?

As I recall, when I first set up the NAS1 with 10Gbit and 2x 22TB drives mirrored (and very little data on the NAS), I was seeing ~350MB/s+ writes with BlackMagic disk speed test running on a 2021 Mac Studio M1 Ultra.

When I installed the Samsung 850 Pros in NAS2, I noticed half of the drives are now negotiating 3.0Gb/s instead of 6.0Gb/s, and network writes to the SSDs were ~216MB/s. Network writes to the HDDs were also down to ~227MB/s (from ~350MB/s), but the HDDs are closer to half full now -vs- when I benchmarked the NAS empty.

Today, when I run the Blackmagic benchmark against the empty 850 Pro SSDs still on the B120i (NAS2), I'm seeing around 81MB/s writes. There's something very wrong there.

With the new (to me) LSI 9211-8i SAS HBA on NAS1, I’m seeing ~253MB/s writes on the HDD mirror (46% full) and ~370MB/s writes on the WD Red SA500 NAS SATA SSD - 4TB mirror (NAS1).

I was expecting closer to 500MB/s writes on the SSDs. I wonder if PCIe 2.0 on the LSI HBA is a factor?
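For what it's worth, the HBA's PCIe 2.0 link is unlikely to be the limit: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, i.e. roughly 500 MB/s usable per lane. A rough sketch of the ceilings (the helper name is just for illustration, and these are theoretical maxima):

```python
# PCIe 2.0 ceiling: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s per lane.
def pcie2_mb_s(lanes):
    return lanes * 5000 * (8 / 10) / 8

print(pcie2_mb_s(8))  # → 4000.0  (LSI 9211-8i in an x8 slot)
print(pcie2_mb_s(4))  # → 2000.0  (the B120i's x4 link)
```

Even the B120i's x4 link (~2GB/s theoretical) is far above the ~370MB/s observed, so the bottleneck is more likely the controller itself or the per-port SATA links.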

Use an LSI 3008 controller, or at least a 2308, with SSDs. The 2008 controller is fine for HDDs but too old for SSDs.

In the case of the HP Gen8 MicroServer, that would probably be a yes.

That controller is not a real RAID controller; it relies on special drivers installed in the OS to offer RAID features. The recommendation is to disable this for ZFS (or, better, use an HBA). In the case of the Gen8 MicroServer this means using AHCI mode, where each of the 4 drives on the mini-SAS connector is presented as a normal SATA drive. Also, the Intel C204 chipset there only provides two 6Gb/s and four 3Gb/s ports, so I guess not all of them will run at 6Gb/s…


I ordered a "Genuine LSI 9207-8i 6Gbps SAS PCIe 3.0 HBA P20 IT Mode ZFS UNRAID", which is "based on the LSI SAS2308 SAS controller chipset", for the other ML310e Gen8.

Best regards,

I figured out that if I disable “AVB/EAV mode”, I get much better numbers on the 2x 4TB SSD mirror on the new (to me) LSI 9211-8i SAS HBA (NAS1).


Hmmm.

There might be an Apple software bug at play.


The controller (B120i) in these machines is the same as in the Gen 8 MicroServer (which I have a lot of experience with). I can confirm that ports 1 & 2 offer 6Gb/s, while 3 & 4 only do 3Gb/s.

As per other recommendations: get a proper HBA and hook it up directly to the drive cage. If you want, you can get a SATA spider cable, hook it up to the onboard B120i controller, and still use the first two ports at full speed (again, what most MicroServer owners do: 4x 3.5 HDDs in the drive cage on an HBA, then a couple of SSDs for caching on the first two ports of the B120i).

Hmm…What is your network topology? What switches/network cards are involved here?