Data transfer speed over 10Gb NIC: NVMe RAID 0 vs SATA SSD RAID 0

Hello everyone, wishing you all well. This is a great community.

I have a question about data transfer speed to an NVMe RAID 0 pool versus a SATA SSD RAID 0 pool, both running the latest version of TrueNAS over a 10Gb NIC. Please allow me to share my hardware setup for information.

HP Z8 G4, dual Xeon CPUs, 128 GB RAM, 10Gb NIC. NVMe RAID 0 on a HighPoint SSD7540 (PCIe Gen 3) with 8 x 4 TB NVMe drives. LSI 9399-16i set to RAID 0 (PCIe Gen 3) with 12 x 4 TB SATA SSDs.

When copying data from Windows 11 to the NVMe RAID 0 pool, I achieve nearly the full 10Gb NIC bandwidth, a constant 1 GB/s give or take. About 110 GB of my 128 GB RAM is used as cache, and it is the highest data rate I have seen from any system on this hardware. But when copying the same data to the SATA SSD RAID 0 pool, I get between 300 MB/s and 1 GB/s actual speed, mostly around 500 MB/s.

Can anyone point out why the SATA RAID 0 of 12 SATA SSDs, on a 12Gb/s HBA in a PCIe Gen 3 slot, runs so much slower? Any advice is appreciated. Thank you all for taking the time to look over my first post. Happy Friday!
FYI: I am also using an HP Z840 (dual Xeon, 10Gb NIC) with 7 x 14 TB HDDs set to RAID 0 to back up my SSD system. I am here seeking the fastest data transfer rate.


Purely a guess, but it could be the caching on the SSDs vs the NVMe drives. Just to confirm: what does iperf show as the link speed between the systems?
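For example, something like this (iperf3 would need to be installed on both ends, and the IP here is just a placeholder):

# on the TrueNAS box, start the server side
iperf3 -s

# on the Windows 11 box, run the client against the TrueNAS IP for 30 seconds
iperf3 -c 192.168.1.10 -t 30

If that reports much less than the roughly 9.4 Gbit/s a 10GbE link can deliver, you have a network problem rather than a pool problem.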

Also, damn, that is a lot of data with no redundancy :open_mouth: If you don’t mind me asking, what is the system for? First time I’m seeing TrueNAS used for raw speed.

You could also use @NickF1227's utility to test the raw performance of the pools, which should at least help narrow down the bottleneck.
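Or, as a rough check with plain fio, if it is available in your TrueNAS shell (a sketch only; the dataset path is a placeholder, and the size should exceed your RAM so the ARC doesn't mask the drives):

# sequential 1M writes straight to the pool, bypassing the network entirely
fio --name=poolwrite --directory=/mnt/tank/test --rw=write --bs=1M \
  --size=256G --ioengine=posixaio --iodepth=16 --end_fsync=1

Run it once against each pool; if the SATA pool is slow locally too, the network and SMB are off the hook.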

The SSD7540 is a RAID controller and therefore unsuitable for ZFS. The HighPoint Rocket 1508 is the HBA version, using only a PLX switch, which is safe for ZFS.

I have not found any details on the LSI 9399-16i. But all LSI HBAs need active cooling. They will overheat otherwise and throttle down.

Also, what kind of SSDs are you using? Cheap ones employing pseudo-SLC cache are usually bad at sustained writes.


Thanks, Farout. Great points; I appreciate you.

The NVMe RAID card is the SSD7540, which is ZFS compatible; I am getting the best transfer speed from it.
The SATA HBA is an LSI 9300-16i; the 9399 was a typo on my part, apologies. Thank you for the two points: active cooling for the card, and the cheap SATA SSDs I am using.
An additional piece of information: when I was running Windows 11 on the same hardware, the NVMe RAID 0 was performing, well, slower than HDD. That is the reason I went to TrueNAS, where the NVMe RAID 0 performed exactly as I wanted. Keep in mind, this HP Z8 G4 has more fans than a normal PC, and the CPU/memory area has its own compartment, separate from the expansion slots.

Any RAID card is NOT suitable. Not because it doesn't give you speed, but because ZFS needs direct disk access. It can run for a year without problems, and then suddenly you wake up to an offline pool with I/O errors and no possibility of recovering any data.

It has to be an HBA in IT mode, or, with NVMe drives, a PLX switch or a simple bifurcation card.

The 9300-16i (I have one) gets extremely hot and needs a fan strapped to it if it is not deployed in a rack server with screaming fans. A well-ventilated case is often not enough.

Also check whether the 9300 is in IT mode. Either of these will tell you:
sas2flash -listall
sas3flash -listall


The LSI HBA documentation states the limits below. Workstations don't normally have the same cooling as rack servers; see the minimum airflow requirement.

Thermal and Atmospheric Limits
The atmospheric limits for the LSI 12Gb/s SAS HBA are as follows:
* Temperature range: 0 °C to 55 °C (32 °F to 131 °F) (dry bulb)
* Relative humidity range: 5% to 90% noncondensing
* Maximum dew point temperature: 32 °C (89.6 °F)
* Minimum airflow: 200 linear feet per minute at 55 °C inlet temperature
The following limits define the storage and transit environment for the LSI 12Gb/s SAS HBA:
* Temperature range: –45 °C to +105 °C (–49 °F to +221 °F) (dry bulb)
* Relative humidity range: 5% to 90% noncondensing

ZFS terminology is different because what ZFS does under the hood is different. While ZFS supports striping across individual storage devices (NVMe or SATA SSDs), it is not the same as RAID-0. Thus, ZFS uses the term "Stripe" when describing such a vDev device, or group of devices.

In ZFS striping, each group of storage blocks is written individually to a single storage device. The device selected for the next write is not guaranteed to be the next one in order: ZFS selects the next Stripe device based on which has the most free space, to balance space across all vDevs.

Another reason for irregular writing of Stripe vDevs is that ZFS considers Meta Data, both standard, (like directory entries or extended attributes), and critical, (like free block lists and top of the Merkle Tree), as more important than regular data blocks. Thus, ZFS by default keeps 2 copies of standard Meta Data and at least 3 copies of critical Meta Data.

These extra copies of Meta Data are by default written to different vDevs, (if possible), than other copies. Thus, for standard Meta Data, they would be written to 2 different Stripe vDevs, in this case. And for critical Meta Data, 3 different Stripe vDevs.

So, ZFS Striping is NOT RAID-0, nor is it similar to the way RAID-Zx striping works.
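As a small illustration (device names are examples only, and every disk added this way is a single point of failure for the whole pool):

# a pool of single-disk vDevs -- what TrueNAS calls a Stripe, not true RAID-0
zpool create tank sda sdb sdc sdd

# watch ZFS balance writes across the vDevs by free space
zpool iostat -v tank 5

# the Meta Data redundancy described above is controlled by pool properties
zfs get copies,redundant_metadata tank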

Perhaps it would be nice to have a RAID-Z0, without any parity, to accomplish something very similar to RAID-0. However, without the ability to add parity to a RAID-Z0 vDev, such a vDev type would have limited appeal. (And of course, if you could not expand it like normal RAID-Z1/2/3, that too would limit the appeal.)


Thank you, Farout, you are a true expert, giving me the advice I am seeking.

Not arguing, just adding information: the LSI 9300-16i is in IT mode, and both the HighPoint and LSI cards are working in HBA mode as far as I know. I will run the commands to verify and update.

Thanks again, Farout, for your expert advice.

I'm not really an expert. If I post something wrong, I hope the true experts will correct me.
But HBA mode or JBOD mode is not what you need. You need all RAID functions completely removed from the firmware. This is what the IT firmware is for.

Certain RAID cards can be crossflashed to IT mode. But I don't know if such firmware exists for the HighPoint RAID adapter you chose.

Please post a detailed hardware listing for your TrueNAS system along with how you have the pools set up. Your first post is not clear on which systems you are transferring data to or from, or what hardware is in each. You list two systems:
HP Z8 G4, dual Xeon CPUs, 128 GB RAM, 10Gb NIC. NVMe RAID 0 on a HighPoint SSD7540 (PCIe Gen 3) with 8 x 4 TB NVMe drives. LSI 9399-16i set to RAID 0 (PCIe Gen 3) with 12 x 4 TB SATA SSDs.
&
FYI: I am also using an HP Z840 (dual Xeon, 10Gb NIC) with 7 x 14 TB HDDs set to RAID 0 to back up my SSD system. I am here seeking the fastest data transfer rate.

The HighPoint SSD7540 is not fit for use with TrueNAS (ZFS). I browsed the website, and it has drivers to install on Linux. TrueNAS is more of an appliance model where the drivers should already be built in, and I don't think it is distributed with those drivers. If you install them using Developer mode, that puts TrueNAS in the unsupported category, and it will break upon updating TrueNAS versions.

https://www.highpoint-tech.com/ssd7000-gen4-products-documentation

Thank you, SmallBarky. I appreciate you.

The Z8 G4 has both the NVMe Stripe pool and the SATA SSD Stripe pool.

The Z840 is a Windows 11 box with 7 x 14 TB HDDs used as backup; at the moment, it is the storage.

The second Z840 has 7 x 16 TB HDDs and is used to back up the other Z840. All three systems have dual Xeon CPUs and 120 GB ECC memory, and all ran Windows 11 at first.

Since I have the extra hardware at the moment and was not getting fast enough data transfer between the three workstations, I started trying different OSes on the all-SSD hardware, the HP Z8 G4.

Running Windows 11, the SSD7540 gave transfer rates slower than the all-HDD Z840, where an actual speed of 350-550 MB/s is dependable. The all-SSD system, which is supposed to be the main storage unit, was running too slow. I ran Debian, and the transfer speed was nothing to be surprised about; I tried other media OSes with similar results.

There is much talk about TrueNAS, so I tried it. Surprisingly, the NVMe Stripe used the full 10Gb NIC bandwidth, with main memory used as cache at all times. Sadly, the original HighPoint SATA RAID card is not supported by TrueNAS; the 12 SATA SSDs did not even show up. I waited a few days, received the LSI 9300-16i, plugged it in, TrueNAS recognized it, and all 12 SATA SSDs showed up. I created a second pool as a Stripe and started copying data from the Windows 11 box to the SATA SSD pool. The transfer rate fluctuated between 300 MB/s and 1.0 GB/s, although most of the time it stayed at 300-400 MB/s. That is when I posted. The performance of the SATA SSD Stripe is slower than the 7 x 14 TB HDD RAID 0 configured via the built-in LSI controller on the mainboard.

I wonder if I should install TrueNAS fresh, since I installed the LSI 9300-16i after I got the system going. Any advice is appreciated. Thank you all, and have a wonderful weekend.

Thank you very much, Arwen, I appreciate your expert knowledge and sharing on ZFS. I wonder if that is why the transfer rate dropped to 300-400 MB/s, which is about the speed of a single SATA SSD.

The HP dual-CPU workstations have kept their appeal over the years: more standard PCIe slots and 7 built-in bays for 3.5" HDDs. I use the onboard LSI RAID controller, where RAID 0 writes data to all drives at the same time to gain read/write speed, limited only by bus speed or NIC speed. With 10Gb NICs so inexpensive, a RAID 0 setup for speed made sense to me: I can use the second box to back up the first box entirely. Both are RAID 0, but mirrored manually, if you will.

When I started using NVMe SSD RAID, hoping for a better sustainable transfer rate, I got nothing but disappointment, under Windows 10 and then 11. Windows handles HDD caching nicely, but not NVMe or SATA SSDs. The expandability of a TrueNAS storage pool is a huge plus: adding drives to an existing pool is easy without redoing the entire pool. In my case the HDDs hold copies of the same data anyway, but saving the time of copying everything again after expanding a pool is very nice.

Anyway, all I am looking for is a sustainable data transfer rate to the SATA SSD Stripe pool. If anyone thinks I should reinstall TrueNAS fresh, please let me know. Thanks again, Arwen.

I doubt it would help, but if you back up your config, a clean install and re-import should take about 5 minutes. It is pretty painless.
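Roughly, from memory, so double check against the documentation for your version (menu names may differ slightly):

1. Download the config file: System Settings > General > Manage Configuration > Download File.
2. Reinstall TrueNAS from the installer, then upload the saved config the same way.
3. Or skip the config and just re-import the pools, either from the UI (Storage > Import Pool) or from the shell:

# list pools available for import, then import by name
zpool import
zpool import tank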

As I wrote above, sustained high write speeds to SSDs and NVMe drives often depend on the type of device used.

Consumer devices show OK speeds at the beginning, but when the DRAM cache or pseudo-SLC cache is full, speeds drop dramatically to the native NAND write speed, which can be as slow as 300-500 MB/s.

Also, consumer devices often cannot sustain the same speed once the drive fills up above a certain threshold.

If your goal is maximum write speed, you have to get SSDs that can sustain it.
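One way to actually see the cache cliff is a long sustained write with a bandwidth log (a sketch only; the path and size are placeholders, and the size should be well past any expected pSLC cache):

# write 400G sequentially and log average bandwidth once per second
fio --name=sustained --directory=/mnt/sata_pool/test --rw=write --bs=1M \
  --size=400G --ioengine=posixaio --iodepth=16 \
  --write_bw_log=sustained --log_avg_msec=1000

Plot the resulting sustained_bw.*.log file; a consumer drive typically starts near its rated speed, then drops to native NAND speed once the cache fills.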

Thank you, Farout. I agree, and what changed my mind was that, on the Windows 11 box, the NVMe RAID 0 on the HighPoint SSD7540 was performing exactly as you said: very disappointing. Same hardware, but once I put TrueNAS on it, the same NVMe Stripe was maxing out the 10Gb NIC at a constant 1 GB/s give or take, about one fifth the time compared to copying from the HDD RAID 0 under Windows 11 over the same hardware. Here is a great read about SLC.

Since my posts can't include links yet: Sabrent has a great write-up about SLC and pSLC.

Good morning, Arwen, or evening. Hope you had a great weekend.

Attached is a screen capture of a 100 GB transfer over the 10Gb NIC to the SATA SSD Stripe; I am getting 1 GB/s actual speed. Any larger data size causes the slowdown. Just FYI.

I don’t see the screen capture. However, I am not an expert in performance analysis. So I probably could not do much with it anyway.