Throughput benchmark check (36 disks, RAIDZ1, 3 vdevs, 9400-16i)

Hello, everyone.

Extreme TrueNAS Scale noob here.

Glad to be a part of this community.

Had a quick question.

Currently running

EPYC 7302P
128GB (8x 16GB) DDR4-2667
36x 10TB HGST Ultrastar drives in RAIDZ1 (width 12, 3 vdevs), just for performance testing, inside an SC847 with BPN-SAS3-846EL1 (front) and BPN-SAS3-826EL1 (rear) backplanes.

The HBA is a Lenovo 430-16i (running 9400-16i firmware, tri-mode) in a PCIe 3.0 x8 slot, with two MiniSAS HD cables going to each backplane.

All drives are recognized at 12Gbps in both the BIOS and storcli64.
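If anyone wants to double-check that, per-drive link speeds can be listed with something like this (assuming controller index 0):

storcli64 /c0/eall/sall show all | grep -i 'link speed'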

No optional vdevs or features were configured (no dedup, no log devices, etc.).
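For reference, the layout is equivalent to something like this sketch (placeholder pool and device names, not the actual creation commands):

# 3x 12-wide RAIDZ1 vdevs, no log/dedup/special vdevs
zpool create tank \
  raidz1 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
  raidz1 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx \
  raidz1 sdy sdz sdaa sdab sdac sdad sdae sdaf sdag sdah sdai sdaj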

I'm seeing about 2.6 GB/s writes and 4+ GB/s reads when I run:

fio --ramp_time=5 --gtod_reduce=1 --numjobs=1 --bs=1M --size=100G --runtime=60s --readwrite=write --name=testfile
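Since the test file is smaller than my 128GB of RAM, ARC may be inflating the read number. To rule that out, I could re-run with a file several times larger than RAM, e.g. (fio will lay the file out first, which takes a while):

fio --ramp_time=5 --gtod_reduce=1 --numjobs=1 --bs=1M --size=512G --runtime=300s --readwrite=read --name=bigread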

Is this in line with what’s expected? If not, what’s the bottleneck in my setup?

I was thinking the SAS3 links should have plenty of headroom for the drives:

Front backplane: (2 connectors, 4 lanes each) = 8 lanes * 12 Gbps = 96 Gbps
Each drive = ~2 Gbps (~250 MB/s), 24 drives = 48 Gbps

Rear backplane: (2 connectors, 4 lanes each) = 8 lanes * 12 Gbps = 96 Gbps
Each drive = ~2 Gbps, 12 drives = 24 Gbps

Total aggregate drive throughput = 48 + 24 = 72 Gbps (~9 GB/s), well under the 192 Gbps of combined SAS3 links. Even after RAIDZ1 parity (3 of the 36 drives' worth of bandwidth), that still leaves roughly 8 GB/s for user data.

PCIe 3.0 x8 = ~7.9 GB/s theoretical (closer to ~7 GB/s usable)

But I seem to be getting less than half of that on writes, and I'm curious what I'm not taking into consideration.

Each drive reports around 250 MB/s when tested individually.
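For anyone who wants to reproduce that, a raw sequential read of a single drive looks something like this (/dev/sdX is a placeholder; --readonly guards against accidental writes):

fio --name=singledisk --filename=/dev/sdX --readonly --rw=read --bs=1M --direct=1 --runtime=30s --time_based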

8 of the 36 drives use 4K sectors; the other 28 are 512-byte.
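Sector sizes can be double-checked with:

lsblk -o NAME,PHY-SEC,LOG-SEC

and the pool's ashift with (pool name is a placeholder):

zpool get ashift tank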

Thanks, everyone, and hope to interact with you more down the road.