In many ways that diagram is misleading/incorrect.
SATA is 6Gbps per “port”, i.e. per drive.
SAS is 12Gbps per “port”.
NVMe uses PCIe, and multiple lanes per drive… but in practice it’s limited to x4.
The point is the above diagram should read 16GB/s, not 16Gbps. It’s 8x faster.
PCIe3 is 8GT/s, or about 1GB/s (Byte not Bit) PER LANE.
So, a PCIe3 x4 NVMe will do up to 4GB/s, and PCIe4 is twice as fast… and PCIe5 is twice as fast again… and PCIe6 is twice as fast again.
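Quick back-of-the-envelope (a Python sketch, rounding PCIe3 to 1GB/s per lane and ignoring encoding/protocol overhead):

```python
# Rough PCIe bandwidth arithmetic: ~1 GB/s per lane at PCIe3,
# doubling with each later generation.
def pcie_bandwidth_gbs(gen, lanes):
    per_lane = 1.0 * 2 ** (gen - 3)  # GB/s per lane, PCIe3 as the baseline
    return per_lane * lanes

for gen in (3, 4, 5, 6):
    print(f"PCIe{gen} x4 NVMe: ~{pcie_bandwidth_gbs(gen, 4):.0f} GB/s")
# PCIe3 x4 NVMe: ~4 GB/s
# PCIe4 x4 NVMe: ~8 GB/s
# PCIe5 x4 NVMe: ~16 GB/s
# PCIe6 x4 NVMe: ~32 GB/s
```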
Ergo, the max speed you will get out of the x16 PCIe3 slot on the X10SRi is 16GB/s.
Whether you’re going to 4x NVMe drives on a bifurcation card… or to an 8i or 16i 12Gbps SAS HBA.
24Gbps SAS is actually a thing… but with PCIe3-generation technology you’re really talking 12Gbps SAS.
ANYWAY, you can attach 16 drives directly to a SAS HBA which is in the x16 slot.
That gets you 16GB/s of bandwidth to the CPU… and 16x 12Gbps to the disks.
12Gbps is about 1.5GB/s, so a max disk-side speed of 24GB/s… but the bottleneck is the 16GB/s slot.
Using expanders you could connect to 24 SAS SSDs (or more), but the most you will see is 16GB/s out of that slot.
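To put numbers on the bottleneck (same rough assumptions: ~1GB/s per PCIe3 lane, ~1.5GB/s per 12Gbps SAS link):

```python
# Where the bottleneck sits: the x16 PCIe3 slot vs. the aggregate SAS links.
SLOT_GBS = 16 * 1.0     # x16 PCIe3 slot, ~1 GB/s per lane
SAS_12G_GBS = 1.5       # one 12Gbps SAS link is ~1.5 GB/s

for drives in (8, 16, 24):
    disk_side = drives * SAS_12G_GBS
    usable = min(disk_side, SLOT_GBS)
    print(f"{drives} drives: disk side ~{disk_side:.0f} GB/s, "
          f"usable ~{usable:.0f} GB/s (slot-limited: {disk_side > SLOT_GBS})")
# 8 drives: disk side ~12 GB/s, usable ~12 GB/s (slot-limited: False)
# 16 drives: disk side ~24 GB/s, usable ~16 GB/s (slot-limited: True)
# 24 drives: disk side ~36 GB/s, usable ~16 GB/s (slot-limited: True)
```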
Seems to me, if you like the idea of using SAS SSDs, then up to 16x 12G SAS SSDs connected to a 16i 12G SAS card in the x16 slot would be a good fit.
And you can get chassis that support that too.
BUT
I don’t think the CPU is going to keep up with 16GB/s anyway… and 10Gbps networking is only about 1GB/s…
Thus you’d start heading toward 25GbE or better, which would mean you’d be able to saturate 10GbE to at least 2 editors simultaneously (assuming the right switching hardware).
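And on the network side (hypothetical uplink sizes, line rate only, ignoring protocol overhead):

```python
# How many editors on full 10GbE links a given server uplink can feed at once.
def editors_saturated(uplink_gbps, editor_gbps=10):
    return uplink_gbps // editor_gbps

for uplink in (10, 25, 40, 100):
    print(f"{uplink}GbE uplink: ~{editors_saturated(uplink)} editors at a full 10GbE each")
# 10GbE uplink: ~1 editors at a full 10GbE each
# 25GbE uplink: ~2 editors at a full 10GbE each
# 40GbE uplink: ~4 editors at a full 10GbE each
# 100GbE uplink: ~10 editors at a full 10GbE each
```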
Thus, if capacity is important… you don’t actually need NVMe.