It is good to see these 100GbE switches becoming more affordable.
BUT AFAIK… they don’t support RDMA/RoCE, which requires lossless Ethernet switching.
For simpler networks, we recommend NVMe/TCP. It works well over 100GbE regardless of the switch and NICs.
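For anyone curious what the NVMe/TCP initiator side looks like in practice, here is a minimal sketch using nvme-cli on a Linux host; the IP address, port, and NQN are placeholders, not values from our testing:

```bash
# Discover the subsystems the target exports over NVMe/TCP (placeholder address/port)
nvme discover -t tcp -a 192.168.100.10 -s 4420

# Connect to one of the discovered subsystems by its NQN (placeholder NQN)
nvme connect -t tcp -a 192.168.100.10 -s 4420 \
    -n nqn.2005-10.org.example:nvme:target1

# The namespace should now show up as a local block device
nvme list
```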
We’re just completing performance testing of all the protocols over 100GbE and 400GbE networks using TrueNAS 25.10. Some observations so far:
For virtualization workloads (e.g. <32K I/O size)… iSCSI is competitive.
For high-bandwidth workloads (>1MB I/O size)… NVMe/TCP and NVMe/RDMA are about 20% better than iSCSI.
NVMe/RDMA is more complex to set up and maintain, but it is faster when reading from cache (ARC); in that case, the TCP overheads are significant. This is really an enterprise or HPC use case.
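For anyone who wants to run a similar comparison at home, a minimal fio sketch along these lines will contrast the two I/O profiles; the device path, block sizes, and queue depths below are illustrative choices, not the parameters of our test suite:

```bash
# Small-block random I/O, roughly the virtualization-style workload (placeholder device)
fio --name=small-io --filename=/dev/nvme1n1 --rw=randrw --bs=16k \
    --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 \
    --time_based --runtime=60 --group_reporting

# Large-block sequential reads for the high-bandwidth case
fio --name=big-io --filename=/dev/nvme1n1 --rw=read --bs=1M \
    --iodepth=16 --numjobs=4 --ioengine=libaio --direct=1 \
    --time_based --runtime=60 --group_reporting
```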
I’m curious about latency. My iSCSI “personal record,” after tweaking every tunable, setting, knob, configuration, and profile I could get my hands on, is about 21k IOPS Q1T1 4K random to/from ARC (small dataset to intentionally avoid the host’s disk subsystem). A more realistic real-world number was 18-19k. This is with TrueNAS SCALE on the back end and Windows Server 2022 at the front, using QL41162 NICs over 10Gb copper, without the NIC’s iSCSI offloads since they weren’t any faster.
I was never able to beat this number with ConnectX-4 Lx NICs and AOC cables at 25Gb. I’m curious whether either NVMe/TCP or iSER has anything to offer this facet of storage networking.
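For reference, a Q1T1 4K random-read test of the kind described above can be reproduced with something like the fio sketch below; the device path and runtime are placeholders, and on the Windows side the equivalent run would swap in --ioengine=windowsaio against the mounted volume:

```bash
# Queue depth 1, single worker, 4K random reads: isolates per-I/O round-trip latency
fio --name=q1t1 --filename=/dev/sdX --rw=randread --bs=4k \
    --iodepth=1 --numjobs=1 --ioengine=libaio --direct=1 \
    --time_based --runtime=60 --group_reporting
# At QD1, IOPS is just the inverse of mean latency: ~21k IOPS works out to
# roughly 47 µs per round trip, which is the number to watch in fio's clat output.
```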
Maybe so, but this is how I learn so that I can push things like this into the enterprise. And yeah, I know my homelab is probably way overkill, but if my baseline hardware can support those speeds and the low latencies, I’d like to squeeze out the best performance.
Also, in my case, and for reasons that are beyond the scope of this conversation, I really do need and want to use NFS instead of block-level storage here. NFSoRDMA is what I really want.
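If the server side ever exposes an RDMA-enabled NFS export, the Linux client side is just a transport option on the mount. A rough sketch, assuming RDMA-capable NICs and the kernel NFS/RDMA modules are available; the server name, export path, and NFS version are placeholders:

```bash
# Load the client-side NFS/RDMA transport (shipped as rpcrdma, with xprtrdma as an alias)
modprobe xprtrdma

# Mount the export over RDMA; 20049 is the standard NFS/RDMA port
mount -t nfs -o vers=4.2,proto=rdma,port=20049 \
    truenas.example.lan:/mnt/tank/share /mnt/share
```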