The premise of all this is that a supported NIC and rdma-core are already present in the system.
iSER: This should be the simplest. You only need to enable the iSER function in the relevant components and add a switch in the webui to control whether it is enabled.
NVMe-oF: The minimum support requires adding nvmet-rdma. NVMe-oF support is also missing in the webui, but it is very similar to iSCSI. (In the current market NVMe-oF supports RoCE by default, so an NVMe-oF offering that does not support RoCE is pointless.) A sketch of the kernel-side plumbing follows this list.
SMB Direct: Obviously it depends on samba…
NFSoRDMA: This is similar to the iSER situation: once a supported NIC and rdma-core exist in the system, only one RDMA port needs to be added.
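To make the NVMe-oF and NFSoRDMA items concrete, here is a minimal sketch of the kernel-side plumbing, assuming nvmet-rdma is loaded, configfs is mounted, and the NIC/rdma-core premise above holds. The NQN, zvol path, and address are made-up examples:

```python
# Sketch: export a block device over NVMe-oF/RDMA via the kernel's
# nvmet configfs interface (run as root; all names below are examples).
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2024-01.io.example:testsubsys"  # hypothetical subsystem NQN

# Create the subsystem; allowing any host is fine for a lab test.
subsys = NVMET / "subsystems" / NQN
subsys.mkdir()
(subsys / "attr_allow_any_host").write_text("1")

# Back namespace 1 with a zvol (hypothetical path).
ns = subsys / "namespaces" / "1"
ns.mkdir()
(ns / "device_path").write_text("/dev/zvol/tank/testvol")
(ns / "enable").write_text("1")

# Create an RDMA port on the RDMA-capable NIC's address.
port = NVMET / "ports" / "1"
port.mkdir()
(port / "addr_trtype").write_text("rdma")
(port / "addr_adrfam").write_text("ipv4")
(port / "addr_traddr").write_text("192.0.2.10")  # example NIC address
(port / "addr_trsvcid").write_text("4420")       # default NVMe-oF service port

# Publish the subsystem on the port.
(port / "subsystems" / NQN).symlink_to(subsys)

# NFSoRDMA really is "just one RDMA port": 20049 is the IANA-assigned
# NFS/RDMA port, and nfsd picks it up from its portlist file.
Path("/proc/fs/nfsd/portlist").write_text("rdma 20049\n")
```

For the iSER item, targetcli-fb exposes a similar one-line toggle (`enable_iser true`) on iSCSI portals, which is presumably what the proposed webui switch would flip.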
Still have an open FR for a similar request if anyone wants to track the (nonexistent) progress. Ideally a paying customer would say they have a use case for this ;)
I’m interested in hearing real use cases where these technologies bring an actual economic or application benefit. Please describe them here or in the NAS ticket.
Bumping this for NVMe-oF target support, especially with RDMA. TrueNAS already ships with the built-in kernel support by default.
Back on the old forums I yapped a bit about this and about how much more performance it gives over iSCSI. Other people eventually confirmed it, even with just the TCP transport vs iSCSI (without iSER).
At least one bigger storage appliance vendor offers target support, ESXi has initiator support for it (allowing faster IO for their VMs), and even Microsoft is finally working on its own initiator for Windows Server 2025.
So I’d say it’s a good time to get in on this.
If you’re suggesting just using regular Ethernet: the reason I said homelab or SMB user is because it’s still expensive.
I can get a ConnectX-3 Pro (it must be the Pro) to do RDMA and iSER over 40Gb for $30, and a switch for under $200. Compare that to 50Gb Ethernet, where much of the gear costs more.
I like the potential to be in microseconds instead of milliseconds for latency.
The bandwidth numbers were not that amazing either… 3–4 GB/s (though on a 40Gb link that would already be most of the 5 GB/s line rate).
We get about the same with standard iSCSI. We would assume we get more with iSER, but have not yet tested.
iSER is planned to start in Enterprise for a few reasons:
We need to fund our development and testing (not important to you, but critical for me)
We want to constrain the testing to specific cards and switches
We have no idea how to support homelabs with random hardware/switches and RDMA (TCP/IP is more forgiving and diagnosable)
We don’t object to anyone following Level1Techs’ advice… but we can’t help fix issues. So it’s only for users with strong skills and high desire.
There are two options for Feature Requests:
iSER for Community (provide reasons other than “I want to test”)
Is there a way for a technology person to get a “tester” Enterprise license?
I don’t think anyone would object if you provided features that are available but unsupported at the community level; that is much better than not providing the functionality at all (for the community, that is).
At some point I would even have paid to get Enterprise functionality (had that been an option with non-Enterprise hardware), and I still think there is demand for that.
Besides, there are lots of smart people in the community who can provide help and/or guides on how to use these features even if you don’t support them directly.
It also would be totally fine (imho) to limit the out-of-the-box functionality to certain hardware.
Mellanox (Nvidia) makes running RoCE simple on the 100G switches (and newer, I assume); it’s a single CLI command.
iWARP does not even need switch support, just the protocol-enabled services on the server side.
So just say you support
Chelsio T6 or higher
Mellanox CX4 or higher with MLX switches
CX3 Pro+ with older MLX switches (56G) could still use the server-side services, but the config would need to rely on community/vendor documentation (a quick host-side sanity check is sketched below)
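Whichever of those you run, a first sanity check is whether rdma-core actually enumerates the card; a minimal sketch using the `ibv_devinfo` tool that ships with rdma-core (assuming it is installed on the host):

```python
# Sketch: confirm rdma-core sees at least one RDMA device (ConnectX,
# Chelsio T6, ...) before attempting any iSER/NVMe-oF configuration.
import subprocess

result = subprocess.run(["ibv_devinfo"], capture_output=True, text=True)
if result.returncode != 0 or "hca_id" not in result.stdout:
    raise SystemExit("No RDMA device visible; check NIC, driver and rdma-core")

# A port state of PORT_ACTIVE means the link (and, for RoCE, the
# switch-side config) is up.
print("RDMA device(s) found; port states:")
for line in result.stdout.splitlines():
    if "state:" in line:
        print(line.strip())
```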
Wrt your two options, I think both have merit, with the second probably being easier for you to handle (Enterprise license without support); I’d take either ;)
As far as use cases go: I think everyone is hoping for a speed-up of their network transfers, a way to bring the theoretical pool performance (SSD/NVMe arrays) closer to the client, be that a hypervisor or a beefy workstation (e.g. one running video workloads).
It looks like they’re using the “Peak Performance” profile in CrystalDiskMark, which runs the RND4K latency test at QD32 and T16. Strikes me as a pointless corner case: “let’s see how long it takes to push water really, really hard through clogged pipes”.
On that particular test my [non-RDMA] iSCSI numbers are roughly similar to the above, FWIW.
A “normal” Q1T1 test on this same system (in µs):
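For anyone who wants to reproduce a comparable Q1T1 random-4K latency number from a Linux client, a hedged fio sketch; `/dev/sdX` is a placeholder for the attached iSCSI/iSER LUN (a read-only workload, but double-check the device node anyway):

```python
# Sketch: fio equivalent of CrystalDiskMark's RND4K Q1T1 read test.
import subprocess

subprocess.run([
    "fio",
    "--name=rnd4k-q1t1",
    "--filename=/dev/sdX",   # hypothetical attached LUN
    "--rw=randread",
    "--bs=4k",
    "--iodepth=1",           # QD1 ...
    "--numjobs=1",           # ... on a single thread, i.e. "Q1T1"
    "--direct=1",            # bypass the page cache
    "--ioengine=libaio",
    "--time_based=1",
    "--runtime=30",
], check=True)
```

fio’s completion-latency (`clat`) output is then roughly comparable to CrystalDiskMark’s µs figures.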
Bottom line is, TrueNAS doesn’t just happen for free. Some features are very specifically Enterprise, targeted towards our paying customers. iSER and related RDMA features are items that nearly all typical home users aren’t going to be using in production, so it makes sense to reserve them for our Enterprise customers specifically. (If you do use them at home, you are pretty darn unique.)
If there is enough interest for access to these features under some paid license option, we’re happy to gauge that interest and see if that makes sense under some program at a later date.