I run TrueNAS SCALE 24.10 with a mirror of two NVMe drives (no cache devices).
I have a 25G Mellanox ConnectX-4 card passed through to TrueNAS from Proxmox.
I enabled SMB with multichannel and configured my Windows 11 client with multichannel support.
But it doesn’t work. No matter what I try, I only see one connection in Get-SmbMultichannelConnection (or in its Get-SmbMultichannelConnection | ConvertTo-Json output).
smbstatus also shows only one connection on the TrueNAS side.
I’m pretty sure my client is fine: SMB multichannel works against an Ubuntu-based NAS (TerraMaster), and I do see multiple connections in the commands above.
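For reference, here is roughly how I’m checking on the client side (standard Windows SMB cmdlets; output details may vary slightly by Windows version):

Get-SmbClientNetworkInterface   # the client NIC should report RSS Capable = True
Get-SmbMultichannelConnection   # should list one row per TCP connection in the session
Get-SmbConnection | Select-Object ServerName, Dialect   # multichannel requires SMB 3.x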
I finally got SMB multichannel working with TrueNAS SCALE by setting custom smb_options using the TrueNAS CLI.
In my case I’m running a dual-port 10 Gbps NIC with both ports connected to the switch, bonded together and configured with LACP. In the TrueNAS SMB service advanced options I enabled the checkbox for SMB multichannel, and I also bound the SMB service to a single IP address using the “Bind IP Addresses” option. Then, from the TrueNAS CLI, I set the following smb_options:
service smb update smb_options="interfaces=\"192.168.1.3;speed=20000000000,capability=RSS\""
(Where 192.168.1.3 is the static IP the SMB service is bound to, and speed=20000000000 is the combined speed of my bonded interface.) In your case, with a single 25 Gbps NIC, you would use speed=25000000000 instead.
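So, putting that together for the single-NIC case (a sketch; 192.168.1.10 is a placeholder for whatever static IP your SMB service is bound to):

service smb update smb_options="interfaces=\"192.168.1.10;speed=25000000000,capability=RSS\""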
Also see the discussion in the following forum post, which also addresses using smb_options to get SMB multichannel working:
As noted in that post, despite what the TrueNAS documentation states, it is possible to get SMB multichannel working with just a single network interface, but it requires that your NIC supports RSS and that you configure TrueNAS SCALE with the appropriate smb_options.
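One way to sanity-check that the option actually made it into the generated Samba configuration is Samba’s standard testparm tool, run from the TrueNAS shell (a quick sketch; exact output formatting may differ):

testparm -s 2>/dev/null | grep -Ei 'interfaces|multi channel'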
Yes, when I was running Core 13 over 10 Gb fiber, I did some tuning, but that had to be removed from the interface during the migration to Fangtooth. I think it was assigned to the interface in the SMB advanced/aux options; I followed an article and it worked. But like @e7balt said, you basically force it to know your max speed along with enforcing SMB3, since SMB1 and SMB2 can’t do multichannel (I turned those off anyway). Pretty sure this NIC has a lot of tricks up its sleeve: a Mellanox ConnectX-4 Pro with dual interfaces.
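For anyone trying to reconstruct that, the auxiliary parameters were presumably along these lines (a sketch using Samba’s standard option names; the IP is a placeholder and the speed value assumes a 10 Gb link):

server multi channel support = yes
server min protocol = SMB3_00
interfaces = "192.168.1.2;speed=10000000000,capability=RSS"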
I am confused here. The whole point of SMB multichannel, as I understand it, is to increase fault tolerance (e.g., surviving a bad cable) and to increase throughput. You definitely do not get the fault tolerance from a single interface, and I haven’t seen much evidence to suggest that you’d actually get a performance benefit from doing this. So yes, you can do it; I’m just not sure why you would want to.
Also worth noting: in my previous testing, RSS actually made performance worse.
Changing RSS settings may still have some performance to unlock too, but in my testing so far, all of the variations I have tried resulted in performance degradation.
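If anyone wants to repeat that comparison on the Windows side, RSS can be toggled per adapter with the built-in cmdlets; “Ethernet” below is a placeholder adapter name:

Disable-NetAdapterRss -Name "Ethernet"   # test with RSS off
Enable-NetAdapterRss -Name "Ethernet"    # restore it afterwards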
Yeah, definitely a big benefit of using SMB multichannel is improved fault tolerance.
In my particular case I’m using a dual 10 Gbps port NIC on my TrueNAS box with both ports bonded using LACP. I’m very happy with the performance, as I have no problem achieving full 10 Gbps throughput between my TrueNAS box and my workstation (which has just a single 10 Gbit NIC) in my home network environment.
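If you want to confirm the bond is actually negotiating the aggregate rate, the Linux kernel exposes the LACP state from the TrueNAS shell (assuming the default bond name of bond0):

cat /proc/net/bonding/bond0   # shows LACP partner state and per-member link speed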
However, I do think in some scenarios leveraging SMB multichannel even with a single NIC can provide some additional performance. Microsoft provides the following example of how, even with a single network adapter, it is possible to see improvements in some workloads:
Single RSS-capable network adapter
In this typical configuration, an SMB client and an SMB server are configured by using a single 10-gigabit Ethernet (10 GbE) network adapter. When SMB is deployed without SMB Multichannel, and if there is only one SMB session established, SMB creates a single TCP/IP connection. With only a single CPU core, this configuration inherently leads to congestion, especially when many small I/O operations are performed. Therefore, the potential for a performance bottleneck is significant.
Most current network adapters offer a capability called Receive Side Scaling (RSS), which enables multiple connections to automatically spread across multiple CPU cores. However, if you use a single connection, RSS cannot help. When you use SMB Multichannel with a RSS-capable network adapter, SMB creates multiple TCP/IP connections for that particular session. This configuration avoids a potential bottleneck on a single CPU core if many small I/O operations are required.
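You can inspect the per-adapter RSS state Microsoft is describing from PowerShell on the client (queue counts and processor ranges will differ per system):

Get-NetAdapterRss   # lists each adapter's RSS state, queue count, and processor range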
Also, thanks for mentioning tuning the client-side settings; tuning those may definitely help some users achieve better performance.
At some point I’ll have to run benchmarks to gauge the impact of SMB multichannel being on or off, as well as the impact of adjusting some of the other settings you mention. I suspect some workloads may benefit more than others (as Microsoft states, SMB multichannel has the potential to help avoid single-CPU-core bottlenecks in scenarios with many small I/O operations). So that would be interesting to test out.
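For the on/off comparison, multichannel can be disabled on the Windows client without touching the server, and restored the same way:

Set-SmbClientConfiguration -EnableMultiChannel $false -Force   # disable for the benchmark run
Set-SmbClientConfiguration -EnableMultiChannel $true -Force    # restore the default afterwards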
Right, so you are doing load balancing at layer 2 and layer 3 at the same time. A side effect of this can be wasted traffic (extra overhead) and out-of-order packets.
I have no doubt it works fine for your use case. But scalability here is a problem, and it isn’t the “right” way to do it. That’s probably why this use case isn’t covered in our docs page linked earlier.