I have a 2x10GbE link aggregation going through an unmanaged 10G Ethernet switch. Since the switch does not support any aggregation protocol, I used the loadbalance protocol for the two 10GbE ports.
For technical reasons that I'm still troubleshooting, the TrueNAS CORE server has downgraded the link of ix0 to 100baseTX, while ix1 is still at 10Gbase-T.
As I would normally expect, the total capacity should now be 10.1Gbps, since the first interface is at 0.1Gbps and the second at 10Gbps. But this is not the case: the total bandwidth of the aggregated link is only 10MB/s, and the system is not using the second interface to carry the rest of the traffic.
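As a quick sanity check on the numbers (a small sketch, assuming the 10MB/s figure is sustained payload throughput in megabytes per second):

```python
# Quick unit check: compare the observed throughput against the two link speeds.
# Assumption: 10 MB/s is sustained payload throughput (megabytes, not megabits).

observed_mb_per_s = 10                                # what the transfer actually achieves
observed_gbit_per_s = observed_mb_per_s * 8 / 1000    # MB/s -> Gbit/s

ix0_gbit_per_s = 0.1    # ix0 after the downgrade to 100baseTX
ix1_gbit_per_s = 10.0   # ix1 still at 10Gbase-T
expected_sum = ix0_gbit_per_s + ix1_gbit_per_s

print(f"observed : {observed_gbit_per_s:.2f} Gbit/s")   # ~0.08 Gbit/s
print(f"ix0 alone: {ix0_gbit_per_s:.2f} Gbit/s")
print(f"expected : {expected_sum:.2f} Gbit/s")
# The observed rate is roughly the 100baseTX line rate, which suggests the
# whole transfer is being carried by the downgraded port ix0, not spread over both.
```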
I'm not 100% certain, but I'm pretty sure you can't do this with an unmanaged switch, as you need to be able to set the switch ports to aggregating mode (that's what it's called on my Ubiquiti hardware) for link aggregation to function. My guess is that the switch is blocking the faster port because it sees it as a loop/error/duplicate, and whatever basic protection functions the unmanaged switch has are taking effect.
What Lagg Protocol did you select when you created the aggregation?
Hello @Spunky17, I used the LOADBALANCE option, since it is the only option that doesn't need a managed switch.
@pmh, thanks for your comment. I just can't understand why other distros (for example xcp-ng) can aggregate two Ethernet interfaces with a loadbalance protocol and also add the interface speeds together. I just didn't expect that. Can you suggest a managed switch for my case, or should I go with direct Cat 8.1 Ethernet from card to card?
Yep. But there are exceptions to this rule. MS servers, for example, have a kind of link aggregation that does not need any hardware support like LACP does.
What is your use case?
Just a small heads up: link aggregation does not mean that one client gets the combined speed. It can, but it isn't a given. So you don't get 20Gbit/s for a single client just because you bundle two 10Gbit/s links together.
Hello @Sara ,
my use case is back-end storage for a virtualization host over NFS.
The virtualization host (xcp-ng) is able to aggregate the 2x10Gbps into 20Gbps, and it also clearly shows that in the GUI.
@pmh hello Mr. Patrick,
I see that it is not working in practice. Then what's the point of “naming” it a link aggregation method if in reality it doesn't behave right in terms of speed?
It is working OK, though, for situations where one of the ports gets disconnected.
Link aggregation does provide the sum of the bandwidth of the individual ports - but not for a single TCP connection, and usually not even for multiple connections from a single host.
The packets are distributed over all links via a hash sum computed from the layer 3 (IP) and layer 2 (MAC) addresses.
So if you have a company network with 100 workstations and a server with a 4-port LACP link, of course all 4 ports can be saturated. This is what link aggregation is designed for. Nobody had home networks in mind.
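To illustrate the idea, here is a minimal conceptual sketch of hash-based port selection. It is not FreeBSD's actual lagg implementation; the hash (CRC32 here), the field layout, and the names `pick_port`/`PORTS` are just placeholders to show why a given client/server pair always lands on the same member port.

```python
# Conceptual sketch of hash-based load balancing over a 2-port lagg.
# Not the real FreeBSD algorithm - only to show why one client/server pair
# is always pinned to the same physical port.

import zlib

PORTS = ["ix0", "ix1"]  # member ports of the lagg

def pick_port(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str) -> str:
    """Hash the L2/L3 addresses of a flow and map it to one member port."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return PORTS[zlib.crc32(key) % len(PORTS)]

# A single NFS client talking to the server: every packet hashes the same way,
# so the whole conversation stays on one port.
print(pick_port("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                "192.168.1.50", "192.168.1.10"))

# Many different workstations: the hashes spread out, so all member ports
# can be used in parallel - the scenario link aggregation is designed for.
for host in range(1, 6):
    print(pick_port(f"aa:bb:cc:00:01:{host:02x}", "aa:bb:cc:00:00:02",
                    f"192.168.1.{100 + host}", "192.168.1.10"))
```

The first call returns the same port every time for that one client/server pair, which is why a single NFS mount cannot exceed the speed of one member link, while many distinct hosts spread across all of them.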