Link aggregation setup successful, but not aggregating

I successfully set up link aggregation, tying together eno2 and eno4 into bond1. I tested it, saved the configuration, set it up on the UniFi switch, and everything is online.

However, when I try multiple large file copies, the TrueNAS dashboard shows all the traffic on one port, which saturates at 1 Gbps.

I can’t link to an image of the network dashboard, but eno2 has minimal traffic and eno4 has all of the traffic. The bond traffic looks exactly like eno4.

Is there something additional I need to do to tell TrueNAS to balance the traffic between the ports?

Upon more research, I suspect this is a UniFi problem: a lack of load-balancing/layer 4 hash support.

Disregard, but leaving for future internet travellers.

Link aggregation is only likely to benefit you when you're using several client systems simultaneously. Are you?

It could very well be that UniFi is causing a problem, but it's often the case that LAGG doesn't do what people expect.


I'm trying to saturate the link by copying files from a TrueNAS share to multiple clients simultaneously. My understanding is that if I'm copying 4 files from TrueNAS to my Mac, another 4 to my PC, and another 4 to another server, all while streaming video from TrueNAS to my TV, the MAC address hashes should differ for at least one of those clients and I'll see traffic move across both links, ideally resulting in a > 1 Gbps outbound transfer rate from TrueNAS in the network stats.

If that’s not right, please let me know so I can adjust my expectations!
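For anyone else reasoning about this, here's a simplified model of how a layer 2 transmit hash assigns flows to links (the Linux bonding driver's `layer2` policy XORs the source and destination MACs and takes the result modulo the number of active links; the MAC addresses below are made up for illustration, and a real implementation hashes more than the last octet):

```python
def layer2_hash(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Simplified model of a layer 2 transmit hash: XOR the last
    octet of the source and destination MACs, modulo link count."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % n_links

# Hypothetical server and client MACs, 2-link bond.
server = "aa:bb:cc:dd:ee:10"
clients = {"PC": "11:22:33:44:55:01", "Mac": "11:22:33:44:55:02"}
for name, mac in clients.items():
    print(f"{name} -> link {layer2_hash(server, mac, 2)}")
```

Note that the assignment is deterministic per MAC pair: a single client always uses the same link, and two clients can still collide on one link if their hashes happen to match.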

I’d expect that four client machines would be enough that at least one would go through the other link. I’m not certain that would be the case, mind you, but I think you’re reasonable in expecting it. The throughput, OTOH, may not be reasonable if your pool is using spinners. Yes, they can probably sustain a transfer rate greater than 1 Gbit/sec, but they’re also trying to do 13 things at once, and access times are definitely going to slow things down.

Appreciate the insight.
Here's what I'm seeing (I wish I could post an image of the network traffic):

  1. Start a PC transfer by itself. It saturates the 1 Gbps link on eno2.
  2. End the PC transfer and start a Mac transfer. It saturates the 1 Gbps link on eno4.
  3. Start the PC transfer while the Mac is still transferring. The Mac's bandwidth drops so the total outbound stays at 1 Gbps on the bond; eno2 and eno4 each sustain around 500 Mbps.
  4. Stop the PC transfer and watch the Mac transfer rate climb back to 1 Gbps.

It sure looks like the box refuses to exceed 1 Gbps on the bonded link.

The TrueNAS server has 10 drives set up as 5 mirrors of two disks each. I wouldn't think IO would be the bottleneck here, though it might be if the files all happen to live on the same mirror. But the limit sits right at 1 Gbps, which seems like too big a coincidence.

Eureka!

I figured it out, and as always, the problem was between the keyboard and the chair.

I was, in fact, transferring from two clients that hashed differently and used different links. The problem is that both of those machines sit behind a downstream switch that was connected to the main switch by a single 1 Gbps uplink.

I tested with two clients on different ports of the main switch and instantly got 2 Gbps throughput.
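The general lesson here: aggregate throughput is bounded by the narrowest hop on the path, not by the bond's capacity. A trivial sketch (the link capacities in Gbps are illustrative, not measured):

```python
def path_throughput_gbps(hop_capacities: list[float]) -> float:
    """Throughput along a network path is bounded by its narrowest hop."""
    return min(hop_capacities)

# Before: 2x1G bond, then a single 1 Gbps uplink to the downstream switch.
print(path_throughput_gbps([2.0, 1.0]))  # -> 1.0
# After: clients moved to separate ports of the main switch.
print(path_throughput_gbps([2.0, 2.0]))  # -> 2.0
```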

Thanks again for the help!


Always hold a beverage with one hand when you sit down at the computer. This allows you to avoid responsibility. :nerd_face: