I finally upgraded my Internet connection to fiber and was testing transfer speeds. I’m hitting a very strange bottleneck when downloading via wget/curl directly to my TrueNAS Scale 24.10 system: it tops out at around 88 Kb/s (yes, Kb/s).
The system is cabled via a 10Gb SFP+ NIC to my managed switch, which connects to the rest of the network and the router over 10Gb fiber. Transfers to and from the server over SMB/SFTP are plenty fast, and Docker containers running under Dockge on the system don’t seem to have any limitations. However, a direct download to the server via wget/curl from the command line tops out at 88 Kb/s. For reference, I’m testing with:
curl -O https://ash-speed.hetzner.com/10GB.bin
The same test meets bandwidth expectations on other systems on the network, but downloading to my TrueNAS Scale server hits a weirdly low limit. I also ran iperf from this machine to another server and saw no limitations. No other transfers were taking place on the system at the time, so I’m not sure what the issue is.
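For anyone who wants to reproduce the comparison, here’s a minimal sketch of the two tests, assuming iperf3 is installed on both ends (192.168.1.50 is a placeholder for the second machine’s address):

# On the other machine: start an iperf3 listener
iperf3 -s

# On the TrueNAS shell: run a 30-second TCP test against it
# (192.168.1.50 is a placeholder for the listener's address)
iperf3 -c 192.168.1.50 -t 30

# Then run the HTTP download on the same box and compare
curl -o /dev/null https://ash-speed.hetzner.com/10GB.bin

If iperf3 is fast and curl is slow on the same box, the problem is above the link layer rather than the NIC or cabling.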
I’ve reviewed this post and the 10-Gig Networking Primer, but nothing there seems to explain the slow curl speeds while other types of transfers to/from the same system seem to be happy and snappy.
Any ideas as to what I might poke at to remove the bottleneck?
Also noting that this download bottleneck seems to apply to pulling updates for Docker containers, but not to the applications running inside the containers. Stumped.
I can also confirm that downloading updates on the 24.10.x release is incredibly slow (200 KB/s max).
This is from New Zealand, but I’m not experiencing anything like this on the rest of the network.
I’m also seeing TrueNAS updates come down at about 680 Kb/s on my 3 Gbps line. Is this perhaps a driver issue? The speed limit only seems to affect curl from the TrueNAS command line, downloading TrueNAS updates, and pulling Docker images; everything else running on the system seems uninhibited. FYI, I’m running an Intel X540-T2 dual-port 10GbE NIC with both ports connected at 10GbE and a bridge created across both ports. I see the same issue with single or dual ports in the bridge, and with a 10Gtek SFP+ adapter card as well.
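If anyone wants to rule out link negotiation or the driver itself, a quick check from the TrueNAS shell (enp3s0 here is a placeholder for your actual interface name):

# Confirm the negotiated link speed and duplex
ethtool enp3s0

# Show which kernel driver and firmware version the NIC is using
ethtool -i enp3s0

# Watch for error/drop counters climbing while a slow curl runs
ip -s link show enp3s0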
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 10 10.0G   10 1102M    0     0  57.3M      0  0:02:58  0:00:19  0:02:39 55.8M
Link speed for the interface used for the download was 1 Gb, not 10 Gb, though. And I’m on 25.10.
I’ve been struggling with this as well. Any significantly sized container will time out during upgrades and installation, because containers of 2-3 GB take longer than 5 minutes to download. I end up having to drop to a shell and pull the image directly with Docker; the upgrade or install then works, since the image is already in the local store.
This also happens with system updates. I have to download the update manually outside of the web UI, then perform a manual update. My container workaround is sketched below.
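For reference, the workaround looks something like this (ghcr.io/example/app:latest is a hypothetical image name; substitute whatever image the app actually uses):

# Pull the image by hand so the upgrade finds it cached locally
docker pull ghcr.io/example/app:latest

# Confirm it landed in the local image store
docker image ls

Retrying the upgrade or install in the UI then succeeds, because the layers are already present locally and nothing has to be re-downloaded.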
Yet I have containers (apps) running that regularly download from the internet at speeds up to 80 MB/s (640 Mbps).
My system runs on an LACP trunk across 2 x Broadcom NetXtreme II BCM57800 NICs. This server can crank out pretty respectable IOPS and throughput IF the middleware isn’t the one doing the work. Anything I try to do through the middleware is incredibly slow for a system of this size.
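In case it helps anyone compare, the state of the LACP bond can be inspected from the shell like this (bond0 is a guess at the bond’s name on a given system):

# Show LACP negotiation status, member links, and their speeds
cat /proc/net/bonding/bond0

# Per-bond traffic and error counters to watch during a slow pull
ip -s link show bond0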
-----
TrueNAS Scale 25.04.2.4 (migrated from Core 13.3 via 24.10)
Dell PowerEdge R820
4 x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz
322GB ECC DDR3 1333MT/s RAM
2 x Broadcom NetXtreme II BCM57800 10Gbps in LACP L2+3 Hash policy, 9000 MTU
3 x LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03), 6Gbps
1 x LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05), 6Gbps
Currently 105 disks attached; mix of SATA SSDs and SAS 7.2k, 10k, and 15k RPM disks
System pool is on 10 x 3-wide mirror vdevs
If you have a 9000 MTU on the interface you use to reach the internet, that may be part of your issue. You shouldn’t try to route jumbo frames through a firewall. Does the problem go away if you use a 1500 MTU?
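A quick way to test whether jumbo frames actually survive the path (8.8.8.8 is just an example target; the payload sizes are the MTU minus 28 bytes of IP + ICMP headers):

# Don't-fragment ping sized for a 9000 MTU (9000 - 28 = 8972);
# failures here mean something on the path can't pass jumbo frames
ping -M do -s 8972 -c 4 8.8.8.8

# The same probe sized for a standard 1500 MTU (1500 - 28 = 1472)
ping -M do -s 1472 -c 4 8.8.8.8

If the 8972-byte probe fails while the 1472-byte one succeeds, the path is dropping jumbo frames and the host falls back to painful retransmission behavior.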