I don’t know why you’d want to saturate your network. As a general rule, that will just trigger congestion control and slow you down anyway. I’d think of it differently: how can you optimize your network usage?
If you go that route, you could apply VLANs, QoS settings, DSCP values, IPv6 aliases, and jumbo frames to push high volumes of traffic through your network without creating needless congestion for other applications or services. That seems like a better way to go.
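To give a feel for one of those knobs: part of the benefit of jumbo frames for bulk transfers is simply less framing overhead per byte of payload. Here's a rough sketch (assuming IPv4 + TCP with no header options and standard Ethernet framing; the numbers are the usual textbook overheads, not measurements from your gear) of how much of the wire actually carries payload at the default 1500-byte MTU versus a 9000-byte jumbo MTU:

```python
# Rough wire-efficiency comparison for standard vs. jumbo frames.
# Assumes IPv4 + TCP with no options and standard Ethernet framing:
# preamble/SFD 8 B, MAC header 14 B, FCS 4 B, inter-frame gap 12 B.

ETH_OVERHEAD = 8 + 14 + 4 + 12   # on-wire bytes per frame, outside the MTU
IP_TCP_HEADERS = 20 + 20         # bytes inside the MTU that aren't payload

def wire_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that carry application payload."""
    payload = mtu - IP_TCP_HEADERS
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{wire_efficiency(mtu) * 100:.1f}% payload efficiency")

# MTU 1500: ~94.9% payload efficiency
# MTU 9000: ~99.1% payload efficiency
```

The efficiency gain looks modest, but the bigger win is that jumbo frames mean far fewer packets per second, so less per-packet processing and interrupt load on both ends, which matters more at 10 or 20 Gbps than the raw percentages suggest.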
On the other hand, if your question is really "how much data can a given array read from or write to the network at a time," then your backplanes and striping schemes matter and you'll need to do some capacity planning. SATA-III tops out at 6 Gbps per drive and SAS at 12 Gbps, so if the backplane(s) behind your "spinning rust" are SATA-III and you're running a 6-wide stripe, back-of-the-envelope math puts your ceiling for sequential writes to the network at <= 36 Gbps. From that you subtract everything else: sustained platter transfer rates (well below the interface speed on HDDs), seek times, mirrored reads, cache misses, parity and checksum verification, metadata lookups, and packet encoding and fragmentation.

Read caches and metadata vdevs may improve the apparent times for some of that, but the work still has to happen, and if your cache drives are also SATA-III they may cap transfers at the cache drives' own max speed; I don't know enough about the caching internals to say for sure what happens in that particular case. If the cache drives are M.2 or PCIe, they may be faster than your aggregated stripe, but reads and lookups that miss the cache are still limited by how long it takes to pull that data off the HDDs.
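If it helps your capacity planning, here's that back-of-the-envelope math as a minimal Python sketch, comparing the raw interface ceiling against a media-limited estimate. The 250 MB/s sustained-sequential figure per HDD is an assumption for illustration only, not a measured value for your drives; parity, seeks, metadata, and cache misses will pull real numbers lower, often much lower for random I/O:

```python
# Back-of-the-envelope throughput estimate for a 6-wide SATA-III stripe.
# ASSUMED_SEQ_MBPS is an illustrative guess at sustained sequential
# throughput per HDD, not a spec for any particular drive.

STRIPE_WIDTH = 6
SATA3_LINK_GBPS = 6.0        # per-drive interface ceiling
ASSUMED_SEQ_MBPS = 250       # assumed sustained sequential MB/s per HDD

interface_ceiling_gbps = STRIPE_WIDTH * SATA3_LINK_GBPS
media_estimate_gbps = STRIPE_WIDTH * ASSUMED_SEQ_MBPS * 8 / 1000

print(f"Interface ceiling : {interface_ceiling_gbps:.0f} Gbps")
print(f"Media-limited est.: {media_estimate_gbps:.1f} Gbps")

for link_gbps in (10, 20):
    fill = min(media_estimate_gbps / link_gbps, 1.0) * 100
    print(f"{link_gbps} Gbps link: best-case sequential stripe could fill "
          f"~{fill:.0f}% of it")

# Interface ceiling : 36 Gbps
# Media-limited est.: 12.0 Gbps
# 10 Gbps link: best-case sequential stripe could fill ~100% of it
# 20 Gbps link: best-case sequential stripe could fill ~60% of it
```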
The tl;dr is that with the right backplanes it seems like you could saturate a 10 or 20 Gbps link with highly sequential, cache-optimized reads/writes even with SATA-III HDDs, but in practice I'd be surprised if you did with non-sequential reads/writes and varied usage spread across server-side files and services, and that's assuming non-abusive clients, of course.
Your best bet would be to ask your Dell contact what the max throughput of your system's backplane(s) is, and to talk to your network admin about how congestion control and QoS will affect the TrueNAS systems and vice versa. Pushing a sustained 20 Gbps (including packet overhead, ACKs, IP control traffic, etc.) through typical switched Ethernet fabric can result in dropped packets, queue overflows, large data retransmissions, timeouts in latency-sensitive applications, potential denial of service for additional inbound connections to the TrueNAS servers, and other problems. So even if you can do it, I'd treat maxing out your full TrueNAS Ethernet bandwidth as something to avoid rather than something to strive for. As always, your mileage may vary.