It’s pretty hard to tell where the CPU bottleneck on throughput actually sits unless you have access to some pretty fat pipes, a fast SSD pool, etc. On top of that, there’s the question of which protocol you’re running and, within each protocol, which options you’ve enabled. For example, encrypted SMB3 may or may not impose a significant CPU burden depending on the CPU era: chips without hardware AES acceleration pay a much steeper price than recent ones with AES-NI.
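If you want to know which camp a given CPU falls in, the flags are easy to check. A minimal sketch, assuming a Linux-based box like TrueNAS SCALE (on FreeBSD-based CORE you’d poke at dmesg instead):

```python
# Check /proc/cpuinfo for the hardware crypto flags that decide how cheap
# SMB3 encryption/signing is on a given CPU. Linux-only sketch.

def cpu_crypto_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                # 'aes' = AES-NI; 'vaes' = vectorized AES on newer cores,
                # which is roughly the "CPU era" difference in practice.
                return {k: (k in flags) for k in ("aes", "vaes", "sha_ni")}
    return {}

if __name__ == "__main__":
    print(cpu_crypto_flags())
```

If `aes` comes back False, encrypted SMB3 is going to hurt no matter what the clocks are.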
I’m somewhat convinced that the 1.7 GHz base clock of my Xeon D-1537 is holding me back on 10GbE transfers, since I can watch a single thread pegged at 100% bounce from core to core as a long transfer progresses. For all I know, that may also be an artifact of the TrueNAS GUI dashboard.
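One way to take the dashboard out of the equation is to sample the kernel’s own per-core counters while a long copy is in flight. A rough sketch, assuming TrueNAS SCALE (Linux) and standard /proc/stat jiffy accounting:

```python
# Print the hottest core's busy% once a second, sampled straight from
# /proc/stat, so the GUI dashboard isn't in the loop. Ctrl-C to stop.
import time

def snapshot():
    cores = {}
    with open("/proc/stat") as f:
        for line in f:
            # per-core lines look like "cpu0 user nice system idle iowait ..."
            if line.startswith("cpu") and line[3].isdigit():
                name, *vals = line.split()
                vals = list(map(int, vals))
                idle = vals[3] + vals[4]  # idle + iowait
                cores[name] = (sum(vals), idle)
    return cores

prev = snapshot()
while True:
    time.sleep(1)
    cur = snapshot()
    busy = {c: 100 * (1 - (cur[c][1] - prev[c][1]) /
                      max(1, cur[c][0] - prev[c][0])) for c in cur}
    hot = max(busy, key=busy.get)
    print(f"hottest core: {hot} at {busy[hot]:.0f}%")
    prev = cur
```

If one core sits pinned near 100% while the rest idle, the single-thread ceiling is real and not a dashboard rendering quirk.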
For SMB in a SOHO setting, I’d counsel looking into low-core-count, high-clock dies like the Ryzen 8300GE. It should blow the doors off the X10 generation; the biggest issue is that you have to build a system from scratch (with an X710-DA2 and a 73xxx or whatever HBA) rather than luxuriating in an all-in-one option like the motherboard I bought.
Normally I’ve been getting about 400 MB/s when using my QNAP 10GbE SFP+ Thunderbolt adapter. A quick test just now (with a replication running in the background) got me about 3 Gbit/s, with somewhat perplexing Dashboard results.
Rather than seeing consistent load and throughput as the server settled into having 50+ GB shoveled into it via SMB, the numbers kept bouncing all over the place, as did the CPU loads. The screenshot above is a high; the lows are in the single digits.
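To separate real burstiness from dashboard smoothing, it’s worth timing the writes from the client side. A quick sketch, assuming a Python-capable client; DEST is a made-up mount path, so point it at your own share:

```python
# Stream incompressible data to the SMB share and print throughput per
# interval. If the per-second numbers swing as wildly as the dashboard,
# the burstiness is real (e.g. ZFS transaction-group flushes), not a GUI
# artifact. Client-side caching can flatter the first few seconds.
import os, time

DEST = "/Volumes/tank/throughput_test.bin"   # hypothetical macOS SMB mount
CHUNK = 8 * 1024 * 1024                      # 8 MiB per write
TOTAL = 20 * 1024**3                         # 20 GiB total

payload = os.urandom(CHUNK)                  # random bytes defeat compression

with open(DEST, "wb") as f:
    written = mark = 0
    t0 = time.monotonic()
    while written < TOTAL:
        f.write(payload)
        written += CHUNK
        now = time.monotonic()
        if now - t0 >= 1.0:
            print(f"{(written - mark) / (now - t0) / 1e6:,.0f} MB/s")
            mark, t0 = written, now
```

A steady ~400 MB/s here against a sawtooth on the server dashboard would point at ZFS write batching rather than a CPU problem.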
Just file storage for now.
Smaller files copy so quickly that they just disappear; you have to use larger archives to see the impact of sustained writes.
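If you don’t have a big archive handy, it’s easy to manufacture one. A throwaway sketch (filename and size are arbitrary placeholders); reusing one random slab keeps generation fast, and the random records still won’t squash under ZFS compression:

```python
# Build a large, incompressible blob for drag-and-drop copy tests. Small
# files finish inside the client's write cache before sustained-write
# behavior ever shows up; tens of GiB of random bytes make the server
# actually keep up. (With dedup on, the repeated slab would dedupe, but
# dedup is off by default.)
import os

SIZE = 50 * 1024**3          # 50 GiB, about the transfer size above
CHUNK = 64 * 1024 * 1024     # write in 64 MiB slabs

with open("testblob.bin", "wb") as f:
    block = os.urandom(CHUNK)
    for _ in range(SIZE // CHUNK):
        f.write(block)
```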