A rule of thumb is that a single vdev has roughly the write speed of a single drive. So adding more RAIDZ3 vdevs will probably help, with the largest gain coming from the first one you add, because that would double the speed.
All vdevs in a pool are striped. There is no option here. All vdevs should have the same topology, so you need to add another RAIDZ3.
That is factually incorrect. The number of write IOPS is limited to roughly one drive's worth per vDev, but the actual data throughput is based on the number of non-redundant drives in the pool.
The IOPS only becomes the limit when you are doing a lot of small reads or writes - otherwise throughput is the limit. That is normally the case for virtual disks/zVolumes/iSCSI or database tables, which do small 4KB reads and writes, and there what is even more important than using mirrors for IOPS is using mirrors to avoid read and write amplification.
So using 18x drives in 9x mirrored vDevs won’t have any more write throughput than using 12x drives in 1x RAIDZ3 vDev, because both layouts give you 9 data drives' worth of throughput.
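To put rough numbers on that comparison, here is a quick sketch (the 100MB/s per-drive figure is just an assumed placeholder, and this ignores IOPS, caching and record-size effects):

```python
# Back-of-the-envelope sequential-write estimate: throughput scales with the
# number of data (non-redundant) drives, not the total number of drives.
PER_DRIVE_MBPS = 100  # assumed per-drive write rate, not a measurement

def mirror_data_drives(total_drives, mirror_width=2):
    """Each N-way mirror vdev contributes one data drive's worth of throughput."""
    return total_drives // mirror_width

def raidz_data_drives(vdev_width, parity, vdev_count=1):
    """Each RAIDZ vdev contributes (width - parity) data drives."""
    return (vdev_width - parity) * vdev_count

layouts = {
    "18 drives as 9x 2-way mirrors": mirror_data_drives(18),
    "12 drives as 1x 12-wide RAIDZ3": raidz_data_drives(12, 3),
}
for name, data_drives in layouts.items():
    print(f"{name}: {data_drives} data drives, "
          f"~{data_drives * PER_DRIVE_MBPS} MB/s sequential write")
```

Both layouts come out at nine data drives, which is the point: for large sequential writes they are roughly equivalent.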
Expectations
@Lylat1an, you don’t give a lot of information about your pool layout except that it is RAIDZ3. A typical HDD has a peak write throughput of 150-250MB/s i.e. 1.2-2.0Gb/s, though this includes writing metadata blocks for each TXG (which are not sent over the network) and doesn’t take into account the significant time taken for seeks. If we assume (optimistically) that you will get 100MB/s (i.e. 800Mb/s) throughput per drive, and that metadata write overhead is 10%, then you will need 14+ data drives to be able to max out a 10Gb network, i.e. your RAIDZ3 pool would need to be 2x 10-wide RAIDZ3.
To achieve the same thing with mirrors and the same redundancy would require 14x 4-wide mirrors i.e. 56 drives!!!
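The arithmetic behind those drive counts, sketched out under the same assumptions (100MB/s per drive, 10% metadata overhead):

```python
import math

NETWORK_MBPS = 10_000 / 8   # a 10Gb/s network is roughly 1250 MB/s
PER_DRIVE_MBPS = 100        # assumed sustained write rate per HDD
METADATA_OVERHEAD = 0.10    # assumed share of drive bandwidth lost to metadata

effective = PER_DRIVE_MBPS * (1 - METADATA_OVERHEAD)            # ~90 MB/s per drive
data_drives = math.ceil(NETWORK_MBPS / effective)
print(f"Data drives needed to saturate 10Gb/s: {data_drives}")          # 14

# RAIDZ3: a 10-wide vdev has 10 - 3 = 7 data drives
print(f"10-wide RAIDZ3 vdevs needed: {math.ceil(data_drives / 7)}")     # 2 (20 drives)

# 4-wide mirrors give the same 3-disk redundancy but only 1 data drive per vdev
print(f"4-wide mirror vdevs needed: {data_drives} ({data_drives * 4} drives)")  # 56
```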
Cause of actual results
Again, OP doesn’t give details of pool design or actual throughput, but write throughput equivalent to a single drive sounds too low for RAIDZ3. If that is really what is happening, then it sounds to me like he is doing synchronous writes, i.e. a dataset configuration error.
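If sync writes are the suspect, the quickest check is the `sync` property on the receiving datasets. A minimal sketch that just wraps the stock `zfs get` command (the pool name is a placeholder):

```python
import subprocess

POOL = "tank"  # placeholder; substitute the actual pool name

# List the sync setting for every dataset in the pool. A dataset forced to
# sync=always (or clients requesting sync writes) can drag a RAIDZ pool
# down to roughly single-drive write speeds.
output = subprocess.run(
    ["zfs", "get", "-r", "-o", "name,value,source", "sync", POOL],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines()[1:]:       # skip the header row
    name, value, *_ = line.split()
    if value == "always":
        print(f"{name}: sync=always - a likely write bottleneck")
```

If something turns up, `zfs set sync=standard pool/dataset` restores the default behaviour.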
Thank you for responding. My pool consists of 8x 4TB drives.
I kept an eye on a 700GB transfer yesterday, and it didn’t go much faster than 125MB/s.
I was copying from a Samsung 990 Pro NVMe drive that’s dedicated to holding backups to be copied, and there’s another one in the server to receive them.
Now that I think about it, I tried sending the backup to the server’s SSD first and got slightly less performance; perhaps my network settings are wrong…
Update: I tried copying the backup from the pool back to the SSD on my main PC, and I’m getting up to 173 MB/s. So perhaps the network is working properly, and the pool’s writes are the limiting factor.
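For reference, plugging an 8-wide RAIDZ3 into the rule-of-thumb estimate from the post above (same assumed 100MB/s per drive and 10% metadata overhead; a rough sketch, not a benchmark):

```python
# 8-wide RAIDZ3 has 8 - 3 = 5 data drives
PER_DRIVE_MBPS = 100       # assumption carried over from the estimate above
METADATA_OVERHEAD = 0.10   # assumption carried over from the estimate above

data_drives = 8 - 3
expected = data_drives * PER_DRIVE_MBPS * (1 - METADATA_OVERHEAD)
observed = 125             # MB/s reported for the 700GB transfer

print(f"Rule-of-thumb expectation: ~{expected:.0f} MB/s")   # ~450 MB/s
print(f"Observed:                  ~{observed} MB/s")
```

The observed figure is well below that paper estimate, which fits the earlier suggestion to check dataset settings such as sync writes.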
It also depends in part on the config of the pool. My puny pool with a single Z3 VDEV and a sVDEV manages to eke out 400MB/s on a pretty sustained basis when I’m transferring large files, at least according to the macOS Activity Monitor measuring data throughput.
“Smaller” files or large collections of files (think rsync transferring sparse-bundle bands for Time Machine) will lower my network throughput.
Do not discount the positive impact that a sVDEV can have by pushing small files and metadata to the sVDEV (which usually consists of SSDs) while the larger files move to the HDD-based VDEV(s). This is on a pretty standard 10GbE network without using jumbo packets, etc.
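For anyone wanting to try the same thing, small-block routing to the sVDEV is controlled per dataset by the `special_small_blocks` property. A minimal sketch (pool/dataset name and threshold are placeholders, and this assumes a special vdev has already been added to the pool):

```python
import subprocess

DATASET = "tank/backups"   # placeholder dataset name
THRESHOLD = "64K"          # example cutoff; keep it below the dataset's recordsize
                           # so only genuinely small blocks land on the SSDs

# Blocks at or below the threshold are written to the special (sVDEV) devices;
# metadata goes to the special vdev regardless of this setting.
subprocess.run(["zfs", "set", f"special_small_blocks={THRESHOLD}", DATASET], check=True)
subprocess.run(["zfs", "get", "special_small_blocks,recordsize", DATASET], check=True)
```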