ZFS disk performance test help: 3-mirror-vdev pool gets 129 MB/s with fio

I am running Proxmox 7.4-19, hosting TrueNAS Core and passing a PCIe HBA through to connect the SATA drives. The card is a Broadcom LSI 9305-16i, passed through directly to the TrueNAS VM. I am using fio to test speeds, and I don't think I'm getting the throughput I should. Back-of-the-envelope, a stripe of three mirror vdevs should write at roughly the speed of one disk per vdev, so something on the order of 400-600 MB/s for 7200 rpm drives, yet I'm seeing about 129 MB/s. Any tips on what I should be doing differently? One obvious imbalance is that two of the mirrors are 8TB drives and one is 16TB drives; the 16TB vdev has more free space and so receives a larger share of the writes, but even accounting for that, the pool seems like it should be faster.
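In case it's relevant: I believe these FreeBSD commands would confirm the drives actually attach at full SATA speed behind the HBA (da0 is just an example device name, and I may be misremembering the exact grep):

camcontrol devlist                                 # list the disks CAM sees behind the HBA
smartctl -a /dev/da0 | grep -i 'SATA Version'      # negotiated link speed for one drive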

root@mynas[~]# zpool status
  pool: <pool redacted>
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 1 days 08:02:07 with 0 errors on Thu Jan 30 10:32:15 2025
config:

        NAME                                            STATE     READ WRITE CKSUM
        <pool redacted>                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            <gptid redacted>                            ONLINE       0     0     0
            <gptid redacted>                            ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            <gptid redacted>                            ONLINE       0     0     0
            <gptid redacted>                            ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            <gptid redacted>                            ONLINE       0     0     0
            <gptid redacted>                            ONLINE       0     0     0

errors: No known data errors

fio --ramp_time=5 --gtod_reduce=1  --numjobs=1 --bs=1M --size=100G --runtime=60s --readwrite=write --name=testfile
	testfile: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
	fio-3.28
	Starting 1 process
	testfile: Laying out IO file (1 file / 102400MiB)
	Jobs: 1 (f=1): [W(1)][100.0%][w=124MiB/s][w=124 IOPS][eta 00m:00s]
	testfile: (groupid=0, jobs=1): err= 0: pid=1552: Mon Feb 17 17:08:12 2025
	  write: IOPS=122, BW=123MiB/s (129MB/s)(7378MiB/60190msec); 0 zone resets
	   bw (  KiB/s): min=62735, max=132517, per=100.00%, avg=125632.78, stdev=9235.20, samples=118
	   iops        : min=   61, max=  129, avg=122.19, stdev= 8.99, samples=118
	  cpu          : usr=0.17%, sys=0.68%, ctx=7405, majf=0, minf=1
	  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
	     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
	     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
	     issued rwts: total=0,7378,0,0 short=0,0,0,0 dropped=0,0,0,0
	     latency   : target=0, window=0, percentile=100.00%, depth=1
	
	Run status group 0 (all jobs):
	  WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=7378MiB (7736MB), run=60190-60190msec
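Looking at the output again, my fio line leaves the defaults in place (ioengine=psync, iodepth=1, a single job), so it is really a single-threaded sequential write. Would a variant like the following be a fairer aggregate-throughput test? The parameters are my guesses rather than anything I have verified (posixaio since Core is FreeBSD, and end_fsync so ARC buffering doesn't flatter the number):

fio --name=seqwrite --readwrite=write --bs=1M --size=16G \
    --numjobs=4 --ioengine=posixaio --iodepth=8 \
    --ramp_time=5 --runtime=60s --time_based \
    --group_reporting --end_fsync=1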

zpool iostat -v 1
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth 
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
<pool redacted>                                 16.2T  12.9T      0    253      0   253M
  mirror-0                                      4.90T  2.37T      0     62      0  62.6M
    <REDACTED>                                  -      -      0     31      0  31.8M
    <REDACTED>                                  -      -      0     30      0  30.8M
  mirror-1                                      5.01T  2.26T      0     63      0  63.6M
    <REDACTED>                                  -      -      0     31      0  31.8M
    <REDACTED>                                  -      -      0     31      0  31.8M
  mirror-2                                      6.32T  8.23T      0    127      0   127M
    <REDACTED>                                  -      -      0     63      0  63.6M
    <REDACTED>                                  -      -      0     63      0  63.6M
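One thing I notice in the iostat above: mirror-2 is taking roughly twice the writes of the other two vdevs, which lines up with its much larger free space (8.23T versus ~2.3T), so the allocator seems to be doing what it's documented to do. Would watching per-disk latency while the test runs be a sensible next step, to see whether one mirror is dragging the stripe down? Something like this, if I have the flags right (-l should add latency columns on OpenZFS; gstat is the FreeBSD fallback):

zpool iostat -vl <pool redacted> 1    # per-vdev latency alongside bandwidth
gstat -p                              # per-disk busy % and ms/w on FreeBSD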

Sorry, I'm not sure if I can edit my post, but to be clear: I am running TrueNAS Core.

midclt call system.info
{"version": "TrueNAS-13.0-U6.7"

Another way to phrase this question: is there a wiki or a set of steps I can work through to understand why I'm getting the performance I'm getting, and how I might improve it? I would love a guide to debugging performance on a vdev. Thanks!

Maybe start here, if you have not found this already…
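If it helps, a rough bottoms-up sequence (device names are examples; the dd test below only reads, but never point dd's output at a pool member):

dd if=/dev/da0 of=/dev/null bs=1m count=4096   # raw sequential read from one disk, bypassing ZFS

Repeat that for each disk: a single slow outlier usually points at a cable, port, or drive, while uniformly slow disks point back at the HBA or the passthrough. Then re-run fio against the pool and compare the aggregate to the per-disk numbers.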