Would it be possible to get the fio settings and methodology for these tests, just to compare against our own systems and see if we're ballparking the same numbers?
Maybe @HoneyBadger can help with that.
The performance numbers in that document appear to be theoretical values, all based on a 'standard' HDD, and not actual measured performance. I have seen those performance equations before as they relate to ZFS. The real world will not match them: at the very least, ARC/L2ARC caching of reads and CPU limitations on compression speed (and simply on traversal through the ZFS code) will skew real-world measurements.
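For reference, the ZFS sizing equations I have seen are rules of thumb of roughly this form (my assumption about what that document used; the notation here is mine, not theirs):

```
% Rule-of-thumb vdev models for a 'standard' HDD (assumed form, not taken from the document)
% RAIDZ vdev of N disks, p of them parity:
T_{\text{stream}} \approx (N - p)\, T_{\text{disk}}              % streaming throughput
\text{IOPS}_{\text{rand}} \approx \text{IOPS}_{\text{disk}}      % random IOPS: roughly one disk's worth per vdev
% N-way mirror vdev:
\text{IOPS}_{\text{read}} \approx N \cdot \text{IOPS}_{\text{disk}}, \qquad \text{IOPS}_{\text{write}} \approx \text{IOPS}_{\text{disk}}
```

None of these account for ARC hits, compression, or recordsize effects, which is exactly why measured numbers diverge.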
In the past I have seen 4KiB or 32KiB blocks used for random IOPS measurements and 1MiB blocks used for throughput. Once again, note that those equations are for isolated read and write operations. In the real world there is (almost) always a mix of reads and writes, and of sequential and random access. Exceptions are things like security camera feeds, which are large sequential writes with very occasional sequential reads.
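If you want to reproduce those isolated baselines with fio, something like the following sketch should work (the directory, size, and runtime are placeholders; point the test directory at a dataset on the pool under test):

```
# Hypothetical isolated baselines -- adjust directory/size/runtime to suit.
# Random-read IOPS at 4KiB:
fio --name=randread-4k --directory=/mnt/tank/fio-test --rw=randread \
    --bs=4k --size=4g --runtime=60 --time_based --iodepth=16 \
    --numjobs=4 --group_reporting
# Sequential-read throughput at 1MiB:
fio --name=seqread-1m --directory=/mnt/tank/fio-test --rw=read \
    --bs=1m --size=4g --runtime=60 --time_based --iodepth=8 --numjobs=1
```

Run the write-side equivalents (randwrite/write) separately, and keep in mind that on ZFS the ARC can still serve reads from cache unless the working set is much larger than RAM.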
For my own testing I tend towards a mix of 70% read / 30% write and 50% random / 50% sequential, with a compromise block size of 32KiB. The result is a single number, in either MiB/s or IOPS, which I then use to compare configuration changes.
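Expressed as an fio command, that mix looks roughly like this (a sketch, assuming fio's percentage_random option for the random/sequential split; the directory is again a placeholder):

```
# Hypothetical mixed workload: 70% read / 30% write,
# 50% random / 50% sequential, 32KiB blocks
fio --name=mixed-32k --directory=/mnt/tank/fio-test \
    --rw=randrw --rwmixread=70 --percentage_random=50 \
    --bs=32k --size=4g --runtime=120 --time_based \
    --iodepth=16 --numjobs=4 --group_reporting
```

The aggregate bandwidth and IOPS figures fio reports at the end are the single comparison numbers I mentioned.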