Pool, Zvol, iSCSI sizes

Hardware: 2 x EPYC 7532 (64 cores), 512 GB RAM, HBA connection to a JBOD enclosure.

Configuration:
8 x 4 TB in one pool as 4 x MIRROR | 2 wide | 3.64 TiB
4 x 1 TB (NVMe) in a second pool as 1 x MIRROR | 4 wide | 894.25 GiB

Would it be better (for performance) to lay out the NVMe drives as 2 x mirror, 2 wide?

Let me apologise now for the next question . . . which is a perennial “ask”.
I have configured my second pool with a zvol of 698.85 GB (81.2%), then created an iSCSI share of 500 GB.
Having read both older and newer posts, provisioning block storage seems tricky; I presume part of the problem is that one size doesn’t fit all?
I’d appreciate thoughts on the zvol size: should I stay just under 80%, or go much lower?
Does the provisioned size of the iSCSI share come into play, or is it irrelevant in relation to the zvol size?

The more vdevs, the better. So 2 x 2-wide mirror vdevs would be better than 1 x 4-wide mirror vdev.
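
For what it’s worth, here is a rough back-of-the-envelope comparison of the two NVMe layouts (a sketch in Python; the raw per-drive capacity is an assumption, and real usable space will be a little lower after ZFS metadata and slop space):

```python
# Rough comparison of the two candidate NVMe layouts.
# Assumption: a "1 TB" drive is roughly 0.91 TiB raw.
raw_tib_per_drive = 0.91

layouts = {
    "1 x mirror, 4 wide": {"vdevs": 1, "disks_per_vdev": 4},
    "2 x mirror, 2 wide": {"vdevs": 2, "disks_per_vdev": 2},
}

for name, layout in layouts.items():
    usable = layout["vdevs"] * raw_tib_per_drive      # each mirror vdev yields one disk's worth
    failures_per_vdev = layout["disks_per_vdev"] - 1  # disks each vdev can lose
    print(f"{name}: ~{usable:.2f} TiB usable, {layout['vdevs']} vdev(s), "
          f"tolerates {failures_per_vdev} failure(s) per vdev")
```

Splitting into two 2-wide mirrors doubles both the usable space and the vdev count (so roughly twice the IOPS), at the cost of each vdev only surviving a single disk failure instead of three.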

The general guidance with zvols and iSCSI seems to be to keep the pool below 50% utilization; however, I speak from no personal experience here.

No experience here either, but the 50% guidance is for HDDs, to prevent excessive fragmentation; SSDs, being much better at random access, should be able to go over that, but you’re on your own to find the practical limit.
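
To put rough numbers on those thresholds for the 894.25 GiB NVMe pool (a quick sketch; these percentages are the commonly quoted guidelines, not hard limits):

```python
# Where the commonly quoted utilization guidelines land for the
# 894.25 GiB NVMe pool (illustrative only; the right ceiling for
# SSDs is workload-dependent).
pool_gib = 894.25

for label, fraction in [("50% (conservative block-storage rule)", 0.50),
                        ("80% (general ZFS guidance)", 0.80)]:
    print(f"{label}: zvol up to ~{pool_gib * fraction:.0f} GiB")
```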


Sorry - perhaps a bit stoopid here . . .
So a pool of 1 TB - zvol = 500 GB - iSCSI = 250 GB
or
a pool of 1 TB - zvol = 500 GB - iSCSI = c. 500 GB?

I don’t think it matters, so long as the pool is kept below the 50% mark, although take note of what @etorix said about SSDs.
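
If it helps, this is how I think about the layering (a sketch, assuming a thick/non-sparse zvol, which reserves its full volsize in the pool): the iSCSI extent sits inside the zvol, so the zvol size relative to the pool drives the worst-case utilization, while the extent size only caps what the initiator can see.

```python
# Layering sanity check: extent <= zvol <= pool. Worst-case pool fill is
# driven by the zvol size (assuming a thick zvol that reserves its volsize);
# the extent size just limits what the initiator can write into the zvol.
pool_gib = 1024.0   # the hypothetical 1 TB pool from the example above
zvol_gib = 500.0
extent_gib = 250.0  # or 500.0; either way it must fit inside the zvol

assert extent_gib <= zvol_gib <= pool_gib
print(f"worst-case pool utilization once the zvol fills: ~{zvol_gib / pool_gib:.0%}")
```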

The 50% rule for block storage is explained here:

Note: it has to do with the cost of a seek. As such, SSDs may obviate the problem.

Also, the above link shows why 90% utilization is an issue, but it gets worse: above roughly 90% (or is it 95%?) utilization, ZFS changes its free-space-finding algorithm, which slows you down further.