I would adjust record size by dataset / purpose. As per the sVDEV resource, consider a 1M record size for datasets holding large media files like images, videos, and similar content.
Smaller record sizes make more sense for databases and VMs, which do lots of small, random I/O. Matching the record size to the data can dramatically reduce the amount of metadata the pool has to carry and helps speed things up by allowing more contiguous writes and less time spent on metadata.
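For reference, this is the kind of per-dataset tuning I mean; the pool and dataset names are placeholders and the exact sizes depend on your workload, so treat these as a sketch rather than a recommendation:

```
# Placeholder pool/dataset names -- pick sizes to match your workload.
zfs set recordsize=1M tank/media        # large, mostly-sequential files: photos, video
zfs set recordsize=16K tank/databases   # small, random I/O: databases, VM images
# Note: recordsize only applies to data written after the change.
```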
In my case, I reduced the metadata needs of the pool by 4x and sped up sustained writes from about 250 MB/s to 400 MB/s via the sVDEV and recordsize adjustments. So I definitely recommend adjusting record size by media / data type. It saves space and speeds things up, at ZERO cost.
Now, is it possible that a larger small-file cutoff could be beneficial in your use case? It's entirely possible, but with your datasets at the default 128k record size, there's no way to know how much genuinely small 128k data is in the pool, since it's commingled with data from files that are really 256k+ in size and simply got chopped into 128k records.
To get ZFS to tell you, you'd have to raise the record sizes on the various datasets, rebalance the pool (the new record size only applies to rewritten data), and rerun the block-size histogram to see how much actual 128k-and-under content is really there.
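If memory serves, the histogram the sVDEV resource relies on comes from zdb; roughly something like this, with the pool name a placeholder:

```
# Block statistics including a per-size histogram; "tank" is a placeholder.
# On TrueNAS you may need to add "-U <path to zpool.cache>" so zdb can find the pool.
zdb -Lbbbs tank
# The "Block Size Histogram" section near the end shows how much data sits at each block size.
```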
So if keeping larger files on the sVDEV is important to you, I'd definitely buy more than the bare-minimum sVDEV capacity. In that case, I'd consider 2TB, since the cost is relatively low and it buys ample room to grow. As for where to set the small-file cutoff, it varies by dataset / use case.
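Either way, it's worth keeping an eye on how full the special vdev is as you tune the cutoffs; a quick check (pool name again a placeholder):

```
# Per-vdev capacity report; the special vdev shows its own SIZE / ALLOC / FREE.
zpool list -v tank
# If the special vdev fills up, new small blocks and metadata spill over to the regular vdevs.
```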
For example, you may have a dataset you want to keep 100% on the sVDEV due to speed / IOPS / whatever (e.g. virtual machines). For that use case, setting the small-file cutoff to the same size as the dataset's record size ensures the VM data stays 100% on the SSDs of the sVDEV.
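As a sketch of that (the names and the 64K size are just examples): since blocks at or below special_small_blocks land on the special vdev, matching it to the record size routes everything there.

```
# Example VM dataset: every data block is <= 64K, so all of it lands on the sVDEV.
zfs set recordsize=64K tank/vms
zfs set special_small_blocks=64K tank/vms
```

Just watch capacity if you do this, since that whole dataset now lives on the SSDs.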
Similarly, if you have a photo archive where most of the files are huge but a few small files are in there for thumbnails, etc., you could set something like a 1M record size for the dataset and a 64k or maybe 128k small-file cutoff to speed up the little stuff.
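For that kind of dataset, the settings would look something like this (again, placeholder names and example sizes):

```
# Big records for the large originals; thumbnails and anything else <= 64K goes to the sVDEV.
zfs set recordsize=1M tank/photos
zfs set special_small_blocks=64K tank/photos
```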