Hi,
I haven't been able to find any recommendation on recordsize for datasets that are only used to store vSphere VMDK files, and the documentation doesn't say much about it either.
So: does anyone have any general recommendations, or is the default 128K decent enough? I'm mainly interested in performance; any space savings from compression are secondary.
The VMDK files themselves range from a few gigabytes to a few hundred gigabytes, and they're a mix of pretty standard Windows and Linux application servers, domain controllers and so on - no real databases (of significance) or file servers.
Most likely all of them use the default 4K cluster/block size for NTFS and ext4.
The dataset is shared via NFS and sits on mirrored NVMe disks, if that makes any difference.
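For what it's worth, I know the setting itself is easy to check and change per dataset - something along these lines (pool/dataset name is just a placeholder), and as I understand it a change only applies to newly written blocks:

    zfs get recordsize tank/vmware
    zfs set recordsize=64K tank/vmware    # e.g. try 64K instead of the default 128K

I'm just not sure which value, if any other than the default, actually makes sense for this workload.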