I’m trying to balance performance, capacity, and risk across different workloads and storage needs. Below is what I have in mind for the different pools and storage configuration, and I just want to confirm this would be OK. This setup serves a small company of 20 employees, not a homelab.
All-flash vSAN cluster - high performance VMs
zpool 1: 4 vdevs of 2-way mirrors (8 SAS disks) - NFS datastore for low-performance VMs. (I figured the small random reads/writes from VMs could benefit from a higher number of vdevs; rough CLI sketch of all four layouts after this list.)
zpool 2: 2 vdevs of 2-way mirrors (4 SATA disks) - NFS repository for all Veeam VM backups. (I’m not sure about this setup. Do Veeam backups consist of large sequential files, or small random read/write operations like live VMDK files? Maybe 1 vdev of 4-disk raid-z1 would be sufficient? The higher capacity is for longer backup retention.)
zpool 3: 1 vdev of raid-z2 (8 SAS disks) - SMB file storage and sharing. (Mostly static files that won’t get significant read/writes, so I opted for higher capacity over performance.)
zpool 4: 1 vdev of raid-z1 (5 SATA disks) - ZFS snapshot backups of all 3 zpools above. (Just a backup pool, so it doesn’t need a high level of redundancy or performance.)
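For concreteness, here’s roughly what I mean as raw zpool create commands. This is only an illustrative sketch with made-up pool and device names (da0, da1, ...); on TrueNAS I’d actually build these in the web UI, not the CLI. The recordsize line under zpool 2 is just a tweak I’ve seen suggested for backup repositories, assuming the answer to my Veeam question is “large sequential files.”

```
# Illustrative only -- made-up pool/device names; on TrueNAS these would be
# built through the web UI rather than the CLI.

# zpool 1: 4 x 2-way mirror vdevs (8 SAS disks) - NFS datastore for VMs
zpool create vmstore \
  mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# zpool 2: 2 x 2-way mirror vdevs (4 SATA disks) - Veeam repository
zpool create veeam mirror da8 da9 mirror da10 da11
# If Veeam writes turn out to be large and sequential, a bigger recordsize
# on the repository dataset is a common suggestion (unverified assumption):
zfs create -o recordsize=1M veeam/repo

# zpool 3: 1 x raidz2 vdev (8 SAS disks) - SMB file storage
zpool create smbshare raidz2 da12 da13 da14 da15 da16 da17 da18 da19

# zpool 4: 1 x raidz1 vdev (5 SATA disks) - snapshot backup target
zpool create backups raidz1 da20 da21 da22 da23 da24
```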
We currently have 2 TrueNAS instances for these 4 pools, but they’re within the same cluster. Another offsite TrueNAS instance is planned for a future deployment.
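For zpool 4 and the future offsite box, I’m assuming the usual snapshot-plus-replication flow. In practice I’d configure this as TrueNAS periodic snapshot and replication tasks, but under the hood it boils down to something like the following (hypothetical dataset, snapshot, and host names):

```
# Hypothetical dataset, snapshot, and host names.
zfs snapshot -r vmstore@auto-2025-01-01

# Incremental send of the VM datastore into the local backup pool (zpool 4):
zfs send -R -i vmstore@auto-2024-12-31 vmstore@auto-2025-01-01 | \
  zfs receive -F backups/vmstore

# Same idea for the planned offsite TrueNAS, just piped over SSH:
zfs send -R -i vmstore@auto-2024-12-31 vmstore@auto-2025-01-01 | \
  ssh offsite-nas zfs receive -F tank/offsite/vmstore
```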
Yes, I just now realized that about the NFS datastore the VMs live on. But that’s what the Veeam backups (zpool 2) are for, and then the backup of the backup, lol (zpool 4).
I’d think 2-way mirrors plus backups for our VM NFS datastore would be a better solution than using those drives for 3-way mirrors with no backups?
I don’t believe iXsystems will provide guidance, as these are self-built servers. This is our setup at a colocation datacenter: