I'd like some ZFS advice and/or a reality check on performance and redundancy for this layout:
- Proxmox VE 8.3 boot pool:
- 3 x 500 GB SATA SSDs (Samsung 870 EVO) in ZFS RAIDZ1
- This will mainly be for booting, but occasionally I'll store a VM on it.
The operating system must boot from this RAIDZ1.
I need basic redundancy for the OS, but because I operate in a cluster this isn't the highest priority.
Write performance is pretty important. I've learnt over time that RAIDZ1 is good for reads, but the parity writes make writes slower. I'm happy to use more disks if that will increase write speed.
Data pool:
- 3 x NVMe drives (Samsung 990 Pro)
- RAIDZ1 on TrueNAS Core, exported over iSCSI
- 10 Gbit/s network
Unfortunately the server (a Dell R530) doesn't have slots for more NVMe drives, so if I had to "optimize", I'd have to fall back to SATA SSDs or spinning disks.
At first I thought I'd just stick with what I have, but after speaking to a ZFS specialist who mentioned all the terms below, I got confused and started feeling out of my depth:
- Block sizes
- SLOG & L2ARC
- RMW (read-modify-write) cycles
- ashift
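For context, this is roughly where those terms show up on the command line (I'm not asking anyone to explain the output; `rpool` is just Proxmox's default pool name, and the zvol name below is a made-up example):

```shell
# ashift: the pool's sector-size exponent (12 = 4 KiB sectors).
# A wrong ashift is one source of read-modify-write overhead.
zpool get ashift rpool

# Block sizes: recordsize for filesystem datasets,
# volblocksize for zvols (VM disks). Defaults are usually sane.
zfs get recordsize rpool
zfs get volblocksize rpool/vm-100-disk-0   # hypothetical zvol name

# SLOG (log vdev) and L2ARC (cache vdev) show up here if attached.
zpool status rpool
```

I'm listing these mostly so you can see what I've already poked at; I haven't changed any of the defaults.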
I was hoping to “stick to defaults” where I could.
I don’t need to understand all the lingo right now or become a ZFS expert; that will take time.
Instead, I just want a brief overview of whether you think my configuration is okay or if there are obvious holes.
If there is no easy way to speed up disk writes with RAIDZ1, then I’d like to know where I should focus my next installation if I want better write speeds.