TrueNAS Recommendation for Proxmox

Hello dear community, I am pretty new to TrueNAS, but I'm thinking of using it as an iSCSI storage for my hypervisors.

Currently I have 20 x HPE 1.92 TB SAS SSDs, which I would like to continue using.

For CPU I was thinking about using 2 x Intel Xeon-S 4509Y which gives 32 logical cores.

RAM wise, I was thinking something from 512GB to 1TB.

Network is 25 Gigabit.

From my (basic) understanding of ZFS, I think I should be setting sync to disabled, and a separate SLOG device would not necessarily be required.
Layout wise I was thinking about two vdevs with 10 drives in raidz2.

My question to you is whether you think these specs would be sufficient, or if I would need to add more RAM or NVMe SSDs for caching.

It would also be awesome if you could give me tips to achieve the best performance here.

Thank you very much in advance for your help.

Not a good plan. For block storage you want IOPS, and for IOPS you want many vdevs. Striped mirrors are the way to go. Even then I doubt you’ll come close to saturating 25 GbE.
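For reference, a striped-mirror layout of those 20 drives would be created roughly like this (a sketch only; the pool name `tank` and the device names are placeholders, substitute your actual disk identifiers):

```shell
# Hypothetical layout: 20 drives as 10 striped two-way mirrors.
# Each "mirror daX daY" group is one vdev; ZFS stripes across all of them,
# so random IOPS scale with the number of vdevs rather than with raidz width.
zpool create tank \
  mirror da0  da1  \
  mirror da2  da3  \
  mirror da4  da5  \
  mirror da6  da7  \
  mirror da8  da9  \
  mirror da10 da11 \
  mirror da12 da13 \
  mirror da14 da15 \
  mirror da16 da17 \
  mirror da18 da19
```

In practice you would build this through the TrueNAS UI rather than the CLI, but the resulting vdev structure is the same.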


I get that, but that would mean going from around 30 TB usable to around 20 TB, right?
25 GbE is just what's available; I don't think I will be able to reach that.
Do you think a L2ARC would improve performance here?

Closer to 10 TB, really; you wouldn’t want it more than about 50% full.
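The back-of-envelope numbers behind that estimate, assuming two-way mirrors and the common guidance of keeping a block-storage pool at or below roughly 50% full:

```shell
# Rough capacity math for 20 x 1.92 TB drives in striped two-way mirrors.
# Values are scaled by 100 to stay in integer arithmetic.
drives=20
size_tb_x100=192                        # 1.92 TB per drive
vdevs=$((drives / 2))                   # each 2-way mirror contributes one drive's capacity
raw_tb_x100=$((vdevs * size_tb_x100))   # ~19.2 TB of mirrored capacity
usable_tb_x100=$((raw_tb_x100 / 2))     # ~9.6 TB at the ~50% utilization ceiling
echo "mirrored capacity: $((raw_tb_x100 / 100)).$((raw_tb_x100 % 100)) TB"
echo "practical ceiling: $((usable_tb_x100 / 100)).$((usable_tb_x100 % 100)) TB"
```

Hence "closer to 10 TB" of practically usable space from the 20 drives.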

I’d doubt it. I’d expect VM storage to involve mainly random reads/writes, and no cache is going to do much to help those.



Alright, understood. Thank you very much.

I am currently getting these speeds with the drives:

  • Seq Read: 2196 MB/s
  • Rnd Read: 212 MB/s
  • Seq Write: 1172 MB/s
  • Rnd Write: 121 MB/s

They are currently in a Storage Spaces Direct config.

Do you think I would be able to achieve at least those values even though it is a sub-optimal configuration?

I have a different take.

  1. Mirrors would be ideal; the primary reason is that block storage and raidz don’t work well together, because the stripe size is much larger with raidz.

  2. Performance should be good. 20 HDDs in mirrors can saturate 10 GbE; 20 SSDs can hopefully do 2.5x that bandwidth.

  3. ARC works well for VM loads, as the VMs themselves are not “random”

  4. L2ARC is not going to be a benefit unless it is significantly faster than the array’s performance; otherwise it will be a bottleneck.

  5. As the drives are enterprise drives, I don’t think a SLOG will be needed.
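To illustrate point 1: the stripe-size concern usually shows up as the zvol's volblocksize versus the raidz stripe width, whereas with mirrors you can simply pick a volblocksize to suit the guest workload. A sketch only (the pool and zvol names are placeholders, and 16K is just one common choice for VM block storage, not a universal recommendation):

```shell
# Hypothetical zvol backing an iSCSI extent for a Proxmox VM.
# -s makes it sparse (thin-provisioned); volblocksize is fixed at creation time,
# so it has to be chosen up front.
zfs create -s -V 500G -o volblocksize=16K tank/proxmox-vm1

# Leaving sync at its default lets the enterprise SSDs' power-loss protection
# handle sync writes safely, which is why a separate SLOG may not be needed.
zfs get sync,volblocksize tank/proxmox-vm1
```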