Build requirements for VM backing storage

I’m looking at migrating my XCP-ng cluster from local storage on M.2 drives to shared storage on U.2. I currently use TrueNAS primarily for bulk storage, and I know the constraints for VM backing storage are going to be different. I’m planning on using NFS, but I can go iSCSI or something else if the performance will be better.

I know I need to use mirrors for IOPS but in terms of CPU, do I want more cores or faster cores?

Do I need a SLOG if I’m using U.2 mirrors? A metadata vdev?

How limited will I be with 10G? Will moving to 25G significantly increase my hardware requirements? I assume jumbo frames and separate storage network are a given.

More RAM is always better, but should I be considering L2ARC?



Resource - The path to success for block storage | TrueNAS Community
Resource - Why iSCSI often requires more resources for the same result | TrueNAS Community

If you do iSCSI you likely want a SLOG: find something with good endurance and great performance under mixed operations… Optane drives are usually the most common recommendation here.
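If it helps, here's a rough sketch of what adding a SLOG looks like at the CLI. Pool name and device paths are hypothetical examples, not from your setup:

```shell
# Add a mirrored SLOG to an existing pool (pool "tank" and the
# NVMe device paths below are hypothetical examples).
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Confirm the log vdev is attached:
zpool status tank

# The SLOG only matters for synchronous writes. Check that sync
# hasn't been disabled on the dataset/zvol backing your VMs:
zfs get sync tank/vm-storage
```

A mirrored SLOG is optional but protects in-flight sync writes if one log device dies at the same moment as a power loss.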


Some insights into SLOG/ZIL with ZFS on FreeNAS | TrueNAS Community
SLOG benchmarking and finding the best SLOG | TrueNAS Community

I do not suggest the use of metadata vdevs. Maxing out the RAM before adding L2ARC is the suggested approach; do not consider L2ARC at all until you have at least 64 GB of RAM.
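Before spending money on an L2ARC device, check whether your ARC is actually missing. A rough sketch (pool name and device path are hypothetical; `arc_summary` output wording varies between OpenZFS versions):

```shell
# Inspect ARC size and hit/miss statistics; arc_summary ships
# with OpenZFS on TrueNAS.
arc_summary

# Only if the hit ratio stays low with RAM already maxed out,
# consider a cache device (device path hypothetical):
zpool add tank cache /dev/nvme2n1
```

Unlike a SLOG, a failed L2ARC device is harmless, so a single non-mirrored drive is fine there.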

About the network, it really depends on how many drives you are going to use… basically it boils down to your use case: you will hardly be limited by a 10 Gbps network in a homelab, but you might be in an enterprise environment. Fiber over Base-T helps reduce latency, thus increasing performance.
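Since you mentioned jumbo frames and a dedicated storage network: a quick sanity-check sketch, assuming a FreeBSD-based TrueNAS (interface name and IPs are hypothetical; on Linux/SCALE the ping flag is `-M do` instead of `-D`):

```shell
# Set MTU 9000 on the storage NIC. Every host and switch port on
# the storage VLAN must agree on the MTU, or you'll see silent
# fragmentation or drops.
ifconfig ix0 mtu 9000

# Verify the path really passes jumbo frames: 8972 bytes of payload
# + 28 bytes of ICMP/IP headers = 9000, with don't-fragment set.
ping -D -s 8972 192.168.10.2

# Benchmark the raw network before blaming ZFS:
#   on the TrueNAS box:  iperf3 -s
#   on the XCP-ng host:  iperf3 -c 192.168.10.2 -P 4
```

If iperf3 already can't saturate the link, tuning ZFS won't help; fix the network first.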


10 Gig Networking Primer | TrueNAS Community
Resource - High Speed Networking Tuning to maximize your 10G, 25G, 40G networks | TrueNAS Community