Hi all, I’m just starting to dip my toe into TrueNAS. I have a home server that was configured for FreeNAS way back, but I ended up just installing Unraid, and now I’m circling back.
I’m trying to figure out what the best option is for RAID configuration and performance. I have a 2.5 gigabit network and would ideally like to fully saturate it. I think my SATA data drives are probably the limiting factor when it comes to speed. I know I can use a cache drive for write speed, but for read speed, is there any configuration other than striped that will boost read performance?
You’ve got it backwards: the read cache is called L2ARC. Then there’s the SLOG, a drive you can add to speed up sync writes, which won’t get used if all you’re doing is SMB, since SMB writes are asynchronous by default.
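If you want to check whether a SLOG would even come into play for your share, look at the dataset’s sync property (the pool/dataset name below is just a placeholder):

```
# "standard" honours whatever the client asks for; SMB normally asks async,
# so a SLOG sits idle. "always" would force sync writes and make a SLOG matter.
zfs get sync tank/media

# Not recommending this, just showing the knob:
# zfs set sync=always tank/media
```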
My understanding is that mirrored vdevs provide the best all-round performance. My pool with 2x mirrored vdevs of spinners easily saturates a 2.5GbE link doing large sequential reads.
Depending on the workload, a 64GB RAM system in a home lab will service many reads straight out of ARC, and those reads will be limited only by wire speed.
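You can see how well your ARC is already doing with the tools that ship with OpenZFS on both SCALE and CORE:

```
# One-shot report; the ARC section shows current size and hit rates
arc_summary

# Live view, one sample every 5 seconds
arcstat 5
```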
Nice! So it sounds like the ticket for me is mirrored vdevs. I didn’t realize that mirrored vdevs act like stripes when reading, so that’s awesome. I have six 4TB drives, so it sounds like I should be able to set up three mirrored vdevs, and that should give me more than enough speed to saturate the link. As you mentioned, I can then use ARC and my NVMe drive as L2ARC to help further.
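If I’m reading the docs right, that layout would look something like this from the shell (I’d actually build it in the UI, and the disk names here are just placeholders). Does that look right?

```
# Three 2-way mirrors in one pool: writes spread over 3 vdevs,
# reads can be served from all 6 disks
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf
```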
The document linked is a great resource, but it covers theoretical performance.
Part of tuning a zpool configuration depends on your specific data and workflow. But as a rule of thumb, assume that write performance scales with the number of top-level vdevs* and read performance scales with the number of drives involved.

Note that the choice of configuration also affects resilver time, and that has an effect on long-term data loss: with N+1 redundancy (a 2-way mirror or RAIDz1), the longer a resilver takes after a drive fails, the larger the chance of a second drive failing and data being lost (I know real-world 2-way mirrors are not that simple). Resilver performance is generally one drive’s worth of performance per vdev, so if you are resilvering a RAIDz vdev, it will run at the rate of a single drive. See ZFS Resilver Observations – PK1048 for some very old observations about resilver times.
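As a very rough illustration with six drives (ignoring record size, fragmentation, and caching): three 2-way mirrors gets you about three vdevs’ worth of write throughput and reads spread across all six drives, while a single 6-wide RAIDz2 gets you about one vdev’s worth of writes. And when a mirror resilvers, only the surviving partner needs to be read, so in practice mirror resilvers tend to finish faster than RAIDz resilvers, which involve the whole vdev.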
Note that adding too much L2ARC will actually slow things down. Every L2ARC entry needs a pointer in the ARC, so if your L2ARC is too large relative to your ARC, you will end up spending the ARC just storing L2ARC pointers. Better to take the $$$ you were going to spend on an NVMe for L2ARC and spend it on main system RAM instead.
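If you do add one anyway, you can see how much ARC the L2ARC headers are actually eating (these are the standard OpenZFS counters; SCALE is Linux, CORE is FreeBSD):

```
# TrueNAS SCALE (Linux): bytes of ARC used for L2ARC headers
grep l2_hdr_size /proc/spl/kstat/zfs/arcstats

# TrueNAS CORE (FreeBSD): same counter via sysctl
sysctl kstat.zfs.misc.arcstats.l2_hdr_size
```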
Also note that TrueNAS calls the SLOG device a LOG vdev and the L2ARC a CACHE vdev.
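So if you ever add them from the command line instead of the UI, the vdev types are spelled log and cache (device names below are placeholders):

```
# Add a SLOG (LOG vdev) and an L2ARC (CACHE vdev) to an existing pool
zpool add tank log /dev/nvme0n1
zpool add tank cache /dev/nvme1n1
```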
*Top-level vdevs: I say this rather than just “vdevs” because there are vdevs that do not directly store data, such as SLOG and L2ARC, and I am not referring to those, only to the vdevs that make up the data portion of the zpool.