Metadata VDEV impact is "not noticeable" / it "does not perform as expected"

FWIW, I’ve had very good experiences with both a sVDEV and a metadata-only, persistent L2ARC. The former requires planning, etc.; the latter gives somewhat slower metadata performance than a sVDEV, but the metadata still lives on the (redundant) pool, so losing the SSD cannot take the pool down. A metadata-only, persistent L2ARC is only advisable when RAM exceeds 64GB, however, and there are limits on how much SSD to dedicate to L2ARC relative to available RAM (L2ARC pointers cut into ARC RAM).
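For anyone wanting to try the L2ARC route, here’s a minimal sketch on OpenZFS 2.x; the pool name tank and the device path are placeholders for your own setup:

```
# Attach an SSD as an L2ARC (cache) device; 'tank' and the device
# path are placeholders for your pool and SSD.
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE

# Restrict the L2ARC to metadata only for this pool's datasets.
zfs set secondarycache=metadata tank

# Persistent L2ARC is the default on OpenZFS >= 2.0; it is governed by
# the l2arc_rebuild_enabled module parameter (1 = rebuild after reboot).
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
```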

Storing metadata on SSDs has a significant performance benefit for tasks that involve a lot of directory traversal, such as rsync backups. That said, I doubt you’d see a significant benefit unless the pool is at least 25% full: a very small amount of metadata will simply be read into the ARC once, and from then on the ARC does all the work, not the SSD.
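If you want to gauge whether your metadata already fits in RAM, the ARC statistics are the place to look; a quick check, assuming a Linux host with the standard OpenZFS tools:

```
# Summarize the ARC, including how much of it is currently metadata.
arc_summary

# Or filter the raw kstats for metadata counters (exact counter
# names vary between OpenZFS releases).
grep -i meta /proc/spl/kstat/zfs/arcstats
```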

Even fuller pools may carry very little metadata if you consolidate large collections of files with tools like Apple sparsebundles. My pool’s metadata clocks in somewhere around 0.03% of pool capacity, 1/10th the 0.3% rule of thumb. But that’s because most of the files are relatively ‘large’ images and videos. Your use case and file types may be quite different.
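You can measure this for your own pool rather than relying on the rule of thumb; zdb will walk the block tree and break allocations down by type (this can take a long time on large pools, and tank is again a placeholder):

```
# Traverse the pool and print per-block-type statistics; metadata shows
# up as types like "DMU dnode", "ZFS directory", and indirect blocks,
# which you can sum against total allocated space.
zdb -bb tank
```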

Lastly, the other reason to go sVDEV (which by default sets aside 75% of the special vdev capacity for small files and reserves 25% for metadata) is that the HDDs are relieved of all files below the small-file cutoff, which you can set on a per-dataset basis. That in turn lets you fine-tune which datasets get the full SSD benefit (e.g. databases, VMs) and which only have their small files accelerated (e.g. archives). See the sVDEV resource for more info.
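For illustration, the general shape of the commands, with the pool name tank, the dataset names, and the device paths as placeholders (a special vdev is pool-critical, so it must be at least mirrored):

```
# Add a mirrored special vdev for metadata and small file blocks.
zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

# Per-dataset small-file cutoff: blocks of 64K or less from this
# dataset are written to the special vdev (SSD) instead of the HDDs.
zfs set special_small_blocks=64K tank/vms

# Datasets left at the default of 0 store only their metadata on the
# special vdev; all file data stays on the HDDs.
zfs set special_small_blocks=0 tank/media
```

Note that if special_small_blocks is set equal to or larger than a dataset’s recordsize, all of that dataset’s data lands on the SSDs, which is how you deliberately pin whole datasets like VMs or databases to the special vdev.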
