Does that work? There’s an issue on the OpenZFS GitHub repo requesting basically this as a feature: L2ARC metadata refresh by zpool scrub · Issue #16416 · openzfs/zfs · GitHub
Now I’m curious to understand scrub a bit better myself.
You could crawl the entirety of the HDD three times with rsync or some similar application. If the new L2ARC is persistent and set to secondarycache=metadata, that should do it.
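A minimal sketch of that warm-up, assuming a hypothetical pool/dataset tank/media and OpenZFS 2.0+ (where persistent L2ARC is on by default via l2arc_rebuild_enabled):

```sh
# Keep only metadata in the L2ARC for this dataset:
zfs set secondarycache=metadata tank/media
# Confirm persistent L2ARC is enabled (1 = L2ARC contents survive reboot):
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
# Crawl the dataset so its metadata flows through the ARC and can spill to L2ARC;
# an rsync dry-run stats every file without copying anything:
rsync -a --dry-run /tank/media/ /tmp/warmup-target/
# or simply: find /tank/media -ls > /dev/null
```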
Only a portion of the data that’s eligible for eviction from the ARC is considered for L2ARC. With a ton of ARC, nothing hot may ever become eligible for eviction, and those subsequent reads may simply be served straight from the ARC.
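You can check whether the L2ARC is being fed at all; a quick sketch, with field names as in stock OpenZFS on Linux:

```sh
# Raw ARC/L2ARC counters:
grep -E '^l2_(hits|misses|size|asize|write_bytes)' /proc/spl/kstat/zfs/arcstats
# Or watch it live with arcstat(1), selecting L2ARC fields, every 5 seconds:
arcstat -f time,read,hit%,l2read,l2hit%,l2size 5
```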
There are a couple tunables that can help move more data to L2ARC (l2arc_noprefetch, l2arc_headroom, l2arc_write_boost, and friends), but with enough RAM a big ARC may not get much eviction pressure. L2ARC metadata caching partially broken · Issue #15201 · openzfs/zfs · GitHub
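For reference, those knobs are Linux module parameters, set as root; the values below are illustrative, not recommendations:

```sh
# Feed prefetched (sequential) buffers to L2ARC too, not just demand reads:
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch
# Scan a deeper slice of the ARC eviction tail per feed pass (default 2):
echo 8 > /sys/module/zfs/parameters/l2arc_headroom
# Write faster while the L2ARC is still cold (bytes; default 8 MiB):
echo 33554432 > /sys/module/zfs/parameters/l2arc_write_boost
# Persist across reboots, e.g. in /etc/modprobe.d/zfs.conf:
#   options zfs l2arc_noprefetch=0 l2arc_headroom=8 l2arc_write_boost=33554432
```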
For what it’s worth, the primarycache property is often overlooked, but can be quite useful in certain circumstances. For my torrent “seeding” dataset, I set primarycache=metadata to prevent it from pointlessly attempting to hold any data blocks in the ARC, which theoretically lessens the overall pressure on the ARC in favor of more important data.
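For anyone wanting to try it, it’s a one-liner per dataset (dataset name hypothetical):

```sh
zfs set primarycache=metadata tank/torrents
zfs get primarycache,secondarycache tank/torrents   # verify the cache policy
```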
This can cause significant read amplification: with primarycache=metadata, data blocks are never retained in the ARC, so every partial read of a record pulls the whole block from disk again. Are you confident it’s improving things without side effects? A nice explanation here: metadata caching does not work as expected - repeatedly getting lost in arc and l2arc with primary|secondarycache=metadata · Issue #12028 · openzfs/zfs · GitHub
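One way to sanity-check for that amplification (pool/dataset names hypothetical): compare what the clients are asking for with what the pool actually reads while seeding.

```sh
zfs get recordsize tank/torrents   # big records + small random reads = amplification
zpool iostat -v tank 5             # watch read bandwidth/ops against the disks
```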