I’m running 4 × 8 TB drives with 32 GB RAM and an 8-core CPU for a home theater system. What size 2.5" SSD drive(s) are recommended for cache? Thanks.
Your use case does not appear to need a cache at all. (And “cache” in ZFS probably does not mean what you think…)
I see, so you’re saying with this setup, used primarily for movies, an SSD isn’t recommended?
Mostly.
ZFS’ L2ARC / Cache is not really suitable for files that are only read once in a while. The L2ARC / Cache is not used for read-ahead; that’s already done in RAM (aka the ARC, or L1ARC).
Under unusual conditions an L2ARC / Cache can be useful for media libraries when doing massive automatic updates to the media files’ description files, like extracting summaries, release years, etc. from an Internet site and populating your media library with that data.
Those cause ZFS metadata updates, which means reading that ZFS metadata first before updating it. Having a copy of that ZFS metadata in the L2ARC / Cache could help in those rare cases. There is a dataset property, secondarycache=metadata, that can be used for that purpose (sketched below), plus some other tunables that I don’t have handy that could also help.
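If you want to try that, it’s just a ZFS property set on the pool’s root dataset (children inherit it). A minimal sketch, assuming a pool named tank; substitute your own pool name:

sudo zfs set secondarycache=metadata tank
sudo zfs get -r secondarycache tank    # confirm the child datasets inherit the setting

With that in place, any cache device you attach will only ever hold metadata, not file data.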
But the overall consensus is that a more complicated pool layout is not always better from a support standpoint for the average home user.
I’ll only do one mass migration of about 12.7TB (900+ movies) onto the Mini X+. Each movie is less than 50GB. So, it doesn’t seem I need any SSD drives. If I notice an issue, I can always add one later, correct?
Allow me to quibble a little bit. It really depends on the use case whether L2ARC is useful or not. On top of that, it’s also a matter of how the L2ARC is configured.
For example, when I had a metadata-only L2ARC here, it sped up rsync operations for backups by 12x. This is for a NAS that is mostly WORM, where metadata doesn’t change much, and hence where an L2ARC can do wonders for a machine with just 32 GB RAM.
Another reason to implement a metadata-only L2ARC on a machine with limited RAM and a large pool is speeding up directory browsing. This works best for WORM-like use cases, as changed metadata first leads to an L2ARC “miss”, after which the L2ARC may or may not pick up the change (it’s not a given).
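If you want to check whether a metadata-only cache is actually earning its keep, arcstat ships with OpenZFS and can show L2ARC hits alongside ARC hits. A rough sketch; the exact field names can vary a little between OpenZFS versions:

sudo arcstat -f time,read,hit%,l2read,l2hits,l2miss 5

Watching that while you walk a big directory tree (or just looking at the L2ARC section of arc_summary after a few days of uptime) tells you whether the device is being hit at all.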
I have yet to see a good use case for the regular file-side L2ARC. I tried using my L2ARC for just files once I implemented a sVDEV and found zero benefit. There may be a use case, I just don’t know what it is once you have a properly-configured sVDEV.
The L2ARC will only help on read operations; it does zero for writes. ZFS doesn’t do tiered caching like bcache, and all I can say is “great” given my experiences with bcache and how it borked one of my Pis out of existence.
Smart caching has been discussed before, but I reckon that the industrial users who pay iXsystems rely on more durable mechanisms.
Thanks for the explanation and experience. So, I understand you to say that for playing movies, my primary activity, where a movie/file may go years without access, L2ARC isn’t needed or recommended with 32 GB of RAM?
Look at your stats for ARC, etc. in the CLI. Conventional advice was to upgrade RAM to 64 GB and then consider adding L2ARC; some now say 32 GB is a good starting point to consider. I have played with a 120 GB SSD for L2ARC with only 18 GB of RAM, mostly because SCALE didn’t keep as much data in ARC as CORE did and I wanted to test it.
If you want to play with an L2ARC device, you can always add it to your pool and remove it later. Don’t oversize it, as the L2ARC has to keep header data in RAM, which takes away from regular ARC. You can play with the settings for the L2ARC and make it metadata-only, like Constantin talks about. If you don’t like it or don’t see big performance improvements, you can always remove it (commands sketched below). Adding and removing an L2ARC does not require pool destruction, like an sVDEV might.
sudo arc_summary
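For reference, the add/remove cycle from the shell is just a couple of commands. A sketch, assuming the pool is called tank and the SSD shows up as /dev/sdX (both are placeholders; on TrueNAS you’d normally do this through the GUI):

sudo zpool add tank cache /dev/sdX     # attach the SSD as an L2ARC device
sudo zpool status tank                 # it shows up under a "cache" section
sudo zpool remove tank /dev/sdX        # detach it again; the pool itself is untouched

Nothing about the pool layout changes, which is why the experiment is so low-risk.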
Thanks, so given the 32GB of RAM and 32TB of storage, what size SSD(s) do you recommend? There are two SSD bays.
Would you run any Apps or VMs in the future? The two SSD bays could be used for a mirror VDEV pool for Apps or VMs. That pool could also be used for anything that needs more IOPS than your current HDD pool can provide.
If playing with L2ARC, just keep the drive on the small side, 128–512 GB. You could go larger and under-provision, but there is no easy way to do that from the GUI. The drives provided by TrueNAS (iX Systems) were set up that way for L2ARC or SLOG: they looked like 64 GB drives but were actually 480 GB.
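One low-tech way to get a similar effect from the CLI is to carve a small partition out of the larger SSD and give only that partition to ZFS; the untouched remainder then serves as spare area for wear leveling. A rough sketch on SCALE, assuming the SSD is /dev/sdX and you only want about 64 GB used (device name is a placeholder, and the middleware won’t manage a device added this way):

sudo parted -s /dev/sdX mklabel gpt
sudo parted -s /dev/sdX mkpart l2arc 1MiB 64GiB
sudo zpool add tank cache /dev/sdX1    # add only the 64 GB partition as cache

It isn’t the same as the firmware-level over-provisioning on the iX-supplied drives, but practically it limits how much of the SSD the L2ARC can consume.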
The TrueNAS Tech Talk T3 covered L2ARC and SLOG a bit here. There was another show where I asked about provisioning a larger device to a smaller size, I think.
Pending the output of arc_summary, 0 GB (i.e. no L2ARC).
32 GB RAM is comfortable for your relatively small pool; I suspect that metadata is all in ARC already, provided you keep the NAS on.
Thanks for the video link. It’s interesting that it only has 38 views, including mine. If you’re going to use L2ARC, they recommend the fastest NVMe drives, like the WD Black series, not consumer-grade SSDs. But I’m not sure how to integrate a drive like that, as it isn’t the size of the 2.5" Mini X bay. I’m not an expert like others here, so I appreciate the help.
Thanks, I’m not sure I need L2ARC either.