L2ARC Size in 2025?

I have my motherboard maxed out at 128 GB of DDR5 ECC RAM. I have one main storage pool of twelve 24 TB hard drives in RAIDZ3. This pool serves everything from Plex to Adobe photo and video storage. I have seen the 5x rule, and others say 3x to 8x, but some say the rules have changed and some of those articles are years old. What is the general consensus in 2025? And should I mirror them or go RAIDZ? I have plenty of space in my tower.

1 Like

How critical is the data, and how good is your backup strategy? Also, how performant does the share have to be? 12x 24 TB is a BIG pool, but the Z3 helps mitigate that. 12 is a funky number; I would probably go with two 6-wide RAIDZ2 pools (that is still ~88 TB per pool). I would not go with mirrors, as it can actually be less redundant, and unless you are doing VMs or need really performant drives, I would not bother. I also would not bother with L2ARC unless you know you are overrunning your memory pretty consistently. TrueNAS does not use a lot of memory on its own, and you likely have 100 GB or so of cache in much faster memory already.
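For the sake of illustration, here is a rough sketch of what that two-pool RAIDZ2 layout could look like from the shell (pool names and disk identifiers below are placeholders, and on TrueNAS you would normally build this through the UI rather than the command line):

```
# Hypothetical split of twelve disks into two 6-wide RAIDZ2 pools.
# Replace tank1/tank2 and the by-id paths with your own pool and disk names.
zpool create tank1 raidz2 \
  /dev/disk/by-id/ata-DISK01 /dev/disk/by-id/ata-DISK02 /dev/disk/by-id/ata-DISK03 \
  /dev/disk/by-id/ata-DISK04 /dev/disk/by-id/ata-DISK05 /dev/disk/by-id/ata-DISK06

zpool create tank2 raidz2 \
  /dev/disk/by-id/ata-DISK07 /dev/disk/by-id/ata-DISK08 /dev/disk/by-id/ata-DISK09 \
  /dev/disk/by-id/ata-DISK10 /dev/disk/by-id/ata-DISK11 /dev/disk/by-id/ata-DISK12

# Or, as a single pool built from two 6-wide RAIDZ2 vdevs:
# zpool create tank raidz2 <disks 1-6> raidz2 <disks 7-12>
```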

1 Like

Some of it is critical, as far as photos and video, but I have a second offsite TrueNAS server which I'm just setting up, plus an external hard drive which contains another copy. I went with RAIDZ3 across the 12x 24 TB drives just for the ease of a single pool, but I have spares ready to go if needed. My system uses every GB of RAM in there... that's why I was wondering if I should use an NVMe to help with Plex, or even with the most edited or viewed photos in Lightroom. All video editing will be done on a different all-flash pool. What sizes and configurations are you all using for L2ARC?

You are asking about L2ARC, yet you haven't posted any plain ARC statistics. ARC and L2ARC are all about repeated reads on your system.

Start with the basics to understand ZFS and how it uses memory. If you just want to add an L2ARC and play with it, go ahead. You can remove it from your system without damaging your pools.
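If you do decide to experiment, adding and later removing a cache device is low-risk. A minimal sketch, assuming a pool called "tank" and a spare NVMe drive (both names are placeholders):

```
# Add a cache (L2ARC) device to the pool.
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE

# L2ARC holds no unique data, so the device can be removed again at any
# time without harming the pool.
zpool remove tank /dev/disk/by-id/nvme-EXAMPLE
```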

BASICS

iX Systems pool layout whitepaper

Special VDEV (sVDEV) Planning, Sizing, and Considerations

3 Likes

I was just trying to get a community consensus on what everyone is running. I have read a ton of info, but in the future I'll save all forum input for chastising beginners for not reading every article and actually understanding it.

A lot depends on the use case. For example, my pool is mostly WORM, where few things change except the Time Machine backups. The pool is about 1/4 full, and I can achieve 400 MB/s transfer speeds pretty consistently with a single HDD VDEV plus a sVDEV.

The sVDEV is not the only reason I have been able to increase write speeds from about 250 MB/s to 400 MB/s. Another reason was @winnielinnie, who helped me understand the benefits of larger recordsizes for datasets that mostly hold large files (images, videos, Linux ISOs, and the like).

Large recordsizes (e.g. 1M) not only allow faster writes of such files into the pool, they also significantly cut down the metadata needed compared to the default 128K recordsize. That in turn speeds things up. Recordsize should be tuned on a per-dataset basis; datasets that are meant to store databases and similar small files should use smaller recordsizes.
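As a rough illustration (the dataset names here are made up), the per-dataset tuning looks like this:

```
# Large media files: bigger records mean fewer blocks and much less metadata.
zfs set recordsize=1M tank/media

# Database-style workloads with small random I/O generally want smaller records.
zfs set recordsize=16K tank/db

# Note: recordsize only applies to newly written data; existing files keep
# the record size they were originally written with.
```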

A sVDEV can increase speeds in two further ways, as long as the underlying disks are fast and redundant. Usually, a sVDEV consists of multiple mirrored SSDs. It stores metadata and small files separately from the main pool disks. Both metadata and small-file writes benefit a lot from being on SSD, so the pool in aggregate speeds up considerably, especially during writes.
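In command form, adding a special vdev might look roughly like this (device paths and the small-block cutoff are examples, not recommendations; also note that, unlike an L2ARC, a special vdev cannot be removed from a RAIDZ pool, so it really does need to be mirrored):

```
# Add a mirrored pair of SSDs as a special vdev for metadata (and optionally small files).
zpool add tank special mirror /dev/disk/by-id/nvme-SSD1 /dev/disk/by-id/nvme-SSD2

# Optionally route small file blocks (here anything up to 64K) to the sVDEV as well;
# the default of 0 means metadata only.
zfs set special_small_blocks=64K tank/photos
```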

An L2ARC can be helpful in similar ways by storing metadata and/or frequently accessed files. Unlike a sVDEV, an L2ARC is expendable, but it will take some time to fill up with the data the ARC is evicting or with files that are accessed all the time, so its benefit will only show up after a few rsyncs or similar operations.
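You can watch it warm up over time; a couple of ways to peek at it (pool name is a placeholder, and the exact output labels vary a bit between OpenZFS versions):

```
# Shows how much of the cache device has been allocated so far.
zpool iostat -v tank

# The L2ARC section of arc_summary reports hit/miss counts and feed activity.
arc_summary | grep -A 25 "L2ARC"
```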

2 Likes

Awesome, thank you. Any size recommendation?

Hi Erik,

L2ARC is one of the most complex areas of ZFS and also one of the most debated. RAM (ARC) is significantly faster than disk, be it HDD or SSD, so ideally you want your reads coming from RAM where possible. I appreciate you've maxed out your RAM options on your mobo, but that alone doesn't automatically mean you would benefit from an L2ARC; in fact, it could make things a little worse. I always suggest people considering L2ARC first take a look at arc_summary, and feel free to post the output here. This will show you how effective your ARC is at the moment. Very roughly speaking, I'd say a hit ratio in the mid-to-high 90s means L2ARC is probably not worth it.
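For anyone following along, a quick way to eyeball that ratio (the exact wording of the output varies a little between arc_summary versions, so treat the grep pattern as a guess):

```
# Full ARC report:
arc_summary

# Or just pull out the hit-ratio lines:
arc_summary | grep -i "hit ratio"

# arcstat gives a live view, here sampled every 5 seconds:
arcstat 5
```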

4 Likes

:point_up:
That. Assuming your system has been up for some time, begin by posting the output of arc_summary (using formatted text: the </> button, or two lines of three backticks).
With 128 GB of RAM, you may well find that you do not need an L2ARC.

2 Likes

FWIW I’m using an OpenZFS storage server to host:

  • 2,200 image files at roughly 1.4 MB each = ~4 GB
  • 1.2 TB of Stable Diffusion checkpoints (up to 7 GB per file)
  • 422k small program files totaling 55 GB
  • A 1 TB VMFS volume for a VMware lab. A bit of a wildcard, but one small VM (~40 GB) gets booted up and shut down a lot.

It has 16 GB of RAM, 4x slow HDDs, and a 600 GB L2ARC on SSD. After running for about two weeks, the L2ARC hit rate was 70% and the ARC hit rate was 98%. I had to reboot just now, so the stats were wiped.

I reckon the question of L2ARC usefulness boils down to how you access your dataset. If you read a file once and then not again for a long while, you might not see much of a benefit. If there's a core group of files that gets accessed frequently, and that group is too large to fit in ARC (i.e. RAM), then an L2ARC can be very helpful.
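Related to that: if you already know which datasets hold the hot-but-bigger-than-RAM working set, the secondarycache property lets you control what each dataset is allowed to push into L2ARC (the dataset names below are just examples, not anything standard):

```
# Cache both data and metadata from this dataset in L2ARC (this is the default).
zfs set secondarycache=all tank/sd-checkpoints

# Only cache metadata for stuff that is read once and rarely revisited.
zfs set secondarycache=metadata tank/backups

# Keep a dataset out of L2ARC entirely.
zfs set secondarycache=none tank/scratch
```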

In my case the toolsets are kinda shitty and many don’t maintain their own database of image metadata. At startup these tools hit all 2200 images to read their PNG metadata and having most/all of these images populated into L2ARC speeds this up dramatically. It’s a worst-case access pattern for spinning HDDs.

600 GB works well for me. The images tend to stay cached in L2ARC as they're accessed quite often. Perhaps 5 GB of the program files are frequently accessed, and opening those programs produces a nice string of L2ARC hits. The Stable Diffusion checkpoints can be read 4-5x faster from L2ARC than from disk; half of them I rarely use, but the other half are mostly in L2ARC by now, and I'm enjoying a nice speedup there.

The one Windows virtual machine that I launch regularly has also found its way into L2ARC and it boots up 2-3x faster.

That thread linked above has more discussion you might find useful.

2 Likes