L2ARC Size in 2025?

I have my motherboard maxed out at 128 GB of DDR5 ECC RAM. I have one main storage pool of twelve 24 TB hard drives in RAIDZ3. This pool serves everything from Plex to Adobe photo and video storage. I have seen the 5x rule, and others say 3x to 8x, but some say the rules have changed and some of those articles are years old. What is the general consensus in 2025? And should I mirror them or use RAIDZ? I have plenty of space in my tower.

1 Like

How critical is the data, and how good is your backup strategy? Also, how performant does the share have to be? 12x 24 TB is a BIG pool, but the Z3 helps mitigate that. 12 is a funky number; I would probably go with two RAIDZ2 pools of six drives each (that is still ~88 TB per pool), roughly as sketched below. I would not go mirrors, as they can actually be less redundant, and unless you are doing VMs or need really performant drives, I would not bother. I also would not bother with L2ARC unless you know you are overrunning your memory pretty consistently. TrueNAS does not use a lot of memory on its own, and you likely have 100 GB or so of cache in much faster memory.
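Purely as an illustration of that layout (pool and device names below are placeholders, not anything from this thread), the two-pool variant would look something like:

zpool create tank1 raidz2 da0 da1 da2 da3 da4 da5      # first 6-wide RAIDZ2 pool
zpool create tank2 raidz2 da6 da7 da8 da9 da10 da11    # second 6-wide RAIDZ2 pool

A single pool with two 6-wide RAIDZ2 vdevs is the other common way to split twelve drives; which suits you depends on whether you prefer one namespace or two independent pools.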

1 Like

Some of it is critical, as far as photos and video, but I have a second offsite TrueNAS server which I'm just setting up, and an external hard drive which contains another copy. I went with RAIDZ3 across the 12x 24 TB drives just for the ease of a single pool, but I have spares ready to go if needed. My system uses every GB of RAM in there… that's why I was wondering if I should use an NVMe drive to help with Plex, or even with the most edited or viewed photos in Lightroom. All video editing will be done on a different all-flash pool. What sizes and configurations is everyone using for L2ARC?

You are asking about L2ARC, yet you haven't posted any plain ARC statistics. ARC and L2ARC are all about repeated reads on your system.

Start with the basics to understand ZFS and how it uses memory. If you just want to add an L2ARC and play with it, go ahead; you can remove it from your system without damaging your pools.
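As a concrete sketch (the pool and device names here are hypothetical, not taken from your system), attaching and detaching a cache device is a single command each way:

zpool add tank cache nvme0n1      # add an NVMe device as L2ARC
zpool remove tank nvme0n1         # take it back out; the pool's data is untouched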

BASICS

iX Systems pool layout whitepaper

Special VDEV (sVDEV) Planning, Sizing, and Considerations

4 Likes

I was just trying to get a community consensus on what everyone is running. I have read a ton of info, but in the future I'll remember that forum input is reserved for chastising beginners for not reading every article and actually understanding it.

1 Like

A lot depends on the use case. For example, my pool is mostly WORM (write once, read many), where few things change except the Time Machine backups. The pool is about 1/4 full, and I can achieve 400 MB/s transfer speeds pretty consistently with a single HDD VDEV and a sVDEV.

The sVDEV is not the only reason I have been able to increase write speeds from about 250 MB/s to 400 MB/s. Another reason was @winnielinnie, who helped me understand the benefits of larger recordsizes for datasets that mostly hold large files (images, videos, and Linux ISOs, for example).

Large recordsizes (e.g. 1M) not only allow faster writes of such files into the pool, they also significantly cut down on the metadata needed versus the default 128K recordsize. That in turn speeds things up. Recordsize should be tuned on a per-dataset basis: datasets meant to store databases and similarly small files should use smaller recordsizes.
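A minimal sketch of that per-dataset tuning (the dataset names are made up for illustration):

zfs set recordsize=1M tank/media          # large, mostly-sequential files: photos, video, ISOs
zfs set recordsize=16K tank/databases     # small random I/O, e.g. a database working set
zfs get recordsize tank/media             # check what a dataset is currently using

Note that changing recordsize only affects newly written blocks; existing files keep the recordsize they were written with.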

A sVDEV can increase speeds in two further ways, as long as the underlying disks are fast and redundant. Usually, a sVDEV consists of multiple mirrored SSDs. It stores metadata and small files separately from the bulk data. Both metadata and small-file writes benefit a lot from SSD use, so the pool in aggregate speeds up a lot, especially during writes.
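A rough sketch of what adding one looks like (device names are placeholders; note that a special vdev, unlike an L2ARC, generally cannot be removed from a pool with RAIDZ data vdevs, so it must be redundant):

zpool add tank special mirror ssd0 ssd1           # mirrored SSDs for metadata (and small blocks)
zfs set special_small_blocks=64K tank/media       # optionally send blocks of 64K or less to the sVDEV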

An L2ARC can be helpful in similar ways by storing metadata and/or frequently accessed files. Unlike an sVDEV, an L2ARC is expendable, but it will take some time to fill up with data that the ARC is evicting or files that are accessed all the time. So its benefit will only show up after a few rsyncs or similar operations.
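If you do try one, you can watch it fill and check whether it is actually being hit (standard OpenZFS tools; the pool name is a placeholder):

zpool iostat -v tank 5    # the cache device's ALLOC column shows how full the L2ARC is
arc_summary               # the L2ARC section reports its hit/miss ratio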

2 Likes

Awesome, thank you. Any size recommendation?

Hi Erik,

L2ARC is one of the most complex areas of ZFS and also one of the most debated. RAM (ARC) is significantly faster than disk, be it HDD or SSD, so it makes sense that ideally you want your reads coming from RAM where possible. I appreciate you've maxed out your RAM options on your mobo, but that alone doesn't automatically mean you would benefit from an L2ARC; in fact, it could make things a little worse, since the L2ARC's own headers take up space in the ARC. I always suggest people considering L2ARC first take a look at arc_summary, and feel free to post the output here. This will show you how effective your ARC is at the moment. Very roughly speaking, I'd say a hit ratio in the mid-to-high 90s means L2ARC is probably not worth it.
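For example, the two stock tools I'd start with (no special flags needed):

arc_summary     # full report; look for the cache hit ratio in the hits/misses section
arcstat 5       # live ARC hit/miss counters, sampled every 5 seconds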

4 Likes

:point_up:
That. Assuming that your system has been up for some time, begin by posting the output of arc_summary (as formatted text: the </> button, or a line of three backquotes before and after).
With 128 GB RAM, you may well find out that you do not need a L2ARC.

2 Likes

FWIW I’m using an OpenZFS storage server to host:

  • 2200 image files at roughly 1.4MB each = ~4GB
  • 1.2TB of Stable Diffusion checkpoints (up to 7GB per file)
  • 422k small program files totaling 55GB.
  • A 1TB VMFS volume for a VMware lab. A bit of a wildcard but one small VM (~40GB) gets booted-up and shutdown a lot.

It has 16GB of RAM, 4x slow HDDs, and a 600GB L2ARC on SSD. After running for about two weeks, the L2ARC hit rate was 70% and the ARC hit rate was 98%. I had to reboot just now, so the stats were wiped.

I reckon the question of L2ARC usefulness boils down to how you access your dataset. If you read a file once and then not again for a long while, you might not see much of a benefit. If there's a core group of files that is accessed frequently, and that group is too large to fit in ARC (i.e. RAM), then an L2ARC can be very helpful.
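One related knob, in case it's useful (a standard OpenZFS dataset property; the dataset names are hypothetical): you can control per dataset what is even eligible for the L2ARC.

zfs set secondarycache=all tank/projects        # cache data and metadata (the default)
zfs set secondarycache=metadata tank/archive    # only metadata for stuff you rarely re-read
zfs set secondarycache=none tank/scratch        # keep this dataset out of the L2ARC entirely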

In my case the toolsets are kinda shitty and many don't maintain their own database of image metadata. At startup, these tools hit all 2200 images to read their PNG metadata, and having most or all of those images in L2ARC speeds this up dramatically. It's a worst-case access pattern for spinning HDDs.

600GB works well for me. The images tend to remain cached in L2ARC as they're accessed quite often. Perhaps 5GB of the program files are frequently accessed, and opening those programs makes for a nice string of L2ARC hits. The Stable Diffusion checkpoints can be read 4-5x faster from L2ARC than from disk; I rarely use half of them, but the other half are mostly in L2ARC by now and I'm enjoying a nice speedup there.

The one Windows virtual machine that I launch regularly has also found its way into L2ARC and it boots up 2-3x faster.

The thread linked above has more discussion you might find useful.

2 Likes

I suspect that no one will condone me using three old USB memory sticks as L2ARC for the HDD in a decade-old HP ZBook with 32 GB of memory.

ZFS real-time cache activity monitor

Cache efficiency percentage:
           10s    60s    tot
   ARC:  98.54  98.19  98.19
 L2ARC:  96.20  95.74  95.74
ZFETCH:   4.99   7.36   7.36
^C
root@mowa219-gjp4-zbook-freebsd:~ # uname -mvKU
FreeBSD 15.0-CURRENT main-n277570-3267e0815e06 GENERIC-NODEBUG amd64 1500044 1500044
root@mowa219-gjp4-zbook-freebsd:~ # history 3
  2001  15:35   zfs-mon
  2002  15:36   uname -mvKU
  2003  15:36   history 3
root@mowa219-gjp4-zbook-freebsd:~ # 
Lazy details
grahamperrin@mowa219-gjp4-zbook-freebsd ~> geom disk list /dev/ada1 /dev/da0 /dev/da1 /dev/da2
Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r3w3e6
   descr: HGST HTS721010A9E630
   lunid: 5000cca8c8f669d2
   ident: JR1000D33VPSBE
   rotationrate: 7200
   fwsectors: 63
   fwheads: 16

Geom name: da0
Providers:
1. Name: da0
   Mediasize: 15733161984 (15G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: Kingston DataTraveler 3.0
   ident: 08606E6D401FBF70A7038BEB
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da1
Providers:
1. Name: da1
   Mediasize: 30943995904 (29G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: Kingston DataTraveler 3.0
   ident: E0D55EA573F0F4205983466A
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da2
Providers:
1. Name: da2
   Mediasize: 15502147584 (14G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: Kingston DataTraveler 3.0
   lunname: PHISON  USB3
   lunid: 2000acde48234567
   ident: 08606E6B64C4E37037228BC9
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

grahamperrin@mowa219-gjp4-zbook-freebsd ~> zpool list -v august
NAME                  SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
august                912G   534G   378G        -         -    65%    58%  1.00x    ONLINE  -
  ada1p3.eli          915G   534G   378G        -         -    65%  58.6%      -    ONLINE
cache                    -      -      -        -         -      -      -      -         -
  gpt/cache1-august  14.4G  8.44G  5.99G        -         -     0%  58.5%      -    ONLINE
  gpt/cache2-august  14.7G  8.38G  6.27G        -         -     0%  57.2%      -    ONLINE
  gpt/cache3-august  28.8G  18.9G  9.96G        -         -     0%  65.4%      -    ONLINE
grahamperrin@mowa219-gjp4-zbook-freebsd ~> lsblk
DEVICE         MAJ:MIN SIZE TYPE                                    LABEL MOUNT
ada0             0:124 112G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  ada0p1         0:126 112G freebsd-zfs                           gpt/112 <ZFS>
  <FREE>         -:-   456K -                                           - -
ada1             0:130 932G GPT                                         - -
  ada1p1         0:137 260M efi                                   gpt/efi /boot/efi
  <FREE>         -:-   1.0M -                                           - -
  ada1p2         0:139  16G freebsd-swap                 gpt/freebsd-swap SWAP
  ada1p2.eli     1:24   16G freebsd-swap                                - SWAP
  ada1p3         0:141 915G freebsd-zfs                   gpt/freebsd-zfs <ZFS>
  ada1p3.eli     0:147 915G -                                           - -
  <FREE>         -:-   708K -                                           - -
da0              0:170  15G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  da0p1          0:171  15G freebsd-zfs                 gpt/cache2-august <ZFS>
  <FREE>         -:-   304K -                                           - -
da1              1:35   29G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  da1p1          1:36   29G freebsd-zfs                 gpt/cache3-august <ZFS>
  <FREE>         -:-   490K -                                           - -
da2              0:231  14G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  da2p1          0:232  14G freebsd-zfs                 gpt/cache1-august <ZFS>
  <FREE>         -:-   1.0M -                                           - -
da3              1:43  932G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  da3p1          1:44  932G freebsd-zfs                     gpt/Transcend <ZFS>
  <FREE>         -:-   712K -                                           - -
grahamperrin@mowa219-gjp4-zbook-freebsd ~> 

The three cache devices are usually much hotter (less than 1 GB free). They are relatively cool today because I did something extraordinary with the HDD.

I can’t guess what might be shown by less lazy details :slight_smile: but I know that there’s a terrific boost when all three are online and hot.

You really want to have your post turned into a meme, don’t you?

3 Likes

Only if I can provide a photograph of my cat sleeping on the keyboard when things get toasty during a scrub. Or treading on two of the three USB sticks in the thirteen-year-old dock (imagine a Tonka toy without wheels, in HP grey).

1 Like