ZFS ARC using more than zfs_arc_max

Under what conditions is the ARC’s total memory usage allowed to exceed the value configured in the zfs_arc_max tunable?

My zfs_arc_max is set to 10737418240 under System > Advanced Settings, which is 10 GiB (not 10 GB). Yes, the system has been rebooted since configuring it. The value can also be confirmed with:

root@freenas[~]# cat /sys/module/zfs/parameters/zfs_arc_max
10737418240
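
For reference, the live ARC size can be compared against that ceiling straight from the kernel stats on SCALE (size is the current ARC footprint, c_max should mirror zfs_arc_max):

root@freenas[~]# grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats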

This occurred recently during a period of high memory pressure caused by a Docker container using more memory than anticipated. I don’t have any real diagnostics or data to prove it happened; all I managed to capture at the time was this screenshot. Regardless, I still don’t understand why ZFS would use more than zfs_arc_max.

This is very similar to this recent post, which also describes ZFS using more ARC than specified in zfs_arc_max, but the difference is that the poster there said the limit never worked in the first place. And I’m not using any H/W transcoding or anything special related to the iGPU, and there is no dedicated GPU.

Here’s the current docker stats:

CONTAINER ID   NAME                 CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
3a9cfa7d768b   transmission         5.59%     44.17MiB / 512MiB     8.63%     546GB / 1.46TB    1.39TB / 1.05MB   13
4b90b313203d   sonarr               0.05%     333.5MiB / 512MiB     65.14%    2.79GB / 756MB    33.1GB / 0B       27
9a5430c59005   radarr               0.05%     209MiB / 512MiB       40.82%    4.18GB / 656MB    16.8GB / 12.3kB   27
6de345503e64   jellyfin             0.13%     2.301GiB / 4GiB       57.53%    0B / 0B           94.6GB / 745kB    26
6557c5d0eb07   ffmpeg               0.00%     10.02MiB / 18.55GiB   0.05%     2.21MB / 90.2kB   189GB / 80MB      2
4d80bca1d1d4   ix-dockge-dockge-1   0.15%     172.7MiB / 300MiB     57.55%    4.63MB / 16.4MB   1.21GB / 0B       41

The max setting has worked, and still works, on my system; this issue has only occurred twice recently.

Thanks for your time.

  • TrueNAS Scale 25.04.2.1
  • This is a VM on ESXi 7
  • Not over-allocating memory on the hypervisor
  • 19 GiB total memory, all memory reserved
  • 2 pools, one is ~26 TiB the other is ~4 TiB
  • Using the OS’s built-in WireGuard VPN

I don’t have an answer for you… but maybe a clue.

  • Could the GUI’s “ZFS Cache” number be the ARC combined with outstanding write transactions?
  • Or could it include other ZFS internal memory in addition to the ARC?

These would be questions for the iX developers.
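
One way to poke at that from the shell: arc_summary (ships with OpenZFS on SCALE) can print the ARC section and, if the section flags are available in this build, the SPL/kmem section, which covers ZFS memory outside the ARC proper:

arc_summary -s arc
arc_summary -s spl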

I swear it goes back to doing whatever it wants whenever you start or stop a VM, and you have to manually set the max value back.

Could this be the sort of question we take to the OpenZFS repository as an issue? Maybe the contributors there have some answers.

I am using this solution, and it is still working:

68719476736 bytes = 64 GiB
You can convert your desired limit into bytes and substitute that value.

command in text:

echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
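
For example, to get the byte value for a different limit (here the OP’s 10 GiB), shell arithmetic does the conversion:

echo $((10 * 1024 * 1024 * 1024))    # prints 10737418240, i.e. 10 GiB in bytes

Keep in mind a value echoed into /sys/module only lasts until the next reboot; the System > Advanced setting mentioned above is what should persist it across reboots.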

I just noticed that you have zfs_arc_max and zfs_arc_min set to the same value. I did not think that was valid.

Suggest you set zfs_arc_min to something less than zfs_arc_max and see if the behavior changes.
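
For reference, both values can be checked at once from the same module-parameter path:

grep . /sys/module/zfs/parameters/zfs_arc_min /sys/module/zfs/parameters/zfs_arc_max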
