ZFS cache preventing app from getting allocated memory it needs

I’m running SCALE v25.04.2.5 with 94 GiB RAM.

I executed a query in Open WebUI and received this error message: 500: model requires more system memory (4.9 GiB) than is available (4.6 GiB)

I logged into the TrueNAS web UI and saw memory usage:

  • Free: 4.6 GiB
  • ZFS Cache: 70.0 GiB
  • Services: 19.4 GiB

Open WebUI is configured to use up to 40 GiB, but was only using 930 MiB.

Questions

  1. What’s going on here? I’ve allowed Open WebUI to use plenty more RAM than it needed, and the system clearly had it with 70.0 GiB in cache.
  2. My understanding was that TrueNAS would free up RAM from the cache for apps to use if requested. Is this not the case?
  3. What can I do to ensure my apps get the memory they need from cache if there isn’t enough free RAM?

What’s going on is that you’re witnessing how Linux + ZFS is superior to FreeBSD’s terrible and outdated memory management. :flexed_biceps:

“Try adding a tunable bro. Restrict how much memory the ARC is allowed.”

This is the way. :tada:

Change ARC size:
echo <size-in-bytes> > /sys/module/zfs/parameters/zfs_arc_max

You’ll basically have to re-run that command after every boot and each time you start/stop a VM. (Edit: it might be easier to simply run it from a cron job every hour or so.)
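For the cron workaround, a root crontab entry might look like the sketch below. The 32 GiB cap (34359738368 bytes) is just a placeholder value; size it to your own system.

```shell
# m h dom mon dow  command
# Re-apply a 32 GiB ARC cap every hour (value is an example, adjust to taste).
0 * * * *  echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
```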

@winnielinnie’s memeing is honestly pretty true - Linux doesn’t seem to free up ARC memory as quickly as FreeBSD did. Am I smart enough to prove or fix this? No, so I just work around it. Should it behave this way? Likely not.

The same thing happens in Solaris 10 & 11. If the application checks for currently free memory, it only sees what is free right now. If it instead simply attempted to allocate the memory and checked the return code for success, it would succeed: ZFS would release enough ARC entries, in the proper order (like any good cache), not only to satisfy the application’s requested allocation but also to maintain the minimum free memory.
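For what it’s worth, on Linux you can see a related distinction between the kernel’s two “free memory” figures (a rough illustration only; how a given app decides it has “enough” memory is up to that app):

```shell
# MemFree counts only truly idle pages; MemAvailable adds what the kernel
# estimates it can reclaim (mostly page cache). Note that on Linux the ZFS
# ARC is kernel (slab) memory and is not counted in either figure, which is
# one reason an app that pre-checks "available" memory can refuse to start
# even though the ARC would shrink under real allocation pressure.
awk '/^MemFree:|^MemAvailable:/ {printf "%s %.1f GiB\n", $1, $2/1048576}' /proc/meminfo
```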

Now I do agree that FreeBSD’s memory management of ZFS ARC memory is superior to Linux, unfortunately people think Linux is the way to go…

1 Like

Wouldn’t it be useful to have a setting in the Web UI that allows one to set such a limit, rather than having to do such a hack? I mean, even if in the background it does do that hack for the time being, it would be much cleaner from a user’s point of view, and the implementation could change as time goes by, without affecting the UI :thinking:

You’d think the underlying operating system would be able to handle this gracefully in the background, without requiring arbitrary hacks by the user.

3 Likes

It would, but this is expected to “just work”. When it doesn’t, sadly we have to go into the weeds.

If you want, you could set up a cron job in the UI to tell the ARC not to go over that limit every [insert your preferred time interval here].

Is there a formula to get reasonable figures for what to limit the cache to?

Left by itself, the ZFS cache slowly creeps up to take all available memory; at this point it probably contains nearly the entire pool (newly set up system, and the ZFS cache is at 73.9 GB).

In particular, is there a low water mark one shouldn’t go below without risking stability or incurring massive performance penalties?

Half of physical memory is a watermark that was at one time even hard-coded into Linux’s ZFS implementation. That limit was removed when it looked like dynamic adjustment to memory pressure would finally work on Linux.

You can always estimate the memory requirements of your VMs and apps and apply some simple math. Anyway, as a quick shot from the hip, half of physical memory still seems like a good starting point to me.
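As a rough sketch of that “half of physical memory” starting point (assuming Linux and the usual in-kernel OpenZFS module parameter path):

```shell
# MemTotal in /proc/meminfo is reported in kB, so
# bytes / 2 = kB * 1024 / 2 = kB * 512.
half_ram_bytes=$(awk '/^MemTotal:/ {printf "%d", $2 * 512}' /proc/meminfo)
echo "Suggested zfs_arc_max: ${half_ram_bytes} bytes"

# To apply it (as root):
#   echo "${half_ram_bytes}" > /sys/module/zfs/parameters/zfs_arc_max
```

From there, subtract your VM and app reservations from the other half and adjust the cap if the remainder looks too thin.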

HTH,
Patrick

2 Likes

That was my thinking - I left ARC at 50% max and adjusted my VMs and Apps accordingly to leave a few gigs free even if the ARC and the VMs/Apps all hit 100% of their targets.