How much memory for ZFS cache?

So, I’ve got the latest TrueNAS SCALE (24.04) set up and running with a 12 TB main pool, a 22 TB backend replication pool, and 256 GB of memory.

I’m just trying to launch a VM, but the server has only 12 GB of memory available because the ZFS cache is using 218 GB!

Is this expected behaviour? Is there any way to limit it? It seems to start off OK, but over the week the usage has climbed. The server hasn’t been doing much this week bar serving some media to the TV downstairs, and I’ve uploaded about 25 GB of data at most.

The main pool is about 90% full, though. I’m trying to clear it down, but it’s the snapshots that are taking up the space…

Any ideas on why this is happening and what can be done to bring the usage down?

Ta,

Stu.

Yes. Unless it prevents the VM from starting.

Can’t get the VMs to run, as the system just complains about memory availability…

I thought the ARC was capped at 50% of system RAM? You should be able to check the setting with a quick $ cat /sys/module/zfs/parameters/zfs_arc_max

And overwrite that as you like, e.g. echo "$((128 * 1024 * 1024 * 1024))" …
or via sysctl.

For a lasting value, add the sysctl entry vfs.zfs.arc_max under System > Tunables.
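For reference, a rough sketch of the non-persistent route (assuming root on SCALE; on Linux the knob is the zfs_arc_max module parameter quoted above rather than the vfs.zfs.arc_max sysctl, and the value is in bytes):

    # Show the current ARC cap in bytes (0 means OpenZFS picks its own default)
    cat /sys/module/zfs/parameters/zfs_arc_max

    # Cap ARC at 128 GiB immediately; this write does not survive a reboot
    echo "$((128 * 1024 * 1024 * 1024))" > /sys/module/zfs/parameters/zfs_arc_max

To make it stick, you would need to re-apply that write at boot or use a tunable mechanism like the one above, since the parameter resets when the module reloads.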

No, it isn’t. That was the case in SCALE Cobia and before, but it was one of the headline changes in Dragonfish to allow ARC to consume (almost) all RAM, which it should be able to do; why would you want to pay for dozens of gigabytes of RAM, only to leave them unused?

But, of course, that depends on ARC being evicted when the RAM is needed for other purposes. And if that isn’t happening, that sounds like a bug.
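If you want to check whether eviction is actually happening, watching the ARC counters while the VM tries to start is a quick sanity check. A minimal sketch, assuming the standard OpenZFS stats file and userland tools are present:

    # Current ARC size, target (c) and ceiling (c_max), in bytes
    awk '$1 == "size" || $1 == "c" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

    # Or the human-readable summary from the OpenZFS tools
    arc_summary | head -n 40

If "size" doesn’t fall when the VM asks for memory, that would back up the eviction-isn’t-happening theory.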

Did you try with the “memory over commitment” check box ticked?

Some applications (and this applies to Solaris 11 ZFS too) check for free memory before launching. If there is not enough free, they abort, claiming there is not enough memory.

What they should be doing is ASKING for the memory they need. Then ZFS would free up ARC as needed and the application would get what it requested.

Sounds like a bug in the VM software.

This is one reason why I did not think the new change to ZFS ARC memory for Linux was going to work perfectly. Most FreeBSD and Solaris applications are well behaved, and understand how to request memory. But others just don’t do the right thing.
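You can see the mismatch on Linux fairly easily, since ARC is normally reported as used memory rather than page cache, so a naive free-memory check under-reports what is actually reclaimable. A sketch (the numbers will obviously vary, and it assumes the usual OpenZFS stats path):

    # What a naive check sees: ARC counted as "used", so "available" looks small
    free -h

    # What is reclaimable on top of that: the current ARC size, in bytes
    awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats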

I wonder if having a lot of swap enabled would help in that scenario?