That’s really weird! I presume it’s a backup target system? Maybe you don’t have enough ARC to cause the issue. What does the ARC reporting look like?
I’m putting in Prometheus to capture data so I can keep what I want. Even with iX supposedly expanding the retention, I want more useful info, like VM resources, Kubernetes app resources, etc.
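If it helps anyone, here’s a rough idea of the kind of exporter I mean. This is a minimal sketch, not what I’m actually running: it assumes the `prometheus_client` package is installed, that ARC stats live in `/proc/spl/kstat/zfs/arcstats`, and the metric names and port 9101 are made up for illustration.

```python
# Minimal sketch of a custom Prometheus exporter for ZFS ARC stats.
# Assumes prometheus_client is installed and arcstats is available at
# /proc/spl/kstat/zfs/arcstats (standard on Linux with ZFS).
import time
from prometheus_client import Gauge, start_http_server

ARC_SIZE = Gauge("zfs_arc_size_bytes", "Current ARC size")
ARC_TARGET = Gauge("zfs_arc_target_bytes", "ARC target size (c)")

def read_arcstats():
    """Parse arcstats into a dict of name -> integer value."""
    stats = {}
    with open("/proc/spl/kstat/zfs/arcstats") as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            name, _type, value = line.split()
            stats[name] = int(value)
    return stats

if __name__ == "__main__":
    start_http_server(9101)  # hypothetical port; pick any free one
    while True:
        s = read_arcstats()
        ARC_SIZE.set(s["size"])
        ARC_TARGET.set(s["c"])
        time.sleep(15)
```

Point a Prometheus scrape job at that port and you control the retention and granularity yourself.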
So, I went into the UI of this backup system and started interrogating snapshots: sorting them, deleting a bunch of snapshots of the .system dataset that I’d taken accidentally, and so on.
This triggered a bit of swap. Looking at the 1-day view, swap usage peaks at 543 MB, then drops to 130 MB.
What I find interesting is what memory usage looked like when this happened. The ARC didn’t really recede much, forcing “used” memory to get paged out as “free” dropped. It looks to me like the system prefers to swap out rather than shrink the cache, or the swapping occurs faster than the cache drops.
Don’t get me wrong: as it is, I don’t really care, and the system was working fine. But if the issue is that the cache is forcing swap to be used because memory is full and the cache is not making way…
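One way to pin this down would be to sample the ARC size, free memory, and swap usage side by side while it happens. A rough sketch, assuming a Linux box with ZFS (arcstats under `/proc/spl/kstat/zfs/arcstats`); the 5-second interval is arbitrary:

```python
# Sketch: sample ARC size, free memory, and swap usage every few seconds
# to see whether the ARC shrinks or swap grows under memory pressure.
import time

def meminfo():
    """Parse /proc/meminfo into a dict of field -> value in kB."""
    out = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            out[key] = int(rest.split()[0])
    return out

def arc_size():
    """Return the current ARC size in bytes from arcstats."""
    with open("/proc/spl/kstat/zfs/arcstats") as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            name, _type, value = line.split()
            if name == "size":
                return int(value)
    return 0

while True:
    m = meminfo()
    swap_used_kb = m["SwapTotal"] - m["SwapFree"]
    print(f"arc={arc_size() >> 20} MiB "
          f"free={m['MemFree'] >> 10} MiB "
          f"swap={swap_used_kb >> 10} MiB")
    time.sleep(5)
```

If swap climbs while the `arc=` number barely moves, that would match what I’m describing above.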
It’s a game of juggling, constantly changing based on total RAM, free memory, non-ARC memory, ARC data/metadata requests, etc., to provide the best balance of efficiency (reading from RAM rather than the pool’s physical storage) vs. flexibility (enough slack for system services, processes, and other non-ARC memory needs).
That target size can fluctuate anywhere between the min/max allowable values. ZFS is usually pretty good at actually storing in the ARC the amount of data defined by its target size. (As seen in your example.)
Another way to think of the target size: “If this is the general state of my system, then ZFS will aim and work for an ARC to be this size and stay that way.” Meaning that you’ll have more data/metadata evictions the smaller the target size is, and fewer evictions the larger the target size is. You can think of it as a “separate RAM stick with a defined capacity”. Of course, this imaginary “RAM stick” can also dynamically adjust based on many variables.
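To make that concrete, here’s a small sketch (assuming Linux with arcstats under `/proc/spl/kstat/zfs/arcstats`) that prints the target `c` next to its `c_min`/`c_max` bounds and the `size` actually cached:

```python
# Sketch: print the ARC target (c) beside its bounds (c_min, c_max) and
# the amount actually cached (size), all taken from arcstats.
wanted = {"c_min", "c", "c_max", "size"}
values = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f.readlines()[2:]:  # skip the two header lines
        name, _type, value = line.split()
        if name in wanted:
            values[name] = int(value)

for key in ("c_min", "c", "c_max", "size"):
    print(f"{key:>5}: {values[key] / 2**30:.2f} GiB")
```

On a healthy system, `size` hovers close to `c`, and `c` itself drifts between `c_min` and `c_max` as conditions change.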
Why does TrueNAS SCALE use flash-based storage to hold the ZFS cache?! Are you kidding me? You’re supposed to keep the ARC in RAM. Now everything makes sense.
This gives me a good excuse to tell Mrs. Potatohead: I need more RAM to run the next version of TrueNAS. Yes, I need to upgrade to the next version for security updates!
We can confirm that the significant contributor to the issue reported in this thread and others is a Linux kernel change in 6.6. While the swap adjustments proposed can mitigate the issue to some degree, the multi-gen LRU code in question will be disabled by default in a subsequent Dragonfish patch release, which should resolve the problem. This can be done in advance by following the directions below from @mav. Thanks to all of the community members who helped by reporting the issue, which aided in reaching the root cause. Additional feedback is welcome from those who apply the change below.
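For those curious what the change amounts to: at runtime, disabling multi-gen LRU comes down to writing `0` to the kernel’s `lru_gen` toggle. The sketch below illustrates the idea only; defer to @mav’s directions for the supported, persistent way to apply this on SCALE.

```python
# Sketch: disable multi-gen LRU at runtime by writing 0 to the kernel's
# lru_gen sysfs toggle. Requires root, and does not persist across
# reboots on its own.
LRU_GEN = "/sys/kernel/mm/lru_gen/enabled"

with open(LRU_GEN, "w") as f:
    f.write("0")

# Read it back to confirm; the kernel reports a feature bitmask.
with open(LRU_GEN) as f:
    print("lru_gen enabled bitmask:", f.read().strip())
```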