RAM Size Guidance for Dragonfish

The evidence so far seems to be that there are two safe modes:

  1. ARC = 50% … swap can be enabled
  2. ARC = 9x% … swap should be disabled

The third mode (ARC = 9x% with swap enabled) seems to be sensitive to applications’ use of memory.

This makes some sense: the ZFS ARC hogs the RAM and forces apps and middleware into swap space.
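For anyone who wants to try either safe mode by hand, a minimal sketch at the shell (assuming a Linux/SCALE-style system with the stock OpenZFS module parameters; the 50% figure is just mode 1’s number, not a recommendation):

    # Mode 1: cap ARC at 50% of RAM and leave swap enabled (run as root).
    TOTAL_BYTES=$(awk '/^MemTotal/ {print $2 * 1024}' /proc/meminfo)
    echo $((TOTAL_BYTES / 2)) > /sys/module/zfs/parameters/zfs_arc_max

    # Mode 2: leave ARC at the Dragonfish default and disable swap instead.
    swapoff -a

Writing to /sys/module/zfs/parameters only lasts until reboot; on a generic Linux box you would persist it with an options line in /etc/modprobe.d.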

3 Likes

What about this, on which we’re still waiting to hear some experiences:

  1. ARC Max = “FreeBSD-like” (the Dragonfish default, RAM - 1 GiB) + swap enabled + “swappiness” set to 1

EDIT: Or maybe not. It’s already not looking like a viable option.
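Even so, for reference, that option at the shell would look roughly like this (vm.swappiness is the standard Linux knob; 1 tells the kernel to swap only under real memory pressure, versus the default of 60):

    # Dragonfish already defaults ARC max to RAM - 1 GiB, so only
    # swappiness needs changing; this setting lasts until reboot.
    sysctl vm.swappiness=1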

I have swap enabled and ARC at 70% or so (memory fails me). The “safe modes” would be system dependent, of course, but I get your point. That’s how it’s always been done outside of TrueNAS: the admin picks a number. But you wanted to improve on that and let the system handle it on its own. Without swap, though, I don’t see how you can have zero OOM errors under memory pressure with ARC filling RAM. You must have a way to avoid that; it’s always been a problem in the old days.

ZFS does hog the RAM, of course. If ZFS takes it first and something else needs it (which in the past was rare, except for things like VMs), ZFS can’t evict fast enough, which means swap. If your swap space is on an HDD, then it’s not fast at all. I know they’ve made changes to the ARC, such as this one and others.
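Incidentally, a quick way to check whether your swap really does live on spinning disks, using standard util-linux tools:

    # List active swap devices, then see which block devices are rotational (ROTA=1).
    swapon --show
    lsblk -o NAME,TYPE,ROTA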

There are still lots of OOMs happening in OpenZFS; a partial list of interesting ones is below (not the ones about corruption, as that is different). But the proof, of course, is your distribution base: if swapoff resolves all of the issues without OOMs, then that’s OK, I guess. You have allowed the ARC to grow to almost the full memory size, so this is much different from the way people have typically run ZFS. Time will tell, for sure!

While your enterprise customers have no issues and all have plenty of memory, my concern would be for the armchair Plex guys running converted ten-year-old home computers with very little memory. They can be the ones with plenty of overuse. I guess guidelines can be changed, etc. If swap is simply incompatible now with a ZFS that is allowed to fill RAM with ARC, then your installer shouldn’t be adding it, or asking to add it, anymore. But beyond telling people here, what about the people who don’t use the forums? How will they know to disable swap? Or maybe you can make it part of an update?

2 Likes

We haven’t concluded on the issue. When we have, we’ll work out short-term guidance and a technical solution for longer-term safety.

1 Like

Not on FreeBSD though, so… how old?

1 Like

Right, so, we are on the Tag labelled SCALE, lol.

Yeah, but BSD and Linux share common origins… hence my question about how old, exactly, your “in the old days” was.

Say what?

1 Like

Linux being inspired by Unix, and BSD being originally based on Unix, means they have a common origin in some sense.

Anyway, my question was a masked “how far back was this happening?”.

That was more to play into my mocking myself earlier. But to answer: I am speaking of the time before Linux systems allowed the ARC to fill up to max memory instead of 50%. I see these pop up in ZFS discussion groups, so it’s not very old. While they have a common origin, as you know, their memory management and ZFS behaviour are not even close to the same.

To clarify (still ambiguous, I know): I mean those manually setting ARC above 50%, but obviously too high.

2 Likes

I thought it might be nice to have a possible good news story… which is also an interesting data point…

I have a very low end… very much below minimum specs backup server running TrueNAS Scale.

It’s an Intel Core 2 Quad with 4GB of RAM running off a pair of USB disks. Yes. I’m naughty, and don’t deserve any cookies.

It’s been working well since upgrading to Dragonfish 24.04 final.

Previously, with Cobia, you could see swap was always in use after boot… but since upgrading to Dragonfish: 0 bytes. Heh.
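For anyone wanting to make the same before/after comparison on their own box, the standard Linux tools are enough (nothing SCALE-specific here):

    # Current swap usage at a glance.
    free -h
    swapon --show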

Maybe it’s too early to say… hard to tell, since TrueNAS SCALE only keeps 1 week’s worth of reporting (at least in Cobia).

Will keep an eye on this system… over time. It receives replications every hour.

The curious thing is that I do not have SMB or NFS services active on this system, only SMART and SSH.

It’s a replication target, that is all.

2 Likes

That’s really weird! I presume it’s a backup target system? Maybe you don’t have enough ARC to cause the issue. What does the ARC reporting look like?

I’m putting in Prometheus to capture data so I can keep what I want. Even with iX supposedly expanding the retention, I want more useful info, like VM resources, Kubernetes app resources, etc.
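As for the ARC reporting question: the usual way to pull those numbers is the arc_summary tool that ships with OpenZFS (section flags vary a bit by version; plain arc_summary always works):

    # Print just the ARC section of the report.
    arc_summary -s arc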

Yes, it’s a backup target.

ARC is still growing… we shall see :wink:


The 1-hour CPU chart, to show it does experience some load :wink:

Impressive that it survived the backup, assuming that’s what I’m seeing. So the solution to the problem isn’t more memory, it’s less! :clown_face:

I have a 3GB backup ZFS target, but it’s not SCALE, just Debian.

2 Likes

So I went into the UI of this backup system and started interrogating snapshots: sorting them, deleting a bunch of snapshots for the .system dataset that I’d taken accidentally, etc.

This triggered a bit of swap. Looking at the 1-day view, it peaks at 543MB, then drops to 130MB.

What I think is interesting is what the memory usage looked like when this happened. ARC didn’t really recede much, forcing “used” to get paged out as “free” dropped.

And zooming in on the peak and after the peak:


“Cached” appears to be ARC, according to the dashboard numbers.

It looks to me like it prefers to swap out rather than shrink the cache, or the swapping occurs faster than the cache drops.

Don’t get me wrong: as it is, I don’t really care, since the system was working fine. But if the issue is that the cache is forcing swap to be used because memory is full and the cache is not making way…
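If anyone wants to watch the same race between cache and swap from the shell, here is a rough sketch against the standard /proc interfaces (the 5-second interval is arbitrary):

    # Sample ARC size/target and swap usage every 5 seconds.
    while sleep 5; do
        awk '$1 == "size" || $1 == "c" {printf "%s=%.0f MiB  ", $1, $3/1048576}' \
            /proc/spl/kstat/zfs/arcstats
        awk '/^Swap(Total|Free):/ {printf "%s %s kB  ", $1, $2}' /proc/meminfo
        echo
    done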

1 Like

At 1-hour zoom, going back to the event…

Cached didn’t really budge… did it :wink: Here’s arc_summary from this box:

ARC size (current):                                    34.5 %  999.4 MiB
        Target size (adaptive):                        36.9 %    1.0 GiB
        Min size (hard limit):                          4.2 %  122.6 MiB
        Max size (high water):                           23:1    2.8 GiB
        Anonymous data size:                          < 0.1 %  132.5 KiB
        Anonymous metadata size:                        0.1 %  796.5 KiB
        MFU data target:                               37.7 %  346.6 MiB
        MFU data size:                                 30.8 %  283.3 MiB
        MFU ghost data size:                                    59.0 MiB
        MFU metadata target:                           14.1 %  129.7 MiB
        MFU metadata size:                             13.6 %  125.5 MiB
        MFU ghost metadata size:                               146.8 MiB
        MRU data target:                               36.0 %  331.2 MiB
        MRU data size:                                  9.8 %   90.5 MiB
        MRU ghost data size:                                    54.5 MiB
        MRU metadata target:                           12.2 %  112.2 MiB
        MRU metadata size:                             45.6 %  419.5 MiB
        MRU ghost metadata size:                               199.7 MiB
        Uncached data size:                             0.0 %    0 Bytes
        Uncached metadata size:                         0.0 %    0 Bytes
        Bonus size:                                     0.5 %    4.9 MiB
        Dnode cache target:                            10.0 %  289.9 MiB
        Dnode cache size:                              12.1 %   35.0 MiB
        Dbuf size:                                      0.8 %    7.9 MiB
        Header size:                                    1.8 %   17.9 MiB
        L2 header size:                                 0.0 %    0 Bytes
        ABD chunk waste size:                           1.4 %   14.0 MiB

Where does the “Target size (adaptive): 36.9 % 1.0 GiB” come from? 25% of RAM? A hard floor?

It’s a game of juggling, constantly changing based on total RAM, free memory, non-ARC memory, ARC data/metadata requests, etc., to provide the best balance of efficiency (reading from RAM rather than from the pool’s physical storage) vs. flexibility (enough slack for system services, processes, and other non-ARC memory needs).

That target size can fluctuate anywhere between the min/max allowable values. ZFS is usually pretty good at actually storing in the ARC the amount of data defined by its target size. (As seen in your example.)

Another way to think of the target size: “If this is the general state of my system, then ZFS will aim and work for an ARC to be this size and stay that way.” Meaning that you’ll have more data/metadata evictions the smaller the target size is, and fewer evictions the larger the target size is. You can think of it as a “separate RAM stick with a defined capacity”. Of course, this imaginary “RAM stick” can also dynamically adjust based on many variables.
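For the curious, that adaptive target is the c kstat, and you can read it (and its floor and ceiling) directly, assuming the stock /proc/spl/kstat path:

    # c is the adaptive target; it floats between c_min and c_max.
    awk '$1 ~ /^c(_min|_max)?$/ {printf "%-6s %8.1f MiB\n", $1, $3/1048576}' \
        /proc/spl/kstat/zfs/arcstats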

I figured it out, thanks to the power of A.I.!


[Screenshot: an A.I. search answer, “arc-is-flash”, claiming the ZFS ARC is held on flash storage]


Why does TrueNAS SCALE use flash-based storage to hold the ZFS cache?! Are you kidding me? :face_with_symbols_over_mouth: You’re supposed to keep the ARC in RAM. Now everything makes sense.

A.I. saves the day, once again! :partying_face:

3 Likes

FYI.

The TrueNAS Engineering team is making progress on this issue and is testing some scenarios.

We expect to be able to recommend the best mitigation, and plans for a fix, on Friday.

5 Likes