Going insane, truenas webUI dies, again

I think this is a saner solution than outright disabling swap. The last thing someone wants their NAS server to do is crash or have applications fail due to OOM.

1 Like

Perhaps my tongue was not firmly in my cheek enough.

The 2nd guy was me btw.

1 Like

But setting it to zero does disable swapping (according to what I found), causing OOM.

And setting it to 1 doesn’t work.

1 Like

Why can’t we just have nice things? :cry:

2 Likes

That's what I was trying to say (eliminating all swap space is typically a bad idea), but as always I never communicate it terribly well. I know people from FreeBSD think of swap differently, and for good reason! As a mitigation until they find the cause of this web UI/TrueNAS issue, I probably would set swappiness to 0 (or maybe swapoff), now that I know they changed its meaning, if it solved my issues until fixed (but with a smaller ARC, as I don't like risk). I would re-enable it once fixed, though, and definitely wouldn't eliminate the partitions. But hey, it doesn't happen to me, as I never load the x.0 releases anyway. I see they changed what 0 swappiness means now! I'm too old. Everything always ends up "in the old days…".

Setting it to 0 should not eliminate all OOM risk, since you can still run out of memory with no swap to fall back on. Maybe it solved some issues, though. The way I think of it, I just don't want a system crashing due to OOM. That's bad, and can even corrupt data.

Setting it to 1 SHOULD reduce use of swap space (unless you are undersized on memory, of course), making it less likely that any swap gets used. But it's probably not terribly useful for this particular problem. In the old days I always set swappiness below 60 on Debian and Ubuntu.

But at least I learned something new today! Can't teach an old dog new tricks, huh? Here's to hoping iX can find the actual answer (whether it's one thing or several) so everyone who updated and has the problem can go back to normal. :crossed_fingers:

2 Likes

What does everyone think of this?

I use zram on my Raspberry Pi 5 desktop. I have 32GB of zram and 8GB of RAM, and it's fast (an advantage of not using an overpowered machine is for debugging, or seeing how well something actually performs). No idea how it interacts with ZFS; never tried it. I have been impressed with zram. I guess for low-memory machines it might be a loss, since it's taking away some RAM. For larger-memory machines it's percentage-wise a tiny amount, so who cares. Not sure about in the middle, but it's an interesting idea. It's for sure FAST.
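For anyone curious, here's a minimal sketch of what a zram swap setup looks like on a generic Linux box. All commands need root, the device name assumes nothing else has claimed zram0, and this is not TrueNAS-specific advice:

```shell
# Load the zram module (creates /dev/zram0 by default)
modprobe zram

# Give the compressed-in-RAM block device a 4 GiB logical size
echo 4G > /sys/block/zram0/disksize

# Format it as swap and enable it with a higher priority than disk swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```

Because zram only allocates RAM as pages are actually swapped in, the device costs almost nothing while idle, as noted below.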

But if nothing ever gets “swapped”, the zram remains at 0% size. It only dynamically grows/shrinks based on swapping in and out. (As far as I understand.)

Yes, I believe that is correct. I’m just saying if you do need to use the swap, it might be worse for low memory machines (as far as this problem). Not sure, didn’t ponder it much. But that is how zram works.

Wow, learned 2 things in one day. I just looked, and the Arch wiki says one of the most common uses is as swap. I never used it as swap; I use it more for in-memory storage like caches and the like. I have to quit learning today, I can only take so much any more. :older_adult:

1 Like

Great, we have a solution. Now, for newbies, how do you proceed to set swappiness to 1, exactly?

Can you link the source where setting swappiness to 0 causes OOM? From what I found, swappiness 0 just means the system won't swap unless absolutely necessary; it should still start swapping if it deems physical memory exhausted.

I think it's viable to try, but it has a small possibility of ending up disastrous with memory looped back on itself, assuming my understanding is correct. I think swappiness 0 is safer than the zram approach; besides, "regular" swapping shouldn't cause the issue we are observing. If it's the case that we are experiencing very unusual swapping, then zram would be constantly swapping as well.

Eh… I am not sure swappiness is the workaround at the moment, but FYI:
/proc/sys/vm/swappiness
I might be wrong.
What's known to be working is limiting ARC to 50%.
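For the newbies asking: a quick sketch of checking and changing that value from the CLI. Reading it works as any user; writing it needs root, and the change does not survive a reboot:

```shell
# Check the current swappiness value (readable by any user)
cat /proc/sys/vm/swappiness

# To set it to 1 for the running session (root required, not persistent):
#   sysctl vm.swappiness=1
# or equivalently:
#   echo 1 > /proc/sys/vm/swappiness
```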

1 Like

This is my main concern, since there might exist a “perfect storm combination” of Linux + ZFS + ARC configured to mimic FreeBSD + constantly swapping

It's the "constantly swapping" that seems to be the canary in the coal mine. Currently, it means thrashing your SSD (or HDD). With zram, it would probably mean constantly compressing/decompressing in RAM (though a reasonable limit on the zram size should be enforced).

So in the meantime there seem to be two temporary solutions:

  • Reset the ARC's high-water mark to the Linux default of 50% of RAM
  • or, completely disable swap
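For the first option, a small sketch of computing that 50% figure from /proc/meminfo. Applying it is a one-line write to the ZFS module parameter (root required, and not persistent across reboots):

```shell
# Compute 50% of total RAM in bytes (MemTotal is reported in KiB)
HALF_RAM=$(awk '/^MemTotal/ {printf "%d", $2 * 1024 / 2}' /proc/meminfo)
echo "candidate zfs_arc_max: ${HALF_RAM} bytes"

# To apply immediately (root required):
#   echo ${HALF_RAM} > /sys/module/zfs/parameters/zfs_arc_max
```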

I believe there might be a solution that can incorporate both: Figure out why SCALE is aggressively swapping under normal loads, and rather than just disabling swap altogether (which could result in OOM for some use-cases), have instead zram to fill in the role of swap, which shouldn’t (hopefully) be used much, if at all. (But at least it will be there as a safety cushion to prevent OOM if the system is nearing its limits.)

3 Likes

Thank you, but how do I get there from the CLI? I'm a newbie here.

I do not believe swap was a non-issue in COBIA. My system was swapping even then with 50% of RAM free. It's a simple 16GB home file server with 2 users making requests once in a while. I saw lots of swapping whenever I was copying several files to it.

Interestingly, mine is using precisely 50% (64GB) without doing anything.

I’ve found an okay solution is to do the following:

  • Set zfs_arc_max to 90% of my memory (121437160243)
  • Set zfs_arc_sys_free to 8GiB (8589934592)
  • Run swapoff -a

Even under high memory pressure this seems to be working well to avoid OOM conditions and will just flush ARC when available system memory drops below 8GiB to get itself back within limits. I don’t use a massive amount of memory for services (31 of 120GiB as of right now) so leaving 8 or even 16GiB free is good for me. Though I imagine it could work with less.

I’ve been using this that I picked up from somewhere, probably the old TrueNAS forums at some point, and modified a little.

#!/bin/sh

PATH="/bin:/sbin:/usr/bin:/usr/sbin:${PATH}"
export PATH

# Cap ARC at this percentage of total RAM
ARC_PCT="90"
# Make ARC shrink whenever free system memory drops below this many GiB
SYS_FREE_GIG="8"

# MemTotal is reported in KiB; convert to bytes and take the percentage
ARC_BYTES=$(grep '^MemTotal' /proc/meminfo | awk -v pct=${ARC_PCT} '{printf "%d", $2 * 1024 * (pct / 100.0)}')
echo ${ARC_BYTES} > /sys/module/zfs/parameters/zfs_arc_max
echo zfs_arc_max: ${ARC_BYTES}

SYS_FREE_BYTES=$((${SYS_FREE_GIG}*1024*1024*1024))
echo ${SYS_FREE_BYTES} > /sys/module/zfs/parameters/zfs_arc_sys_free
echo zfs_arc_sys_free: ${SYS_FREE_BYTES}

# Disable all swap devices
swapoff -a
4 Likes

Getting swapped isn't an issue in itself; right now in Dragonfish there's abnormal swapping that causes issues like the web UI freezing, 100% CPU usage, a very hot SSD, throttled transfers, etc. You can change the values in the CLI, but that won't be persistent. If you want it to be persistent, go to System > Advanced Settings > post-init commands and add it there, so TrueNAS will auto-apply it on each reboot.
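As a sketch, a post-init entry might look something like the following. The values here are made-up illustrations, not recommendations; size zfs_arc_max for your own RAM:

```shell
# Re-apply tunables on every boot (example values only)
sysctl vm.swappiness=1
echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max   # hypothetical 32 GiB cap
```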

1 Like

I believe this is just a coincidence. The ARC cache by default in Dragonfish uses RAM minus 1GB, like FreeBSD. That said, in actuality it won't use quite up to that limit; it will leave around 5~10% free. Seeing you are using 45GB in services, if you reduce the actual usage of your services, apps, etc., you are very likely to see your ARC go up. The ~15GB free is expected behavior for Dragonfish's default dynamic ARC allocation.
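To make the two caps being discussed concrete, here's a rough side-by-side computed from /proc/meminfo (assuming the "RAM minus 1GB" description above; 1 GiB = 1073741824 bytes):

```shell
# Dragonfish-style default (RAM minus 1 GiB) vs. the old 50%-of-RAM cap
awk '/^MemTotal/ {
    total = $2 * 1024                                  # MemTotal is in KiB
    printf "RAM - 1 GiB : %d bytes\n", total - 1073741824
    printf "50%% of RAM : %d bytes\n", total / 2
}' /proc/meminfo
```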

Then how do we revert to 50%? Where should I do that, and how do I get there?