Honestly, that’s not a good sign. It means that “swappiness = 1” doesn’t work as advertised. It shouldn’t be swapping at all, unless the RAM is truly at its very limit and cannot release anything.
That won’t really indicate much, since it’s the constant swapping in and out that causes the slowdowns. (In other words: the system acts like it’s low on RAM, even though it’s not.)
Keep an eye out for sluggishness and slowdowns, even if the “used swap” remains relatively low.
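If you want to watch for that churn directly rather than inferring it from the “used swap” number, vmstat works on any Linux box; the si/so columns are pages swapped in and out per second, and anything persistently non-zero means active swapping:

    # sample once per second; watch the si/so columns under "swap"
    vmstat 1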
Not quite 24 hours, but it’s close enough. swapoff -a has definitely resolved the slowdown I was experiencing. It’s really hard to tell with these things, but CPU utilization seems to have dropped as well. It was never really ‘high’, but it looks lower now.
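For what it’s worth, swapoff -a only lasts until the next reboot, and since TrueNAS manages swap itself I wouldn’t assume it stays off on its own. A quick sanity check that nothing is active (standard util-linux commands, nothing TrueNAS-specific):

    swapoff -a          # deactivate all swap devices now
    swapon --show       # prints nothing if no swap is active
    free -m             # the "Swap:" row should show 0 total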
This graph is kind of difficult to see, but notice how the iowait (purple) was higher until I disabled swap ~2 days ago. The couple of spikes were, of course, from when I rebooted to test without the other sysctl changes.
Swap usage before/after disabling. No surprise that it’s no longer being used, but I’m unsure why it was ever used at all, considering the ‘mean’ of 40 GB of free RAM.
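If anyone wants to watch this without the reporting graphs, iostat from the sysstat package shows the same iowait figure at the console (assuming sysstat is installed; %iowait is the share of time the CPU sat idle waiting on disk I/O):

    # CPU utilization report every 5 seconds; watch the %iowait column
    iostat -c 5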
It used to be, but modern versions of the Linux kernel treat “0” as “disabled” and “1” as the way “0” used to behave.
EDIT: Oh, I see what you mean.
I wrote “It shouldn’t be swapping at all” because there is no reason for their system to resort to swapping, given the amount of RAM and the low memory pressure. A value of “1” is advertised as “swap used only if absolutely necessary,” but from what we can observe, that’s clearly not the case.
My guess is it has to do with ZFS/ARC and Linux. (Swap handling still dates from the pre-ARC days.)
Yes, that’s the “supposed” behavior, but it’s not working as expected. There is no emergency, yet it still wants to swap to disk when there’s plenty of free RAM. (See my post above; I edited it.)
Well, I don’t see it. Was the system rebooted after changing the setting? Depending on how it was changed, it may or may not have taken effect. Also, what was the source for the new swappiness behavior; where did you read that it changed? I wonder if that applies to a newer kernel than SCALE is using. Everything I am reading says it’s the same as it’s always been, meaning 0 gives the smallest possible amount of swapping (but not none). If that’s the case, it might still swap even at 0.
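Checking is cheap, at least. A change made with sysctl -w takes effect immediately but doesn’t survive a reboot unless it’s also persisted somewhere, so it’s worth reading the live value back:

    sysctl vm.swappiness            # the value the running kernel is using
    cat /proc/sys/vm/swappiness     # same thing, via procfs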
One interesting thing: prior to disabling swap, my boot drive was getting hit pretty hard. (See What is writing to my boot drive? - TrueNAS General - TrueNAS Community Forums.) As soon as I turned off swap, the hammering on the boot disk stopped. With swappiness at 1, the boot disk is still pretty darn quiet. So far, just changing swappiness from 60 to 1 seems to have been a good move.
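For anyone else chasing boot-drive writes, this is roughly how I’d narrow down the writer (assuming the tools are available; iotop usually has to be installed separately, while pidstat ships with sysstat):

    iotop -oPa      # only processes actually doing I/O, totals accumulated
    pidstat -d 5    # per-process disk read/write rates every 5 seconds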
Based on reading the other thread with the iX comments, I think the way forward will be to disable swap. They’ll have to change the memory recommendations. Not for you, though! Interesting experiment. See how it is in a day or so and report back.
Are you SURE swap is back on? The mdstat output didn’t show any swap partitions. Maybe that changed in Dragonfish, but I wouldn’t think so. What does free -m show?
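To spell out what I’d look for: active swap shows up in a few places, and they should all agree (mdstat is relevant here because, as above, the swap partitions would normally appear as md devices):

    swapon --show       # active swap devices/files
    cat /proc/swaps     # same info straight from the kernel
    cat /proc/mdstat    # md arrays, where the swap mirrors would show up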
Still, this is what it should do: it shouldn’t be swapping without a good reason. I’m interested in seeing the results; thanks for testing!
This is exactly the behavior I am seeing. I’ve been running FreeNAS/Core for years, currently virtualised in VMware 8.0 with an HBA passed through and a triple-mirror vdev. I spun up a new VM with the latest Dragonfish 24.04 a few days ago; it works for maybe 24 hours before the GUI becomes sluggish, and it’s unusable during a large file transfer. File transfers are incredibly fast after a fresh reboot, 550 MB/s+, but after a few hours the rate is all over the place, and I have seen it as low as 30 MB/s.
Unrelated to this, but I also experienced a scary moment where my passphrase-encrypted pool would not unlock following the install of an app and the creation of the ix-application pool, giving the message:
'/mnt/tank' directory is not empty (please provide "force" flag to override this error and file/directory will be renamed once the dataset is unlocked)
Luckily it looks like a GUI issue, as it unlocks fine in Core. I have reverted to TrueNAS Core for now and will monitor the bug fixes and try again on a future release.