TrueNAS SCALE for SOHO use cases

@Whattteva Generally speaking, you’re not wrong. Your point that NVMe is still orders of magnitude slower than RAM isn’t lost on me here, nor, I think, on OP, whose tagline reads:
Architect at Red Hat and long time Linux and Open Source user.
I think he’s aware. :slight_smile:

But, to @Steven_Ellis’s point, the “pain” of swapping is nowhere near as high when we’re talking about NVMe. Purely in terms of latency, NVMe is roughly two orders of magnitude faster than spinning drives: we’re talking hundreds of microseconds vs. tens of milliseconds.
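If you want to see that gap on your own hardware, a quick fio run against each device makes it concrete. This is just a sketch; the device path below is hypothetical, so point it at your own disk (--readonly keeps the test non-destructive):

```
# Hypothetical device path -- substitute your own NVMe or spinning disk.
# 4K random reads at queue depth 1 approximate the single, synchronous
# accesses that swap traffic looks like.
fio --name=swap-latency --filename=/dev/nvme0n1 --readonly \
    --direct=1 --rw=randread --bs=4k --iodepth=1 \
    --runtime=30 --time_based
```

On a typical NVMe drive the reported completion latency (clat) lands in the tens to hundreds of microseconds; run the same test against a spinning disk and you’ll see tens of milliseconds.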

Regarding the issue at hand, though, device speed has nothing to do with why swap is now off by default.

When I deploy Linux VMs for XYZ application, I typically give them swap roughly equal to the amount of RAM I give them. If your web server suddenly gets hit hard, you’d rather it start swapping to disk than have the kernel’s OOM killer take it down. The result is worse performance, but you’re still serving your data.
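For what it’s worth, here’s a minimal sketch of that setup, assuming a stock Linux VM on ext4/XFS (swap files on ZFS or btrfs are a bad idea; use a dedicated zvol or partition there instead):

```
# Size a swap file to match installed RAM (MemTotal is reported in kB).
RAM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
fallocate -l "${RAM_KB}KiB" /swapfile
chmod 600 /swapfile          # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```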

TrueNAS SCALE isn’t a simple web app, and it’s not even a simple file server. It’s got all sorts of processes competing for RAM, whether it be SMB, KVM workloads, Kubernetes Apps, or the big one, ZFS ARC.
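You can watch that competition directly. The ARC’s share, for instance, is exposed through the standard ZFS-on-Linux kstats (arc_summary gives a fuller report if you prefer):

```
# Current ARC footprint in bytes, from the standard ZoL kstat file.
awk '$1 == "size" {printf "ARC size: %.1f GiB\n", $3 / 2^30}' \
    /proc/spl/kstat/zfs/arcstats
```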

If you take a look at this thread: Dragonfish swap usage/high memory/memory leak issues - #62 by kris

You’ll find the reason swap was disabled has nothing to do with the relative speed of devices. It was a perfect storm of unpredictable, weird problems. What @Steven_Ellis did, by contrast, looks like a sane use of swap.

In my own limited testing before 24.04.2, I noticed that my system would start swapping for no reason, even though I had free memory and had set swappiness=1. Keep an eye on yours.
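If you want to do the same, a couple of stock commands are enough to catch it. A rough sketch:

```
# si/so (swap-in/swap-out, kB/s) should sit at 0 on a healthy box
# with free memory; sample every 5 seconds.
vmstat 5
# If they don't, see which processes are actually holding swap
# (VmSwap is reported in kB; it's the third ':'-separated field).
grep VmSwap /proc/*/status 2>/dev/null | sort -t: -k3 -rn | head
```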
