Hi,
I cannot offer details or benchmarks, just a subjective impression: running CORE, the ARC cache was always full, but since updating to SCALE that is no longer the case. Is somebody able to clarify?
Thanks & Best
Until a certain SCALE version there was a hard limit of 50% of RAM. This was a ZFS-on-Linux limitation. I believe 24.10 was the version that removed the 50% limit; now ARC behaves as it does on CORE.
Here’s an example of my memory usage on 25.10.2.1
Edit: it was actually 24.04 that changed the ARC behavior to match CORE.
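If anyone wants to check this on their own box rather than eyeball the dashboard graph: on Linux ZFS systems the current ARC size and cap are exposed in `/proc/spl/kstat/zfs/arcstats` (the `size` and `c_max` rows). Here is a small sketch that parses a made-up sample of that file with awk; the file path and field names are real, but the numbers below are example values only, not from any actual system:

```shell
# Sample excerpt in the format of /proc/spl/kstat/zfs/arcstats
# (the two values here are invented for illustration).
arcstats_sample='size                            4    17179869184
c_max                           4    34359738368'

# Pull the third column for each stat; on a live box you would read
# the real file instead, e.g.:
#   awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats
arc_size=$(printf '%s\n' "$arcstats_sample" | awk '$1 == "size" {print $3}')
arc_max=$(printf '%s\n' "$arcstats_sample" | awk '$1 == "c_max" {print $3}')

# Convert bytes to GiB for a readable summary line.
echo "ARC: $((arc_size / 1073741824)) GiB used of $((arc_max / 1073741824)) GiB max"
```

On the old SCALE releases with the 50% limit, `c_max` should sit at roughly half of physical RAM; on 24.04+ it should default much higher, as on CORE.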
I have noticed that it behaves differently. I seem to stay at about 50% of RAM as ARC with how I use TrueNAS. It also likes to flush ARC out of RAM for no apparent reason sometimes; my system is idle overnight, yet it does an ARC flush? The adaptive ARC is a bit odd to me.
Eh, not really. Closer, yes, but definitely not the same. Here’s my (main) 25.04 system, with about two months’ uptime:
And my backup system on 25.10.2:
Hmm, I wonder what's different with my system. That picture is with 7 days' uptime after the last TrueNAS update.
How is your system used? Mine is mostly idle unless I am syncing files over SMB from my Windows 11 computer. I use robocopy to back up / sync about 13 TB of files.
There are no Apps nor VMs. Mine is just storage.
One LXC container running 30 apps, including Jellyfin, Nextcloud, Paperless-ngx, the *arr stack, Vaultwarden, and some smaller apps… but most of the day, while I'm at work, it's also idle.
I’m not sure if you were asking Lars or me, but my main system is general storage, media storage, Plex and Jellyfin, along with several other Compose stacks. Dockge and Tailscale are the only official iX apps; the rest are Compose stacks managed by Dockge (and more recently Dockhand). Time Machine and Veeam target, etc.
The backup system is, as yet, only a replication target. Very little ARC there is unsurprising.
Here, just SMB and NFS…
The CPU load is definitely higher compared to CORE… maybe the ARC frequency has changed?
My ARC Size graph looks quite different.
Your graph reminds me of the CORE days and was the reason for my post…
I installed that yesterday afternoon, and this morning, I see this:
Encouraging. Let’s see if the periodic ARC flushes have gone away.
Keep us updated pls.
Hmm. It seems the underlying operating systems, FreeBSD (CORE) and Linux (SCALE), manage memory with ZFS in fundamentally different ways. It looks like an ongoing task to improve the behaviour (I copied that from somewhere), but I'm starting to question the "max out RAM first" rule of thumb with ZFS under Linux…
These flushes can probably be mitigated with a big enough zfs_arc_min, although that requires some planning…
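For reference, here is a sketch of how `zfs_arc_min` can be set on a generic Linux ZFS install. The 8 GiB value is an arbitrary example, not a recommendation, and on TrueNAS SCALE you would normally set tunables through the UI or an init/shutdown script rather than editing module config files by hand (changes made directly may not survive updates):

```shell
# Compute an ARC floor of 8 GiB in bytes -- example value only.
arc_min_bytes=$((8 * 1024 * 1024 * 1024))
echo "zfs_arc_min=${arc_min_bytes}"

# Apply at runtime (requires root, on a Linux system with ZFS loaded):
#   echo "$arc_min_bytes" > /sys/module/zfs/parameters/zfs_arc_min
#
# Persist across reboots on a plain Linux ZFS box via a modprobe option:
#   /etc/modprobe.d/zfs.conf
#     options zfs zfs_arc_min=8589934592
```

Sizing the floor means deciding how much RAM you are willing to deny to apps and VMs even under memory pressure, which is the "planning" part.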