RAM Size Guidance for Dragonfish

Curious, what value does this command yield?

cat /sys/module/zfs/parameters/zfs_arc_max

I removed my post-init value from Cobia before updating to Dragonfish, and it shows 0 for me on Dragonfish with default settings.

What does this reveal?

arc_summary | grep "Max size"

And how much physical RAM is available to the OS?

Total available Memory is 62.7 GiB

Interesting. So this “tweak” isn’t simply changing the parameter’s value upon bootup. They must have modified the ZFS code itself for SCALE?

Because “0” is the “operating system default”, which for upstream OpenZFS for Linux is 50% of RAM. However, even though you’re using “0” for the default… it’s set to exactly 1 GiB less than physical RAM. (AKA: The “FreeBSD way”.)
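
A quick way to see this for yourself (assuming a root shell) is to compare physical RAM against what ZFS reports:

free -b | awk '/^Mem:/ {print $2}'           # physical RAM in bytes
cat /sys/module/zfs/parameters/zfs_arc_max   # "0" means "use the OS default"
arc_summary | grep "Max size"                # what the ARC is actually allowed to grow to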

@bitpushr, do you find any relief to these issues if you apply this “fix”, and then reboot?

Confirm the change is in effect (after you reboot) with this command:

arc_summary | grep "Max size"

Simply outputs “0”

Take a look.

What about this?

I know it will require a reboot, so whenever it’s convenient for you.

I've set it: there's 64 GB in my system as well, so I just copied the command from the post you linked and set it to run at Post Init. I can't reboot the system at the moment, though, and after the change I'll need to observe system behaviour for at least 24 hrs to see if there are any differences.

Don’t do that! The user has 128 GiB of RAM, not 64!

You need to calculate what 50% of your RAM is to use for that value.
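
If you'd rather not hardcode a number, a post-init command along these lines (just a sketch, assuming a root shell; double-check it on your own system) computes 50% of physical RAM in bytes and writes it to the tunable:

echo $(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) * 1024 / 2 )) > /sys/module/zfs/parameters/zfs_arc_max

MemTotal in /proc/meminfo is reported in KiB, hence the multiplication by 1024 before halving. After a reboot, arc_summary | grep "Max size" should then report roughly half of the installed RAM.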

Ah, true, nice catch. Done. Will chime back in probably in a couple of days’ time once I’ve found a window to reboot the system and give it a day or two to observe.

Thanks for this.
This will help to see if there are any specific patterns for when the issue occurs.

Just chiming in with an update: I've just rebooted the SCALE system after applying the previously recommended changes.

Running "arc_summary | grep “Max size” gives a result of ‘Max size (high water): 16:1 32.0 GiB’

I’ll now leave the system completely unattended, as I normally would, for at least 24hrs before looking back at it and chiming back in.

62 TrueCharts apps and 256 GB RAM.

I updated to DF yesterday and I'm having terrible performance issues, pretty much the same as on Cobia. I cleaned up all my snapshots and the system is stable apart from the apps: editing and saving an app takes a very long time, and the whole UI often crashes and gets stuck on the login loading page for 5 minutes.

I have migrated another server with fewer apps and 32 GB RAM, and that server is snappy and responsive.

Please iX, hear me out: fix the rubbish GUI. It makes no sense for a machine with 256 GB and 72 cores to feel this broken.

Just seeing this reply to my post now. FYI, it now shows as "34360000000". I can reply again once the post-init workaround is removed in the future, if needed.
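
(For reference: 34360000000 bytes is approximately 32 × 1024³ = 34,359,738,368 bytes, i.e. a 32 GiB ARC ceiling, which arc_summary would display as 32.0 GiB.)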

After upgrading to Dragonfish, my system would lock up completely and stop responding to network and console when running Cloud Sync jobs (even a "dry run" would cause a crash). I am syncing with Backblaze and have the "--fast-list" option checked (I don't know if that makes a difference, though).
Limiting ARC to 50% solved this, and the system now seems to be running stably again.
This is a ten-year-old server that has been absolutely stable through all upgrades of FreeNAS/TrueNAS. It has 16 GB RAM and runs a number of Docker containers (MariaDB, Grafana, Nextcloud, InfluxDB, Plex, etc.).

Dropping my own thread in as it looks to be related to swap as well. The issue with the boot drive can be ignored as it’s unrelated and I think that’s my own fault for rebooting the system so readily (though it certainly was an odd issue).

As I mentioned in the most recent post, swap usage is way up compared to Cobia, so I'm playing around with ARC limits to make sure I don't hit this. I've also temporarily disabled swap outright to avoid running into issues while I work.
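
(For anyone following along: temporarily disabling swap from a root shell is just the stock Linux command below; the change doesn't persist across a reboot, and swapon -a re-enables it.)

swapoff -a    # disable all currently active swap devices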

@kris @Captain_Morgan

Is zram a viable alternative to outright disabling swap?

I’ve had success with it on Arch Linux (non-server, non-NAS), but I’m wondering if it would serve SCALE users well?

  • No need for a swap partition / swap-on-disk
  • Anything that needs to be “swapped” will remain in RAM in a compressed format (ZSTD)

So under ideal conditions, it never gets used. However, to prevent OOM situations, there’s a non-disk safety net that should theoretically work at the speed of your RAM.

I'm not sure if there are any caveats to running zram alongside ZFS, however.
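
For illustration only (this isn't a supported SCALE knob, and the device name, size, and priority below are assumptions), a zram swap device is normally set up on a generic Linux box along these lines:

modprobe zram                                   # load the zram module (creates /dev/zram0)
zramctl /dev/zram0 --algorithm zstd --size 8G   # compressed, RAM-backed block device
mkswap /dev/zram0                               # format it as swap
swapon --priority 100 /dev/zram0                # prefer it over any disk-backed swap

Because the device lives in RAM and compresses whatever is pushed to it, swapping to it is far faster than swapping to disk, at the cost of some memory under pressure.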

Love zram. You'd still have to define where Linux swap space lives, but it could use zram, I suppose. It's very, very fast; a souped-up tmpfs of sorts.
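
On distros that ship systemd's zram-generator (SCALE doesn't expose this as a setting, so treat it purely as an illustration), defining that swap space is a tiny config file, e.g. /etc/systemd/zram-generator.conf:

[zram0]
zram-size = ram / 2
compression-algorithm = zstd

systemd then creates, formats, and enables the compressed swap device automatically at boot.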

Interesting!
