SCALE 24.04 and the zfs.conf file

I am looking to do some performance tuning for my SCALE 24.04 installation. From searching the internet, it appears that you can group many tuning options inside the /etc/modprobe.d/zfs.conf file.
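For reference, such a file is a standard modprobe config fragment. A minimal example (the option names are standard OpenZFS module parameters; the values here are purely illustrative):

```
# /etc/modprobe.d/zfs.conf -- example values only
options zfs zfs_arc_max=429496729600
options zfs zfs_bclone_enabled=1
```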

However, after creating the file, setting the options, and rebooting the server, the options do not change. From my reading, you must also issue the command “update-initramfs -u” every time you change the zfs.conf file.

A user on Reddit suggests that the update-initramfs approach is broken in 24.04.

My question: is zfs.conf a supported method for setting many tuning options on TrueNAS SCALE? If so, will it survive version upgrades? Or should I place all the commands I want to issue in the Init/Shutdown Scripts section as post-init “commands”?

You’re treating TrueNAS like a vanilla OS. It’s not. It’s an appliance.

If you really want to modify certain ZFS module parameters, you can manually read/set them with cat/echo.

An example of reading the value:

cat /sys/module/zfs/parameters/zfs_bclone_enabled

And to set the value:

echo 1 > /sys/module/zfs/parameters/zfs_bclone_enabled

To make such a change persistent, you need to add it as a Pre-Init command, which is done via the GUI.

So if I wanted to update 5, 10, 20+ ZFS parameters, the correct solution is to go into the GUI and manually create a separate entry for each command? That seems strange.

Also, for ZFS parameters, the guides I read for TrueNAS show them as post-init commands, but you said pre-init. Which should I select?

You could create a custom script that does this, keep it somewhere on one of your main pools, and then invoke it as a Pre-Init “Script” (instead of a single “Command”). Same menu in the GUI.
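A minimal sketch of such a script. The set_zfs_param helper and the parameter values are examples, not an official recipe; on a live system the parameter directory is /sys/module/zfs/parameters, left overridable here only so the logic can be tried safely:

```shell
#!/bin/sh
# Sketch: apply several ZFS module parameters in one pass at boot.
# On a real system PARAM_DIR is /sys/module/zfs/parameters.
PARAM_DIR="${PARAM_DIR:-/sys/module/zfs/parameters}"

set_zfs_param() {
    name="$1"; value="$2"
    if [ -w "$PARAM_DIR/$name" ]; then
        # sysfs parameter files take a single value per write
        echo "$value" > "$PARAM_DIR/$name"
    else
        echo "skipping $name: not present or not writable" >&2
    fi
}

# Example tunings -- substitute your own parameters and values.
set_zfs_param zfs_bclone_enabled 1
set_zfs_param zfs_arc_max 429496729600
```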

Keep in mind that tuning like this will likely get your bug reports closed (for good reason) if your system starts exhibiting problems.

Probably won’t make much of a difference, but setting a ZFS module parameter does not require waiting for anything later in the boot process once the zfs module is loaded.

It depends on how early after boot you want these parameters to be applied.

Would the custom script just have the same syntax as the commands to issue?

echo 1 > /sys/module/zfs/parameters/zfs_bclone_enabled
echo 429496729600 > /sys/module/zfs/parameters/zfs_arc_max

Will creating and executing these scripts as the admin user be sufficient? Do I need to do any chmod or set permissions to make the scripts usable by TrueNAS during boot?

Yes, but it should be set as root/wheel ownership, for consistency.

Probably best to lead with the typical shebang line, such as:

#!/bin/bash

<do stuff here>

Nothing special. Just readable and executable by root is the minimum requirement. (The dataset on which it lives must have exec=on.)
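As a concrete sketch of that ownership/permission advice (the script path is hypothetical; adjust it to wherever the script actually lives on your pool):

```shell
#!/bin/sh
# Give a startup script root ownership and minimal permissions.
prepare_script() {
    chown root:root "$1"   # ownership change requires running as root
    chmod 750 "$1"         # root may read and execute; no access for "other"
}

# Hypothetical location on a data pool -- substitute your own.
if [ -e /mnt/tank/scripts/zfs-tune.sh ]; then
    prepare_script /mnt/tank/scripts/zfs-tune.sh
fi
```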

If you don’t want to give it executable permissions (nor do you want the dataset to have exec=on), then you can invoke the script through an interpreter, e.g. sh /mnt/pool/path/to/ In this case, you would set it as a “Command” in the GUI, not as a “Script”.

Any such commands/scripts invoked by this method are always executed by the root user.

This will be unnecessary in the next patch release of Dragonfish.


Can the scripts be placed inside the admin user’s home directory?
Will that survive upgrades?

I’ve seen some threads where people put them on the data pools somewhere, and others that suggest putting them inside the admin user’s home directory (which I like, since it’s a bit cleaner).

Do you mean the user “root”? If so, then “yes, but with a catch.” You will lose the script if you ever replace the boot-pool, even if you have a backed-up “config” file.

They should. Mine have on CORE, where they live under /root/, but I keep a copy of everything elsewhere.


No, I mean the admin user: “/home/admin”.
The root user is disabled, and everything done in 24.04 via the web UI and shell is done as the admin user.

I’m not sure. I don’t use SCALE, and I do not know the implications of such.

With Core, there’s the “root” user with /root/.

Personally I’d recommend any scripts be stored on the storage pool.


Or some such location. You can reference those as post-init scripts easily enough, and not potentially lose them if your boot device dies. They can be backed up with your pool, etc.


@kris with SCALE, “who” executes these scripts configured under “Startup Tasks”?

It’s not “root” under the hood? Is it the first non-root admin user created post-installation?

No, it is still root under the hood. Middlewared (the service) runs as root by necessity and it would be the thing kicking off those startup tasks.


Admin user is just a user, with admin rights.

Not sure how a conf file in his/her home directory is going to do anything.