TrueNAS 25.04.0 now available!

I was able to modify the group memberships in the 24.10 environment: I removed SMB from user emby and enabled mDNS. I will try the upgrade again from that environment.

EDIT:
So it worked: the shares now mount under 25.04 after cleaning up in 24.10.2.1 and re-migrating.

I get two errors now, but the rest have cleared up:

(WARNING) SystemDatasetService.sysdataset_path():113 - /var/db/system: mountpoint not found

[2025/05/01 00:09:51] (ERROR) PoolService.recursive_mount():221 - Failed to mount datasets for pool: 'data_volume_backup' with error: "cannot mount '/mnt/data_volume_backup/nas_home/shared': failed to create mountpoint: Read-only file system\n"

In relation to the first error, I get the below:

root@truenas[~]# ls -ld /var/db/system
drwxr-xr-x 16 root root 18 Nov  5 09:36 /var/db/system
root@truenas[~]# mount | grep /var/db/system
data_volume/.system on /var/db/system type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/cores on /var/db/system/cores type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/nfs on /var/db/system/nfs type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/samba4 on /var/db/system/samba4 type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/configs-646f8dae97d646cc8946ddeb0ca79d97 on /var/db/system/configs-646f8dae97d646cc8946ddeb0ca79d97 type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/netdata-646f8dae97d646cc8946ddeb0ca79d97 on /var/db/system/netdata type zfs (rw,relatime,xattr,noacl,casesensitive)

Maybe it is a timing issue? I set the system dataset to the pool data_volume.
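If it is an ordering issue, one way to sanity-check is to compare what the middleware thinks the system dataset location is against what is actually mounted. A rough sketch (it assumes the midclt client and jq are present in the shell, which they are on my install):

# Show the configured system dataset pool and path (midclt is the
# TrueNAS middleware CLI; jq just pretty-prints the JSON).
midclt call systemdataset.config | jq .

# Confirm every child of .system actually reports mounted=yes.
zfs get -r -o name,value mounted data_volume/.system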

Your official documentation used to say that an admin user should be added to builtin_users and builtin_administrators in the section “Creating an Admin User Account”.

Looks like that part was changed in 24.10 and the only reference to builtin is:

Click Save. The system adds the user to the builtin-users group after clicking Save.

Is that the correct group name?
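For what it’s worth, a quick way to check the actual group name on a running system is to query the groups database directly (a sketch using standard NSS tooling; the grep pattern is just a guess at the naming):

# List any local groups whose name contains "builtin".
getent group | grep -i builtin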

That’s different from adding the user to groups like “sssd” or “systemd-journald”. The OS has quite a few internal system groups that shouldn’t be mucked around with.

My “boot pool status” still says freenas-boot, so it is pointless to wonder about the hows or whys. My system goes back years through many upgrades, including cross-system upgrades.

The shares are working now, having fixed the group memberships before the migration. (I have never consciously put the user in those groups, so I can’t explain how or when it may have occurred.)

My concerns now are:

  1. Why I am getting warnings that the system dataset can’t be mounted (but logs, etc. all work fine).

  2. Why I am getting the read-only error mounting the backup dataset, which is only used as a replication target from the main dataset (see the sketch below).
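On the second point, my working theory (unconfirmed) is that the replication target was left with readonly=on, as replication tasks commonly do, so ZFS cannot create the child mountpoint directories. A sketch of how I plan to check:

# Check whether the backup pool's datasets are read-only; a readonly=on
# parent will refuse to create child mountpoint directories, which would
# match the "failed to create mountpoint" error above.
zfs get -r -o name,value,source readonly data_volume_backup

# If the parent of the failing dataset is readonly=on, clearing it is one
# option (weigh this against keeping the replication target immutable):
# zfs set readonly=off data_volume_backup/nas_home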

Thank you.

For clarification, for other people looking at updating: if you have users belonging to built-in groups other than these, you may run into an issue in the future:

It would seem like a reasonable thing for the system to alert the user about on upgrade, particularly as the list of allowed built-in groups is evidently known to the system (and consequently, so is the list of those which are not).
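In the meantime, anyone who wants to audit this before upgrading can eyeball each local user’s supplementary groups from the shell. A rough sketch (the UID range is an assumption; adjust for your system):

# Print each local user's supplementary groups so stray memberships in
# system groups stand out before the upgrade.
for u in $(getent passwd | awk -F: '$3 >= 1000 && $3 < 60000 {print $1}'); do
  echo "$u: $(id -nG "$u")"
done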

Oh, and maybe warn about empty SMB passwords too.

95% of all users will not be reading this thread.

There are a lot of things like that. iX apparently figures that putting it in the release notes is enough.


I can’t find a reference to this in the release notes. Did you?

We periodically tighten up validation. Honestly, this is correcting some overly lax validation and was not expected to cause an issue. Making local users members of system accounts (for the affected user it was the systemd-journald group) is a very bizarre administrative choice.

I understand; preventing special groups from being added does seem sensible.

My issue is that, as far as I can tell, it wasn’t communicated, and it leaves users who managed to do something silly in the past in a bind.

After looking at the code, I now at least know I’m not in danger, since I only added builtin_users, builtin_administrators, and apps, which are all OK.

It was certainly lucky that boot environment switching was introduced… reasonably recently, from memory. But I don’t recall ever having changed those group memberships myself; there would be no reason to.

That’s actually a reason to run TrueNAS virtualised. Assuming regular snapshots and backups, it’s then super easy to roll back the whole VM whenever an upgrade goes wrong, with no need to rely on boot environments, etc.

If ten years ago is “reasonably recently.” FreeNAS/TrueNAS has had this feature since the boot device became a ZFS pool, which started in 2014 with the release of 9.3. Even before that, there was a “Slot A/Slot B” boot arrangement where you could boot into the last release.


Memory can be a fickle thing.
Selectable boot environments have been around for many years.

To avoid falling into the memory trap myself, I went back to check the documentation and found that boot environments were present in every SCALE version and at least as far back as CORE 12.0; I didn’t check further back.

The only way systemd-journal could have been added is if you did it yourself, perhaps by mistake. It happens.

It is all relative… when you’re someone who started with CP/M on Z80 equipment and MP/M on Digital Research machines :wink:

P.S. My system was FreeNAS before the boot device was ZFS. It migrated to:
→ ZFS boot device
→ CORE → SCALE
→ 25.04

Never been rebuilt or reinstalled. Can’t complain after all these years, however the user ID got into those groups. (But I do note that root was missing from the builtin_administrators group, so I had to add it first and then remove my ID.)

You did not have to do that; builtin_administrators was not the issue, only the systemd-journal group membership.

I disagree.

Good practice would be to make a configuration backup before you upgrade. If it goes bad, you can reinstall the previous version from its ISO and restore the configuration backup. I have done this on a few occasions: initial testing, changing from redundant SSDs to a single M.2 NVMe, etc. It worked flawlessly and never compromised my pools.
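For anyone who wants to script that, a minimal sketch: on the SCALE installs I have seen, the configuration database lives at /data/freenas-v1.db (the GUI export under System Settings → General is the supported route and can also include the secret seed; the destination path below is just an example):

# Copy the config database somewhere safe before upgrading.
# /data/freenas-v1.db is the SCALE config DB; the destination is an
# example path on a data pool, adjust to taste.
cp /data/freenas-v1.db "/mnt/data_volume/truenas-config-$(date +%Y%m%d).db"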

An appliance of this nature is better run bare metal than virtualised, in my humble opinion. No need to consider device passthrough and the other complexities that a VM arrangement introduces.


Yeah, pros and cons. I ran TrueNAS bare metal for many years but have over time virtualised almost everything, including TrueNAS. For me the benefits outweigh the drawbacks (e.g. increased complexity in some parts). YMMV.
