I could modify the group memberships in the 24.10 environment; I removed smb from user emby and enabled mdns. I will try to upgrade again from that environment.
EDIT:
So it worked: the shares now mount under 25.04 after cleaning up in 24.10.2.1 and re-migrating.
I still get two errors, but the rest have cleaned up:
(WARNING) SystemDatasetService.sysdataset_path():113 - /var/db/system: mountpoint not found
[2025/05/01 00:09:51] (ERROR) PoolService.recursive_mount():221 - Failed to mount datasets for pool: 'data_volume_backup' with error: 'cannot mount '/mnt/data_volume_backup/nas_home/shared': failed to create mountpoint: Read-only file system\n'
root@truenas[~]# ls -ld /var/db/system
drwxr-xr-x 16 root root 18 Nov 5 09:36 /var/db/system
root@truenas[~]# mount | grep /var/db/system
data_volume/.system on /var/db/system type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/cores on /var/db/system/cores type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/nfs on /var/db/system/nfs type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/samba4 on /var/db/system/samba4 type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/configs-646f8dae97d646cc8946ddeb0ca79d97 on /var/db/system/configs-646f8dae97d646cc8946ddeb0ca79d97 type zfs (rw,relatime,xattr,noacl,casesensitive)
data_volume/.system/netdata-646f8dae97d646cc8946ddeb0ca79d97 on /var/db/system/netdata type zfs (rw,relatime,xattr,noacl,casesensitive)
Maybe it is a timing issue? I set the system dataset to the pool with the data_volume.
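If anyone else hits the same warning, one way to cross-check is to ask the middleware where it thinks the system dataset lives and compare that against what is actually mounted. This is just a sketch (midclt ships with SCALE; the pool and dataset names below are from my box, so adjust to yours):

root@truenas[~]# midclt call systemdataset.config
root@truenas[~]# zfs list -o name,mounted,mountpoint -r data_volume/.system
root@truenas[~]# findmnt /var/db/system

If the middleware config and the actual mounts disagree only early in boot, that would be consistent with a timing issue where the check in sysdataset_path() runs before the pool has finished importing.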
Your official documentation used to say that an admin user should be added to builtin_users and builtin_administrators in the section "Creating an Admin User Account":
Looks like that part was changed in 24.10 and the only reference to builtin is:
Click Save. The system adds the user to the builtin-users group after clicking Save.
That's different from adding the user to groups like "sssd" or "systemd-journald". The OS has quite a few internal system groups that shouldn't be mucked around with.
My "boot pool status" still says freenas-boot, so it's pointless to wonder about the hows or whys. My system goes back through years of many upgrades, including cross-system upgrades.
The shares are working now, having fixed the group memberships before the migration. (I have never consciously put the user in those groups, so I can't explain how or when it may have occurred.)
My concerns now are:
why I am getting the warnings that the system dataset can't be mounted (but logs, etc. all work fine)
why I am getting the read-only error mounting the backup dataset, which is only used for replication from the main dataset (see the check below)
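On the second point: replication targets often end up with readonly=on, and ZFS cannot create a missing child mountpoint underneath a dataset that is mounted read-only, which matches the "failed to create mountpoint: Read-only file system" error above. A quick check (sketch only; dataset names are from my pool):

root@truenas[~]# zfs get -r -o name,value readonly data_volume_backup
root@truenas[~]# zfs get -o name,value mounted,mountpoint data_volume_backup/nas_home/shared

If the parent shows readonly=on, that would explain why the child mountpoint cannot be created at import time.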
For clarification, for other people looking at updating: if you have users belonging to built-in groups other than these, you may run into an issue in the future:
It would seem like a reasonable thing for the system to alert the user about on upgrade, particularly as the list of allowed built-in groups is evidently known to the system (and therefore, consequently, those which are not). A rough way to audit this yourself is sketched below.
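For anyone wanting to check before upgrading, a one-liner that prints the group memberships of every local account (a sketch only; the UID >= 1000 cutoff for "local user" is my assumption and may not match every setup):

root@truenas[~]# awk -F: '$3 >= 1000 && $3 < 65534 {print $1}' /etc/passwd | xargs -n1 id

Anything in the id output that looks like an internal system group (sssd, systemd-journal, and the like) is worth cleaning up first.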
We periodically tighten up validation. Honestly, this is correcting some overly lax validation and was not expected to cause an issue. Making local users members of system accounts (for the affected user it was the systemd-journald group) is a very bizarre administrative choice.
It was certainly lucky that boot environment switching was introduced… reasonably recently, from memory. But I don't recall ever having changed those group memberships myself; there would be no reason to.
That's actually a reason to run TrueNAS virtualised. Assuming regular snapshots and backups, it's then super easy to roll back the whole VM whenever an upgrade goes wrong, with no need to rely on boot environments etc.
If ten years ago is "reasonably recently." FreeNAS/TrueNAS has had this feature since the boot device became a ZFS pool, which started in 2014 with the release of 9.3. Even before that, there was a "Slot A/Slot B" boot arrangement where you could boot into the last release.
Memory can be a fickle thing.
Selectable boot environments have been around for many years.
To avoid falling into the memory trap myself, I went back to check the documentation and found that boot environments were present in every SCALE version and at least as far back as CORE 12.0; I didn't check further back.
The only way systemd-journal could have been added is if you did it yourself, perhaps by mistake. It happens.
It is all relative… when you're someone who started with CP/M on Z80 equipment and MP/M on Digital Research machines. P.S. My system was FreeNAS before the boot device was ZFS. It migrated through:
→ ZFS boot device
→ Core → Scale
→ 25.04
Never been rebuilt or reinstalled. Can't complain after all these years, however the user ID got into those groups. (But I do note that root was missing from the built-in administrators group, so I had to add it first and then remove my ID.)
Good practice would be to make a configuration backup before you upgrade. If it goes bad, you can reinstall the previous version from its ISO and restore the configuration backup. I have done this on a few occasions - initial testing, changing from redundant SSDs to a single M.2 NVMe, etc. Worked flawlessly and never compromised my pools.
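To add to that: the supported route is the config download in the UI (System Settings → General → Manage Configuration on recent SCALE, if memory serves; tick the option to export the password secret seed so stored credentials survive a restore). As a rough sketch for scripting it, the underlying config database can also be copied off-box; the source path is correct on my system, and the destination path is just a placeholder:

root@truenas[~]# cp /data/freenas-v1.db /mnt/data_volume/backups/truenas-config-$(date +%Y%m%d).db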
An appliance of this nature is better run bare metal than virtualised, in my humble opinion. There's no need to consider device passthrough and the other complexities that a VM arrangement introduces.
Yeah, pros and cons. I ran TrueNAS bare metal for many years but have over time virtualised almost everything, incl. TrueNAS. For me the benefits outweigh the drawbacks (e.g. increased complexity in some parts). YMMV.