Updating from 24.10.1 to 24.10.2 caused breakage

Luckily, I’m still testing TrueNAS SCALE prior to purchasing hardware.

Unluckily, I’ve seen issues twice when upgrading from 24.10.1 to 24.10.2.

The first time, the TrueNAS device was unable to talk to my FreeIPA domain after the upgrade. I had to remove it from the domain and rejoin. Once that happened, KRB security on NFS shares no longer worked. Home lab, so no big deal …

Started over. Thought about upgrading first, but did the domain join on 24.10.1 again to see if I could replicate the issue. Indeed I could! This time, after the upgrade reboot, not only was the domain broken, but my storage pool was offline. I wasn’t shocked, as I had defined this pool via the CLI and then imported it … but as I understand it, this is allowed for complex pool configurations.

Here’s the error:

IPA default_config file is missing. This may indicate that an administrator has enabled the IPA service through unsupported methods. Rejoining the IPA domain may be required…

That is a blatant falsehood; the IPA domain join was performed per these instructions: [ link to TrueNAS docs redacted … seriously?!? :smiley: ]

I’ll be nuking this VM shortly and trying again. This time I’ll apply the upgrade package first and then perform the domain join.

Long story short, I am concerned about promoting this system to be my production (albeit home) NAS if I’m going to have to perform major work every time there’s a patch. Please tell me this is not usual behavior?

Just occurred to me to also try installing 24.04 and see how that goes. I’ll add to this thread shortly.

How did you define the pool? zpool history will show this if you don’t recall the exact command(s) used.
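
For example, assuming the pool is named tank (as it appears to be later in this thread):

zpool history tank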

I’d have to check with the Engineering team where the FreeIPA conf file is stored, but if it’s in the .system dataset, then your pool failing to import would mean that dataset isn’t available, and as such, FreeIPA has no idea what it’s doing any longer.
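
If you want to check before nuking the VM, a quick sketch (assuming the pool is named tank; /var/db/system is where SCALE normally mounts the system dataset):

zfs list -r tank/.system      # does the .system dataset tree exist on the pool?
findmnt /var/db/system        # is the system dataset actually mounted where the middleware expects?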

Quick note - how are you presenting your disks to the TrueNAS VM?

They’re zvols via Proxmox. Yes, I know it’s not recommended. Yes, I know that’s not good for production. No, I’m not going to do that for production …
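
For context, a minimal sketch of how a zvol-backed disk gets attached in Proxmox (the VM ID 100 and storage name local-zfs are placeholders, not from this thread):

qm set 100 --scsi1 local-zfs:32    # allocates a 32G zvol on ZFS-backed storage and attaches it as scsi1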

TL;DR: I just want to prove out that TrueNAS SCALE works in my env … I’m this close to buying hardware from iXsystems as I like to support folks who embrace open source.

Oops, I missed the first question in your reply. I’ve not nuked this VM yet, so I’ll check to see if the history is still there. Thanks!

24.04 is a no-go on Proxmox VE: the install script fails saying lsblk cannot find /dev/sda3, even though it’s there. I’d dig into it more if it were the current GA release, but since 24.10 is, I’ll move my efforts there!

History for ‘tank’:
2025-01-28.19:16:18 zpool create -m /mnt/tank -o ashift=12 -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O compression=lz4 -O normalization=formD -O relatime=off tank raidz /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3
2025-01-28.19:17:34 zfs set atime=off tank
2025-01-28.19:22:18 zfs receive -F tank
2025-01-28.19:23:08 zfs set mountpoint=/mnt/tank tank
2025-01-28.19:27:31 py-libzfs: zpool add tank
2025-01-28.19:36:54 zfs snapshot tank/.system/samba4@update--2025-01-29-03-36--SCALE-24.10.1
2025-01-29.06:34:19 py-libzfs: zpool import 4881533787974295068 tank
2025-01-29.06:34:19 py-libzfs: zfs inherit -r tank/userhome
2025-01-29.06:34:20 py-libzfs: zfs inherit -r tank/.system

I had tested a mirror, then wanted to convert to a raidz vdev … hence the zfs receive from a backup.
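
A guess at the shape of that migration, since the history above only shows the receive side (the source pool and snapshot names here are hypothetical):

zfs snapshot -r backup@migrate                      # recursive snapshot of the old mirror pool
zfs send -R backup@migrate | zfs receive -F tank    # replicate everything into the new raidz pool

Note that a stream generated with send -R carries dataset properties, including mountpoint, which may be what reset it as described below.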

FWIW, the system worked fine after that - I rebooted, played around a bit, and then ran the upgrade package.

I’m not sure why the zfs receive reset the mountpoint to /tank.

In any case, I’ll definitely prefer the GUI for creating and modifying vdevs and pools going forward.

Thanks for the updates. Proxmox specifically can have issues if it gets hold of the disks, because it speaks ZFS and it’s been known to attempt to mount the pools in the host at the same time as the guest - which results in corrupted data as soon as they collide on an LBA.

The GUI sets a whole lot more feature flags, and specifies the mount points that TrueNAS and the Apps/services are expecting. I’m betting on the mount points being a significant factor.

zpool create -o feature@lz4_compress=enabled -o altroot=/mnt -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -o ashift=12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@multi_vdev_crash_dump=enabled -o feature@spacemap_histogram=enabled -o feature@enabled_txg=enabled -o feature@hole_birth=enabled -o feature@extensible_dataset=enabled -o feature@embedded_data=enabled -o feature@bookmarks=enabled -o feature@filesystem_limits=enabled -o feature@large_blocks=enabled -o feature@large_dnode=enabled -o feature@sha512=enabled -o feature@skein=enabled -o feature@userobj_accounting=enabled -o feature@encryption=enabled -o feature@project_quota=enabled -o feature@device_removal=enabled -o feature@obsolete_counts=enabled -o feature@zpool_checkpoint=enabled -o feature@spacemap_v2=enabled -o feature@allocation_classes=enabled -o feature@resilver_defer=enabled -o feature@bookmark_v2=enabled -o feature@redaction_bookmarks=enabled -o feature@redacted_datasets=enabled -o feature@bookmark_written=enabled -o feature@log_spacemap=enabled -o feature@livelist=enabled -o feature@device_rebuild=enabled -o feature@zstd_compress=enabled -o feature@draid=enabled -O atime=off -O compression=lz4 -O aclinherit=passthrough -O mountpoint=/tank -O aclmode=passthrough tank /dev/gptid/d00fa600-d1d5-11ef-b4e6-000c29ed785d
2025-01-13.09:42:52  zfs inherit tank
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off tank/.system
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G tank/.system/cores
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off tank/.system/samba4
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off tank/.system/syslog-388408e95929461db941ccc50979f8fb
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off tank/.system/rrd-388408e95929461db941ccc50979f8fb
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off tank/.system/configs-388408e95929461db941ccc50979f8fb
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off tank/.system/webui
2025-01-13.09:42:52  zfs create -o mountpoint=legacy -o readonly=off tank/.system/services
2025-01-13.09:42:55  zfs set acltype=off tank/.system

Can I ask what “complex configuration” isn’t possible through the webUI?

I understand the hesitation with Proxmox … I do. However, from the Proxmox PoV they’re zvols. Proxmox doesn’t try to do anything with them other than present them to the guest. I’m running a bunch of VMs with ZFS filesystems and I’ve not seen any issues at all.

Regarding “complex configurations” … I was reading older forum posts and someone smarter than me made that assertion. Probably outdated info … and as I said I will use the GUI going forward since it will meet my home lab needs.

I will say that while I understand the focus on the ZFS issues, it’s not my major concern. The whole point of the VM was to establish whether TrueNAS SCALE will work with FreeIPA (it does), whether that configuration will survive an upgrade (it didn’t), and to play with the API via Terraform (in progress).

If experts want to assert that the VM won’t upgrade correctly due to some underlying ZFS issue, then I’ll repurpose some hardware lying around to test!

I’m not here to crap on TrueNAS … I want this to work because I would prefer to spend my lab time on things that interest me and have utility for my job. Setting up a NAS by hand is not one of those things :slight_smile:

I do very much appreciate the feedback and help thus far. Many thanks @HoneyBadger !!

ZVOLs or QCOW virtual disks in Proxmox are safer from that perspective - although not recommended for different reasons. It’s when people do the whole-disk passthrough of local drives that we tend to see the “oops, it’s mounted twice” issue.

I’m confident that if you try again - VM or bare-metal - with a pool that’s created 100% from the GUI and has the dataset mount points configured as the middleware expects, then the FreeIPA config should persist correctly through upgrades. :slight_smile:

No offense taken, I didn’t take it as a slight against the product - just trying to figure out if there was some old information out there that needed updating! We’ve tried hard to make a wide variety of configurations possible without the headache of manual pool creation.

Let me ask you this: if I do a brand new install of 24.10.1, join the FreeIPA domain, and then upgrade - all without even touching the vdev and pool configs … I should reboot into a working system, correct?

Should help with any possible Proxmox virtualization issues.

Virtualize TrueNAS

Thank you for this. Unfortunately (fortunately?) my issues are not with virtualization.

My issues are:

  • the upgrade from 24.10.1 to 24.10.2 broke IPA integration, twice
  • Kerberos security doesn’t work at all (to be fair, I only mentioned that in passing)
  • my CLI-defined zpool was exported after my second attempt at an upgrade (lesson learned: don’t define zpools via the CLI!)

Ok, good news!

  • VM build 3: applied the 24.10.2 update prior to the domain join.
  • Joined the IPA domain using the TrueNAS documented process.
  • Created the zpool and datasets (I’m trying to set up Kerberos-protected home directories; IPA will tell clients where to find them via automount maps)
  • Shared individual userhomes via NFS
    • eventually I want to do this via the API, so that as new users are created their dataset is created and shared (see the API sketch after this list)
  • Sharing via NFS (no auth) works from both Debian and RHEL clients
  • Sharing via NFS + krb5/krb5i/krb5p works! From both Debian and RHEL clients (example mount below). I could never get this to work previously, as RHEL clients refused to mount with sec=krb5{,i,p}, both with TrueNAS and plain old Linux NFS servers
    • this did not work on the previous two attempts. I blame the user.
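
For anyone following along, the client-side test looks roughly like this (the hostname and user path are placeholders):

# client needs valid Kerberos credentials (host keytab and/or a user ticket)
mount -t nfs4 -o sec=krb5p nas.example.com:/mnt/tank/userhome/alice /home/alice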
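
And a rough sketch of the per-user automation mentioned above, via the REST API (the host, dataset name, and exact field names are my assumptions from the v2.0 API docs; verify against your release):

# create a dataset for the new user
curl -s -X POST "https://nas.example.com/api/v2.0/pool/dataset" \
  -H "Authorization: Bearer $TRUENAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "tank/userhome/alice"}'

# share it over NFS with Kerberos-only security
curl -s -X POST "https://nas.example.com/api/v2.0/sharing/nfs" \
  -H "Authorization: Bearer $TRUENAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"path": "/mnt/tank/userhome/alice", "security": ["KRB5P"]}'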

For me, this is fantastic … I’m now ready to pursue API integration work while I wait for my soon-to-be-ordered hardware to arrive.

Thanks to all who provided feedback!

Happy to hear it, @IvoryNomad!!

API integration work should become even easier in upcoming versions of TrueNAS, as we’re adding a versioned API system; that way, all of your hard work will keep functioning in a defined manner as we make our own improvements on the back end.
