Luckily, I’m still testing TrueNAS SCALE prior to purchasing hardware.
Unluckily, I’ve seen issues twice when upgrading from 24.10.1 to 24.10.2.
The first time, the TrueNAS device was unable to talk to my FreeIPA domain after the upgrade. I had to remove it from the domain and rejoin. Once that happened, Kerberos security on NFS shares no longer worked. Home lab, so no big deal …
Started over. I thought about upgrading first but did the domain join on 24.10.1 again to see if I could replicate the issue. Indeed I could! This time, after the upgrade reboot, not only was the domain join broken, but my storage pool was offline as well. I wasn’t shocked, as I had defined this pool via the CLI and then imported it … but as I understand it, that is allowed for complex pool configurations.
Here’s the error:
IPA default_config file is missing. This may indicate that an administrator has enabled the IPA service through unsupported methods. Rejoining the IPA domain may be required…
That is a blatant falsehood; the IPA domain join was performed per these instructions: [ link to TrueNAS docs redacted … seriously?!? ]
I’ll be nuking this VM shortly and trying again. This time I’ll apply the upgrade package first and then perform the domain join.
Long story short, I am concerned about promoting this system to be my production (albeit home) NAS if I’m going to have to perform major work every time there’s a patch. Please tell me this is not usual behavior?
How did you define the pool? zpool history will show this if you don’t recall the exact command(s) used.
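For example, something along these lines from a shell session will dump everything that’s been done to the pool (substitute your actual pool name for the placeholder tank):

```
# Show every command ever run against the pool, oldest first.
# "tank" is a placeholder - use your real pool name.
zpool history tank

# Add -i and -l for internally logged events plus extra detail (user, host, timestamp):
zpool history -il tank
```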
I’d have to check with the Engineering team on where the FreeIPA config file is stored, but if it’s in the .system dataset, then your pool failing to import would mean that dataset isn’t available - and in that case FreeIPA has no idea what it’s doing any longer.
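If you do manage to get the pool imported again, a quick sanity check along these lines (tank is again a placeholder) would show whether the system dataset is present and mounted:

```
# List the pool's datasets and look for the hidden .system dataset and its children.
zfs list -r -o name,mountpoint tank | grep -i '\.system'
```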
Quick note - how are you presenting your disks to the TrueNAS VM?
They are zvols via Proxmox. Yes, I know it’s not recommended. Yes, I know that’s not good for production. No, I’m not going to do that for production …
TL;DR: I just want to prove out that TrueNAS SCALE works in my environment … I’m this close to buying hardware from iXsystems as I like to support folks who embrace open source.
24.04 is a no-go on Proxmox VE - the install script fails saying lsblk cannot find /dev/sda3, even though it’s there. I’d dig into it more if it were GA, but since 24.10 is, I’ll move my efforts there!
Thanks for the updates. Proxmox specifically can have issues if it gets hold of the disks, because it speaks ZFS and it’s been known to attempt to mount the pools in the host at the same time as the guest - which results in corrupted data as soon as they collide on an LBA.
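If you ever want to rule that out on your setup, a couple of harmless checks from the Proxmox host side (not inside the TrueNAS VM) will show whether the host can see or has grabbed the guest’s pool:

```
# Run these on the Proxmox host, NOT inside the TrueNAS VM.

# Pools the host has actually imported - the guest's data pool should not appear here.
zpool list

# Pools the host can see but has not imported - this only scans, it doesn't import anything.
zpool import
```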
The GUI enables and sets a whole lot more feature flags, and also specifies the mount points that TrueNAS and the Apps/services are expecting. I’m betting on the mount points being a significant factor.
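One way to see the difference is to dump the properties from a GUI-built pool next to a hand-built one and compare - roughly like this, with tank as a placeholder pool name:

```
# Compare the feature flags enabled on each pool...
zpool get all tank | grep 'feature@'

# ...and the dataset-level properties the middleware cares about, mount point in particular.
zfs get mountpoint,aclmode,acltype,xattr tank
```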
I understand the hesitation with Proxmox … I do. However, from the Proxmox PoV they’re zvols. Proxmox doesn’t try to do anything with them other than to present them to the guest. I’m running a bunch of VMs with ZFS filesystems and I’ve not seen any issues, at all.
Regarding “complex configurations” … I was reading older forum posts and someone smarter than me made that assertion. Probably outdated info … and as I said I will use the GUI going forward since it will meet my home lab needs.
I will say that while I understand the focus on the ZFS issues, it’s not my major concern. The whole point of the VM was to establish whether TrueNAS SCALE will work with FreeIPA (it does), whether that configuration will survive an upgrade (it didn’t), and to play with the API via Terraform (in progress).
If experts want to assert that the VM won’t upgrade correctly due to some underlying ZFS issues, then I’ll repurpose some hardware I have lying around to test!
I’m not here to crap on TrueNAS … I want this to work because I would prefer to spend my lab time on things that interest me and have utility for my job. Setting up a NAS by hand is not one of those things.
I do very much appreciate the feedback and help thus far. Many thanks @HoneyBadger !!
ZVOLs or QCOW virtual disks in Proxmox are safer from that perspective - although not recommended for different reasons. It’s when people do the whole-disk passthrough of local drives that we tend to see the “oops, it’s mounted twice” issue.
I’m confident that if you try again - VM or bare-metal - with a pool that’s created 100% from the GUI and has the dataset mount points configured as the middleware expects, then the FreeIPA config should persist correctly through upgrades.
No offense taken, I didn’t take it as a slight against the product - just trying to figure out if there was some old information out there that needed updating! We’ve tried hard to make a wide variety of configurations possible without the headache of manual pool creation.
Let me ask you this: if I do a brand new install of 24.10.1, join the FreeIPA domain, and then upgrade - all without even touching the vdev and pool configs … I should reboot into a working system, correct?
VM build 3: applied the 24.10.2 update prior to the domain join.
Joined the IPA domain using the TrueNAS-documented process.
Created the zpool and datasets (I’m trying to set up Kerberos-protected home directories; IPA will tell clients where to find them via automount maps - see the rough example below this list)
Shared individual userhomes via NFS
Eventually I want to do this via the API: as new users are created, their dataset gets created and shared (rough API sketch below this list)
Sharing via NFS (no auth) works from both Debian and RHEL clients
Sharing via NFS + krb5/krb5i/krb5p works, from both Debian and RHEL clients! I could never get this to work previously, as RHEL clients refused to mount with sec=krb5{,i,p}, both against TrueNAS and against plain old Linux NFS servers (a manual mount check is part of the example below)
This did not work on the previous two attempts. I blame the user.
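In case it’s useful to anyone else, the IPA side of the automount piece looks roughly like this in my lab - hostnames, map names, and paths below are my own placeholders, nothing TrueNAS-specific:

```
# On the IPA server (or any enrolled admin host):
# create an indirect map for /home and hook it into auto.master in one step
ipa automountmap-add-indirect default auto.home --mount=/home

# wildcard key: each user gets <server>:/mnt/tank/homes/<user> mounted with krb5
ipa automountkey-add default auto.home \
    --key='*' --info='-fstype=nfs4,sec=krb5 truenas.example.com:/mnt/tank/homes/&'

# quick manual check from a client, bypassing automount entirely:
sudo mount -t nfs4 -o sec=krb5 truenas.example.com:/mnt/tank/homes/testuser /mnt/test
```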
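And the API piece I mentioned above, as a very rough first sketch against the v2.0 REST API - hostname, pool path, username, and API key are all placeholders, and I still need to verify the exact payloads:

```
# Create a per-user dataset and an NFS export for it via the TrueNAS SCALE REST API.
NAS="https://truenas.example.com"
KEY="1-xxxxxxxx"     # API key generated in the TrueNAS UI
NEWUSER="alice"

# 1. create the dataset for the new user
curl -sk -X POST "$NAS/api/v2.0/pool/dataset" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d "{\"name\": \"tank/homes/$NEWUSER\"}"

# 2. export it over NFS
curl -sk -X POST "$NAS/api/v2.0/sharing/nfs" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d "{\"path\": \"/mnt/tank/homes/$NEWUSER\"}"
```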
For me, this is fantastic … I’m now ready to pursue the API integration work while I wait for my soon-to-be-ordered hardware to arrive.
API integration work should become even easier in upcoming versions of TrueNAS as we’re adding a versioned API system - that way all of your hard work will be sure to continue functioning in a defined manner as we make our own improvements on the back end.