TrueNAS 25.04.0 now available!

NAS-135458 is the bug report for that particular issue. Thank you. :slight_smile:

120 HDDs in a single system? Sounds like a beast of a machine. Can you feed my curiosity with some pictures? :yum:

VMs have not been supported in our Enterprise version with HA.
That applies to both 13.x and Electric Eel… Community editions only.
The transition to Incus in Fangtooth is step 1 of that Enterprise HA plan.
We are working on Goldeye being step 2.

VMs/Instances are marked as “experimental” because we understood software upgrades would not be smooth for VMs. However, we would like to understand where the complications are and then improve the process and documentation.


Migration from Eel on two boxes (just personal/homelab, nothing fancy) went pretty well overall. I use TrueNAS “custom apps” to bootstrap Komodo, which manages the rest of my Docker containers itself, and everything came back up perfectly after upgrading.

Additionally, the Docker setup now has IPv6 support enabled, which is awesome. I had to recreate existing Docker networks (and down/up existing Compose stacks) to get them to pick up IPv6 subnets, but that’s standard Docker behavior, and I’m happy TrueNAS isn’t trying to do anything fancy on top of it. :slight_smile:
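
For anyone else doing this, it was basically a down/up per stack for Compose-managed networks. A rough sketch (the path and project name are placeholders):

cd /mnt/tank/compose/mystack
docker compose down            # removes the stack's containers and its default network
docker compose up -d           # recreates the network, which can now pick up an IPv6 subnet
docker network inspect mystack_default | grep -i subnet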

VMs were a little more painful. I don’t have complex virtualization needs, so even the basic experimental Incus integration is good enough for me. I followed the instructions to migrate over existing zvols. It worked out okay, but I hit some gotchas (most of which may just call for better documentation):

  • (Documentation) Not sure why, but my Debian VM initially failed to boot. I had to manually boot Debian (FS0:\EFI\debian\grubx64.efi in the UEFI shell) and then reinstall the bootloader (sudo grub-install && sudo update-grub). After that, it booted up just fine. (See the recovery sketch after this list.)

  • (Documentation) The new virtual network interface had a different device path in the guest, so I had to update /etc/network/interfaces via console over VNC to restore network access. (For future folks upgrading, installing incus-agent before moving to Fangtooth might make sense, to enable the TrueNAS Web shell to work. But I didn’t know about that, so I had to make do with VNC. The interface fix is also covered in the sketch after this list.)

  • (Documentation) The documentation on a custom bridge network vs. Incus’s own bridge network is kind of confusing. I already had a bridge network (br0) created in TrueNAS that I used on Eel, but I noticed Incus still made its own bridge (incusbr0) anyway. AFAICT, that one seems to NAT the guests behind a new private IP range, which isn’t what I wanted, so I kept my own br0 network… but I had to discover this experimentally.

  • (Bug or feature request, depends on perspective) The UX around setting up Incus networking in the “Instances” screens is… also really confusing, if I’m being honest. The menus are not intuitive, the device names shown (e.g., “eth0 (BRIDGED)”) don’t correspond to either the host or the guest, and it’s not clear how the per-instance settings relate to the global ones. Please just show the actual host device names (e.g., br0, incusbr0, etc.) explicitly in the UI rather than making folks guess at them. :slight_smile:

  • (Feature request) There doesn’t seem to be any way to set the MAC address of a container’s network adapter, nor even to see it. For me, that was a mild annoyance (having to update DHCP reservations), but I suspect not being able to change the MAC will be a blocker for others.

  • (Documentation) Small thing, but it’d be nice to provide some guidance on disk type selection. I went with Virtio-SCSI since that’s what I used before in KVM, but since my Instances pool is backed by NVMe drives, would it be better to expose the zvols to guests as NVMe? No idea!

  • (Investigation) I need to test this more thoroughly, but I think passing through ā€œdiscardā€ from the guest to the zvol might actually work now. At least fstrim -av in the guest seems to do something, whereas it didn’t in Eel. I’m really hopeful this means that periodically running fstrim (in guest) and zpool trim (on the host) will ensure files deleted inside the VM eventually get TRIMed on the physical drives.
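
For anyone hitting the first two gotchas, here’s roughly what the recovery looked like from inside the Debian guest. Treat it as a sketch: the interface name is just an example, and your paths may differ.

# In the UEFI shell, boot the existing install manually:
#   FS0:\EFI\debian\grubx64.efi
# Then, from inside the booted guest, reinstall the bootloader:
sudo grub-install
sudo update-grub

# Fix the renamed network interface (check what `ip a` reports first):
ip a
sudo nano /etc/network/interfaces    # replace the old device name with the new one
sudo systemctl restart networking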
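
And for the TRIM point in the last item, this is roughly how I’ve been checking whether discard propagates; the pool and zvol names are placeholders.

# Inside the guest: trim mounted filesystems and report how much was discarded
sudo fstrim -av

# On the TrueNAS host: watch the zvol's space usage, then trim the pool itself
zfs get used,referenced tank/vms/debian-disk
sudo zpool trim tank
zpool status -t tank    # shows per-vdev TRIM state and progress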

Once I worked through all that, though, IPv4 and IPv6 inside the guest seem to work fine. TrueNAS, the VM, and Docker containers can all talk to one another without issue.

P.S. Thanks so much for swapping out SPICE for VNC. The browser-based SPICE client was super frustrating, and now I can just use any one of many desktop VNC clients that work way better. :smiley:


Unfortunately, I rolled back to 24.10.2.1 because I couldn’t easily migrate my existing virtual systems into the new subsystem. It insists on having an image to boot from; I thought I could also specify that as my root filesystem, but it doesn’t like this, and then it tries to boot from the empty volume. I’m sure I could force it to work, but it still feels a little too kludgy at the moment.

These VMs weren’t critical; they’re just secondary Kubernetes nodes for some hardware redundancy. I was thinking it would be nice to have them off the main Proxmox host so something stays up during maintenance. It’s just a home lab - I guess it isn’t that important. :smiley:

After reactivating the 24.10.2.1 environment, I was able to zfs rename the zvols to their original location and reboot the VMs. I’ll see about migrating them over to Proxmox at some point and then re-attempt the upgrade.
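
(For reference, the rollback itself was just a rename per VM. The dataset paths below are placeholders, not the actual locations the migration uses.)

zfs list -t volume -r tank                                   # locate where the migration moved the zvol
zfs rename tank/new-location/k8s-node2 tank/vms/k8s-node2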

One question:

  • Direct I/O is not enabled even though I set direct=always via the CLI. I ran several experiments, and no Direct I/O traffic was observed.
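
In case it helps reproduce this, here’s the gist of what I did, assuming the OpenZFS 2.3 direct dataset property; the dataset name is a placeholder.

zfs set direct=always tank/vm-disks
zfs get direct tank/vm-disks    # confirms the property value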

Is this a bug or intended? Should I file an issue on Jira for this?
If this is intended, can I work around it somehow?
I know I can set the Web UI to bind only to the external IP, but I want the Web UI available on localhost for a local, non-internet-facing reverse proxy.

Yeah, I’m not seeing that. I also don’t have Apps enabled and am using Incus (aka Instances). Those are all your Docker networks.

netstat -lnpt | grep -E '80|443'
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      3592/nginx: master  
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      3592/nginx: master  
tcp6       0      0 :::443                  :::*                    LISTEN      3592/nginx: master  
tcp6       0      0 :::80                   :::*                    LISTEN      3592/nginx: master

You can go into System > General and adjust the Web Interface settings to listen only on specific interfaces.

My bet is this is a fail-safe setting out of the box, until you get your TNCE box set up and configured; then you would scope it down to the management interface you want to use for the Web UI.
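
If you’d rather script it than click through the UI, something along these lines should work from a shell. I’m assuming the system.general API still exposes ui_address, and the address below is a placeholder.

midclt call system.general.config                                      # shows the current ui_address list
midclt call system.general.update '{"ui_address": ["192.168.1.10"]}'
midclt call system.general.ui_restart                                  # restart the Web UI to apply, if needed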

It would have been nice if you could add manual entries to that Web Interface field instead of choosing between listening on everything or just the host’s IP.
Nevertheless, there should at least be some warning; it’s easy to overlook this security issue when setting up a public-facing reverse proxy.

Thanks for the very constructive review… much appreciated.

To action them, we need to distinguish between:

Bugfixes - Report a bug
Improvements/features - Feature Requests
Documentation issues - documentation feedback.

Could you suggest where each issue belongs… perhaps just add some text to the original post.

Many thanks


DirectIO was disabled for Fangtooth because it conflicted with Nvidia drivers. It is planned to be fixed in Goldeye.

This is a complex topic. I’d suggest you start a thread in General so we can discuss the best methods there. Please add a link to that thread in your post.

At least scope it to system interfaces and ignore Docker interfaces by default.


Wouldn’t not listening on Docker interfaces prevent reverse proxies running in Docker from proxying TrueNAS? (That’s how I get HTTPS for my TrueNAS servers, since I’ve got the reverse proxies running anyway, and it’s easier to have all my cert acquisition in one place.)

I’m referencing the Web Interface only listening on system interfaces, not Docker, Incus, etc.

Yes, sorry. What I’m saying is: if the TrueNAS Web UI didn’t listen on Docker interfaces, wouldn’t that cause issues for Docker reverse proxies that need to reach the Web UI?

For example, I run a Caddy reverse proxy on ports 80 and 443. I moved the TrueNAS Web UI “out of the way” to ports 8080 and 8443, and I configured Caddy to proxy my TrueNAS subdomain to host.docker.internal:8080. But, unless I’m just misunderstanding you, that wouldn’t work with the setup you’re proposing, since TrueNAS would no longer be listening on the relevant Docker interface.
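
For context, here’s roughly what that looks like on my end. The domain and paths are examples, and on Linux you need the host-gateway mapping for host.docker.internal to resolve.

# Caddyfile: proxy the TrueNAS subdomain to the relocated Web UI port on the host
cat > Caddyfile <<'EOF'
truenas.example.com {
    reverse_proxy host.docker.internal:8080
}
EOF

docker run -d --name caddy \
  --add-host=host.docker.internal:host-gateway \
  -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  -v caddy_data:/data \
  caddy:latest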

Sure! I was hoping it came across more constructive than complaining; glad it did. :slight_smile:

I edited my original post to try and categorize things as you suggested. (I don’t think anything I found is actually a bug, other than maybe the UI for editing instance NICs being hard to understand. If others are confused too, maybe that reaches the “bug” threshold.)

I think if you set up a bridge interface, you shouldn’t have issues.

This problem still exists in 25.04, and the hard disk temperature indicator is not displayed on all pages.


I note that under 25.04, I’m finally seeing more than half of my RAM used for ARC, something I never saw under 24.04 or 24.10.
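
If anyone wants to compare, a quick way to eyeball ARC size against total RAM from a shell:

arc_summary | head -n 25    # ARC size and target max are reported near the top
free -h                     # total system RAM for comparison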
