NAS-135458 is the bug report for that particular issue. Thank you.
120 HDDs in a single system? Sounds like a beast of a machine. Can you feed my curiosity with some pictures?
VMs have not been supported in our Enterprise version with HA.
That applies to both 13.x and Electric Eel… Community editions only.
The transition to Incus is step 1 of that Enterprise HA plan in Fangtooth.
We are working on Goldeye being step 2.
VMs/Instances are marked as "experimental" because we understood software upgrades would not be smooth for VMs. However, we would like to understand where the complications are and then improve the process and documentation.
Migration from Eel on two boxes (just personal/homelab, nothing fancy) went pretty well overall. I use TrueNAS "custom apps" to bootstrap Komodo, which manages the rest of my Docker containers itself, and everything came back up perfectly after upgrading.
Additionally, the Docker setup now has IPv6 support enabled, which is awesome. I had to recreate existing Docker networks (and down/up existing Compose stacks) to get them to pick up IPv6 subnets, but that's standard Docker behavior, and I'm happy TrueNAS isn't trying to do anything fancy on top of it.
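For anyone doing the same, here's a minimal sketch of recreating a user-defined Docker network with IPv6 enabled; the network name and ULA subnet are placeholders, not anything TrueNAS creates for you:
# Stop whatever is attached to the old IPv4-only network first (e.g. docker compose down)
docker network rm my_net
docker network create --ipv6 --subnet fd00:abcd:1234::/64 my_net
# Bring the stack back up; containers joined to my_net now get IPv6 addresses too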
VMs were a little bit more painful. I don't have complex virtualization needs, so even the basic experimental Incus integration is good enough for me. I followed the instructions to migrate over existing zvols. Worked out okay, but I hit some gotchas (which maybe could just involve better documentation):
- (Documentation) Not sure why, but my Debian VM initially failed to boot. I had to manually boot Debian (FS0:\EFI\debian\grubx64.efi in the UEFI shell) and then reinstall the bootloader (sudo grub-install && sudo update-grub). After that, it booted up just fine.
- (Documentation) The new virtual network interface had a different device path in the guest, so I had to update /etc/network/interfaces via console over VNC to restore network access; see the sketch after this list. (For future folks upgrading, installing incus-agent before moving to Fangtooth might make sense, to enable the TrueNAS web shell to work. But I didn't know about that, so I had to make do with VNC.)
- (Documentation) The documentation on a custom bridge network vs. Incus's own bridge network is kind of confusing. I already had a bridge network (br0) created in TrueNAS that I used on Eel, but I noticed Incus still made its own bridge (incusbr0) anyway. AFAICT, that one seems to NAT the guests behind a new private IP range, which isn't what I wanted, so I kept my own br0 network… but I had to discover this experimentally.
- (Bug or feature request, depends on perspective) The UX around setting up Incus networking in the "Instances" screens is… also really confusing, if I'm being honest. The menus are not intuitive, the device names shown (e.g., "eth0 (BRIDGED)") don't correspond to either the host or the guest, and it's not clear how the per-instance settings relate to the global ones. Please just show the actual host device names (e.g., br0, incusbr0, etc.) explicitly in the UI rather than making folks guess at them.
- (Feature request) There doesn't seem to be any way to set the MAC address of a container's network adapter, nor even to see it. For me, that was a mild annoyance (having to update DHCP reservations), but I suspect not being able to change the MAC will be a blocker for others.
- (Documentation) Small thing, but it'd be nice to provide some guidance on disk type selection. I went with Virtio-SCSI since that's what I used before in KVM, but since my Instances pool is backed by NVMe drives, would it be better to expose the zvols to guests as NVMe? No idea!
- (Investigation) I need to test this more thoroughly, but I think passing through "discard" from the guest to the zvol might actually work now. At least fstrim -av in the guest seems to do something, whereas it didn't in Eel. I'm really hopeful this means that periodically running fstrim (in the guest) and zpool trim (on the host) will ensure files deleted inside the VM eventually get TRIMed on the physical drives; a sketch of that workflow follows this list.
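A rough sketch of the interface fix mentioned above; the device names ens3 and enp5s0 are hypothetical, so substitute whatever ip link actually shows in your guest:
# Inside the guest, via the VNC console
ip link                                                  # find the new interface name
sudo sed -i 's/ens3/enp5s0/g' /etc/network/interfaces    # swap the old device name for the new one
sudo ifup enp5s0                                         # bring the renamed interface up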
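And the trim workflow I'm hoping holds up; tank is a placeholder pool name:
# Inside the guest: discard unused blocks on all mounted filesystems
sudo fstrim -av
# On the TrueNAS host: push the freed space down to the physical drives
zpool trim tank
zpool status -t tank   # shows per-vdev trim progress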
Once I worked through all that, though, IPv4 and IPv6 inside the guest seem to work fine. TrueNAS, the VM, and Docker containers can all talk to one another without issue.
P.S. Thanks so much for swapping out SPICE for VNC. The browser-based SPICE client was super frustrating, and now I can just use any one of many desktop VNC clients that work way better.
Unfortunately, I rolled back to 24.10.2.1 because I couldn't easily migrate my existing virtual systems into the new subsystem. It insists on having an image to boot from, and I was thinking I could also specify this as my root filesystem, but it doesn't like this. And then it tries to boot from the empty volume. I'm sure I could force it to work, but it still feels a little too kludgy at the moment.
These VMs weren't critical; they're just secondary Kubernetes nodes for some hardware redundancy. I was thinking it would be nice to have them off the main Proxmox host so something stays up during maintenance. It's just a home lab, so I guess it isn't that important.
After reactivating the 24.10.2.1 environment, I was able to zfs rename the zvols to their original location and reboot the VMs. I'll see about migrating them over to Proxmox at some point and then re-attempt the upgrade.
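For reference, the rename back is just the usual zfs rename with the VM shut down; both dataset paths here are made up, so adjust them to wherever the migration actually moved your zvols:
# Run from a root shell on the host while the VM is stopped
zfs rename tank/ix-virt/k8s-node1 tank/vms/k8s-node1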
One question: direct-io is not enabled even though I set direct=always using the CLI. I have done several experiments, and no direct traffic was observed.
Is this a bug or intended? Should I file an issue on Jira for this?
If this is intended, can I work around it somehow?
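For context, setting and checking the property looks roughly like this; the zvol path is just a placeholder:
zfs set direct=always tank/instances/myvm   # request O_DIRECT for I/O on this zvol
zfs get direct tank/instances/myvm          # reports "always", yet no direct traffic shows up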
I know I can set the web UI to bind only to the external IP, but I want to have the web UI available on localhost for a local, non-internet-facing reverse proxy.
Yeah, I'm not seeing that. I also don't have apps enabled and am using Incus aka Instances. Those are all your Docker networks.
netstat -lnpt | grep -E '80|443'
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3592/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3592/nginx: master
tcp6 0 0 :::443 :::* LISTEN 3592/nginx: master
tcp6 0 0 :::80 :::* LISTEN 3592/nginx: master
You can go into System > General, then adjust the Web Interface settings to only listen on specific interfaces.
My bet is this is a fail-safe setting out of the box until you get your TNCE box set up and configured; then you would scope it down to the management interface you want to use for the Web UI.
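If you prefer the CLI, I believe the same thing can be set through the middleware; the address here is a placeholder and I'm going from memory on the field name:
midclt call system.general.update '{"ui_address": ["192.168.1.50"]}'
# then restart the web UI service when prompted so nginx rebinds to the new address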
It would have been nice if you could add manual entries to that Web Interface field instead of choosing between broadcasting to everything or just the host's IP.
Nevertheless, there should be at least some warning; it's easy to overlook this security issue when setting up a public-facing reverse proxy.
Thanks for the very constructive review… much appreciated.
To action them, we need to distinguish between:
- Bugfixes - Report a bug
- Improvements/features - Feature Requests
- Documentation issues - documentation feedback
Could you suggest where each issue belongs… perhaps just add some text to the original post.
Many thanks
DirectIO was disabled for Fangtooth because it conflicted with Nvidia drivers. It is planned to be fixed in Goldeye.
This is a complex topic. I'd suggest you start a thread in General and we can start a discussion on best methods there. Please add a link to that thread in your post.
At least scope it to system interfaces and ignore Docker interfaces by default.
Wouldn't not listening on Docker interfaces prevent reverse proxies running in Docker from proxying TrueNAS? (That's how I get HTTPS for my TrueNAS servers, since I've got the reverse proxies running anyway, and it's easier to have all my cert acquisition in one place.)
I'm referencing the Web Interface only listening on system interfaces, not Docker, Incus, etc.
Yes, sorry. What I'm saying is that if the TrueNAS Web UI didn't listen on Docker interfaces, then wouldn't that cause issues for Docker reverse proxies that need to reach the web interface?
For example, I run a Caddy reverse proxy on ports 80 and 443. I moved the TrueNAS Web UI "out of the way" to ports 8080 and 8443, and I configured Caddy to proxy my TrueNAS subdomain to host.docker.internal:8080. But, unless I'm just misunderstanding you, that wouldn't work with the setup you're proposing, since TrueNAS would no longer be listening on the relevant Docker interface.
Sure! I was hoping it came across as more constructive than complaining; glad it did.
I edited my original post to try and categorize things as you suggested. (I don't think anything I found is actually a bug, other than maybe the UI for editing instance NICs being hard to understand. If others are confused too, maybe that reaches the "bug" threshold.)
I think if you set up a bridge interface, you shouldn't have issues.
This problem still exists in 25.04, and the hard disk temperature indicator is not displayed on all pages.