Yeah, sorry about the title, just needed a heavy dose of sarcasm to cope with the situation I had put myself in.
Thanks for the reassurance that not all hope is lost. I did save the config beforehand, so maybe I’ll make an attempt at reverting; I’m just overwhelmingly busy trying to get projects wrapped up before going on vacation, so I most likely won’t have time to even start looking at it for a bit.
In the meantime, are you aware of how I might be able to pull the old qcow images off? I might just launch them on my PC to get things like the Jellyfin watch-status DBs backed up and then spin up new VMs.
I’d need a similar approach to access data within the VMs I attempted to convert to the new Incus system, though, because I foolishly “moved” them instead of “cloning” them. I attempted some ZFS dataset cloning and mounting last week but haven’t had a chance to revisit it; since they’re under Incus now, I’m not sure if options similar to what exists for qcow are available.
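When I do get time, this is roughly the sort of thing I was planning to try — every path, image and dataset name below is just a placeholder for whatever mine are actually called:

```sh
# On a Linux PC with qemu-utils: expose an old qcow2 as a block device and mount it
sudo modprobe nbd max_part=16
sudo qemu-nbd --connect=/dev/nbd0 /path/to/old-jellyfin-vm.qcow2
lsblk /dev/nbd0                        # find which partition holds the data
sudo mkdir -p /mnt/oldvm
sudo mount -o ro /dev/nbd0p2 /mnt/oldvm
# ...copy out the Jellyfin data dir (often /var/lib/jellyfin inside the VM)...
sudo umount /mnt/oldvm
sudo qemu-nbd --disconnect /dev/nbd0

# For a VM disk that now lives as a ZFS zvol (e.g. after the Incus move),
# clone a snapshot so the original stays untouched, then mount the clone read-only
sudo zfs snapshot pool/path/to/vm-disk@rescue
sudo zfs clone pool/path/to/vm-disk@rescue pool/rescue-clone
lsblk /dev/zvol/pool/rescue-clone      # partitions show up as -part1, -part2, ...
sudo mount -o ro /dev/zvol/pool/rescue-clone-part2 /mnt/rescue
```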
Hi @clean – I would try these in order:
1. Boot from the pre-upgrade boot environment and see if just doing that works. See: Boot Pool Management | TrueNAS Documentation Hub
2. Restore the pre-upgrade configuration backup in the pre-upgrade version. See: Managing the System Configuration | TrueNAS Documentation Hub
After either (1) or (2) your data should be intact and accessible. Upgrades themselves do not change your data (as long as you don’t run zpool upgrade until you are happy with the new version, since upgrading the pool will prevent downgrading to a previous version). If the pool is not mounted you can check under Storage and then Import Pool.
I would turn off, or extend the lifetime of, any automatic snapshots. You don’t want the snapshots disappearing before you have had a chance to check everything. After checking or rolling back each dataset to a point you are happy with, I would make a new snapshot called @checked so you know which datasets you have already checked; manually named snapshots aren’t pruned by the automatic tasks, so you don’t need to worry about the @checked snapshots expiring.
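From the shell that looks something like this (dataset and snapshot names are made up for illustration):

```sh
# See which snapshots exist for a dataset
zfs list -t snapshot -o name,creation pool/mydata

# Roll a dataset back to the last snapshot you trust
# (-r also destroys any snapshots newer than the target)
sudo zfs rollback -r pool/mydata@auto-2025-06-01_00-00

# Once you are happy with the dataset, mark it as checked
sudo zfs snapshot pool/mydata@checked
```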
For your VMs: I don’t use VMs anymore (I migrated everything to containers), but I saw @dan’s comment above, “this is about the Incus switch destroying your VMs”. I’m assuming Incus tried to do something smart with the VMs that ended up causing more problems. If the VMs have been damaged but you have snapshots of them from before the upgrade, then you should be able to roll back to the pre-upgrade snapshot and run them under the previous TrueNAS virtualisation. Maybe try with one VM first.
You said you use Jellyfin. Anything else? Is your media stored inside the Jellyfin VM or on a dataset?
Jellyfin is available as a container, and it runs well as a container from the TrueNAS Apps catalog, so this is a really good opportunity to move to containers if you can. If your media is stored on datasets, you can spin up a new Jellyfin and point it at your media library. If you mount the datasets in the same place they were mounted in your VMs, the container’s library paths will match the old VM’s library paths, and if you can get the database out of the VM and copy it into the container, the new container should hopefully look just like the old VM, with a lot less hassle going forward.
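Whether you use the catalog app or a compose file, the idea is the same. As a rough illustration in compose form (image tag and host paths are just assumptions — substitute wherever your media and config actually live), keeping the in-container library path identical to what the VM used means a copied database still points at the right files:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    volumes:
      - /mnt/pool/apps/jellyfin/config:/config   # drop the database copied out of the VM here
      - /mnt/pool/apps/jellyfin/cache:/cache
      - /mnt/pool/media:/media                   # same library path the old VM used
    ports:
      - "8096:8096"
```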
You can pass a GPU through to a container. I don’t do this myself but there are instructions here: Apps | TrueNAS Documentation Hub (search for GPU).
It is good practice to make your own datasets for containers rather than using the default ones that TrueNAS Apps offers to create. Create a dataset pool/apps with the Apps preset, and then make datasets underneath as required for each app, for example for Jellyfin:
pool/apps/jellyfin/cache
pool/apps/jellyfin/config
Make cache and config owned by the user/group that Jellyfin will run as (you can leave the default or create a separate user). Make sure these datasets are also covered by your snapshot policy.
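From the shell that would look roughly like this (pool name and UID/GID are placeholders — 568:568 is the TrueNAS default apps user, so substitute your own if you create a dedicated one):

```sh
sudo zfs create pool/apps
sudo zfs create pool/apps/jellyfin
sudo zfs create pool/apps/jellyfin/cache
sudo zfs create pool/apps/jellyfin/config

# Give ownership of the app data to the user/group Jellyfin will run as
sudo chown -R 568:568 /mnt/pool/apps/jellyfin/cache /mnt/pool/apps/jellyfin/config
```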
Ideally you will be able to move fully to containers, but if not, once you are back up and running on 24.10 this sounds like it will help you upgrade forward again once 25.04.2 is released:
Thank you so much for the detailed post. For some idiotic reason I completely blanked on being able to restore previous versions with boot pool management (I did this to go back to 24.04 when 24.10 wasn’t handling my GPU passthrough properly). I just booted back to my known good 24.04 setup and (almost) everything is back the way it should be! The one kick in the pants is that since I had moved the VM datasets for two of my VMs to the Incus system, they’re not going to load in the old environment, but they share a similar setup to my Jellyfin VM, so I can just copy the configs I need out of that and rebuild the others (Navidrome and Sabnzbd).
I would like to move to just running everything in containers, but I don’t think TrueNAS has the capability to support the networking config I want with these. Jellyfin and Navidrome are using Wireguard to connect to a VPS that’s running Nginx, so I can access those remotely without opening any ports to my home network. Sabnzbd is connected to a different VPN so it can fetch the data it needs without being correlated with anything else. If TrueNAS can handle traffic management that granular just through containerization, though, please let me know, because that would be great. Even if not, the boot pool suggestion was extremely helpful and at least gets me to a point where I can see a path forward without starting from scratch. I ignorantly did not have snapshots of the VMs to restore… maybe snapshots should have some type of default config to combat irresponsible users such as myself? lol
I am glad you are mostly back up and running. I hope you have snapshots enabled now! Maybe you could do a feature request post to (1) prompt the user to set up an automatic snapshot job when creating a zpool, and (2) alarm on datasets with no snapshot jobs unless they are marked with an “I want to lose my data” tag.
You can do what you need for Sabnzbd with custom containers on TrueNAS. In the Docker compose config you can set the network for one container to point to another container, e.g. one running the VPN — see the sketch below. There are lots of tutorials on setting up Docker to do what you want with Sabnzbd (deploy it yourself) and WG Easy (in the TrueNAS apps catalog, or deploy it yourself), and it is easy to deploy your own containers.
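The pattern looks roughly like this (image names and ports are placeholders); the key line is network_mode, which makes Sabnzbd share the VPN container’s network stack:

```yaml
services:
  vpn:
    image: your-wireguard-client-image   # placeholder for whichever VPN client container you pick
    cap_add:
      - NET_ADMIN
    ports:
      - "8080:8080"                      # Sabnzbd's UI has to be published on the VPN container

  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:latest
    network_mode: "service:vpn"          # all Sabnzbd traffic goes out via the vpn container
    depends_on:
      - vpn
```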
For the rest that you just want to be remotely accessible, the TrueNAS apps catalog has Jellyfin and Navidrome (or you can deploy them yourself), and WG Easy for Wireguard; many people here use Tailscale instead, which is Wireguard with some magic. I access the Plex, Jellyfin and Immich TrueNAS apps and a custom Kerberos container, along with the TrueNAS admin GUI/ssh and SMB and NFS shares, all remotely with no port forwarding at all (even for the VPN). A persistent NAT mapping on the Tailscale Wireguard port for “any remote host” will help if you don’t want to port forward Tailscale.
Jellyfin, Navidrome and Tailscale (or, I imagine, WG Easy if you want to keep your current networking, though I haven’t tried it myself) are very easy to get going from the catalog. A web search or your favourite AI will give you the Docker compose for the rest. You can deploy two of the same TrueNAS app (e.g. WG Easy) if you need to, and you can deploy pretty much any valid Docker config with Docker compose.
Happy to try to help if you need but I think you will find it easier than you expect. Docker support in TrueNAS is game-changing.
Nice! I’ve done (limited) work with Docker for things in the past, so I don’t mind going deeper into it, but I had no idea I could point one Docker container at another with the goal of using that container’s network configuration. I’ll look into this more, since it sounds like I could get away with my same setup but fully containerized instead.
I’ll search on my own, but do you know off the top of your head if HW accel works on Intel Arc cards (Alchemist, not Battlemage) for the Jellyfin container? I believe it has support in the kernel that TrueNAS 25.04 is running, but there were issues with that in previous versions, which was another reason I went with the VM route previously. I think that would be my only blocker at this point.
I would guess it is possible, on the basis that I can see a lot of blogs and GitHub repos for AI on Intel Arc in containers.
Just correcting something from earlier: you will want to look at Gluetun to handle the networking for the Sabnzbd container. There are plenty of how-tos.
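Something along these lines (provider, key and paths are placeholders — check the Gluetun wiki for your VPN provider’s exact variables), reusing the same network_mode pattern as above:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun:latest
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad            # placeholder provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=your_key_here     # placeholder
      - WIREGUARD_ADDRESSES=10.64.0.2/32        # placeholder
    ports:
      - "8080:8080"                             # Sabnzbd UI, published via gluetun

  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:latest
    network_mode: "service:gluetun"             # all Sabnzbd traffic routed through the VPN
    volumes:
      - /mnt/pool/apps/sabnzbd/config:/config   # placeholder dataset path
    depends_on:
      - gluetun
```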