Older jails run perfectly well on a newer kernel. Again, the breakage is not caused by FreeBSD. If you don’t keep your kernel on a supported release, that’s a concern IMHO.
I guess I don’t understand clearly the lines between FreeBSD and TrueNAS.
And I really tried to keep my kernel updated this year, but updating was not possible until 3 weeks ago, so I literally haven’t had time to get to 13.3 yet (yes, I was on holidays and then school restarted). It’s a very short timeframe. So, I failed to upgrade my kernel. I’ll do that ASAP, but only after a backup.
iX uses FreeBSD to build an appliance named TrueNAS CORE. You cannot do any FreeBSD updates yourself; all updates must come from iX. If they simply don’t keep pace with FreeBSD’s supported versions, then one day you will lose the ability to install packages in your jails.
If you update the jails to a supported FreeBSD version on the old EOL kernel, sometimes you are lucky and they continue to run; sometimes they don’t.
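For anyone wanting to try that, this is roughly what it looks like with iocage on CORE. A sketch, assuming a jail named myjail (the name is a placeholder), and with no guarantee the result actually runs on the EOL kernel:

```sh
# Check which userland the jail currently runs:
iocage exec myjail freebsd-version -u

# Move the jail’s userland to a supported release:
iocage upgrade -r 13.3-RELEASE myjail

# Reinstall all packages so they match the new userland:
iocage exec myjail pkg upgrade -f
```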
That’s it in a nutshell.
Why they take one year after EOL to update from 13.1 to 13.3 - and skip 13.2 altogether - I don’t know and honestly don’t understand.
I update >1000 customer appliances running our “proServer” product every single month. We follow FreeBSD 13.3 (and will follow 13.4, 13.5, …) and the quarterly ports branch, build our images and roll out.
We have a staged “patch day”: test/staging systems every second Tuesday, production systems every fourth Tuesday. So customers have enough lead time to notice any possible regressions with their software.
As far as I know, the dude who lost his pool was running nightlies. Maybe it’s not the same thread.
That being said, you should totally have a backup if your pool is that critical.
Generally, however, the process is pretty painless, jails excluded.
The thread I’m talking about is: Pool Unavailable after upgrade from CORE 13 to SCALE 24
It seems like they were using just a normal TrueNAS 13 and upgraded to 24.
I do have backups; my concern is that once the main pools fail, I would be terrified to connect the backups to any system for fear of losing them.
I guess I could clone the backup disk to the spare and try to load that in TrueNAS SCALE, if everything really goes that wrong.
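If it helps with the fear: ZFS can import a pool read-only, so nothing on the backup disk can be modified while you check it. A sketch (the pool name is a placeholder):

```sh
# See which pools are visible on attached disks, without importing:
zpool import

# Import the backup pool read-only so its data cannot be altered:
zpool import -o readonly=on mybackup

# Inspect it, then detach cleanly before unplugging:
zpool status mybackup
zpool export mybackup
```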
Why would you risk losing them?
I did not previously see the thread you linked, but it seems pretty evident it’s user error; also, RAIDZ1.
Switching to SCALE is safe if done properly, but we are digressing.
I am in a somewhat similar situation. I am considering migrating to 24.10 and moving my jails to Docker. I just don’t know if I should wait until 25.04 instead and consider LXC. Thoughts?
While I use my system for storage, I also have a handful of jails that are important to me (Unifi controller, Plex, etc.)
I am reluctant to move to 13.3 and will probably wait until next spring to figure out if I want to move to 13.3 or the current release of SCALE.
Wait and see, there is no rush. CORE is not going to disappear for a long while.
There is no rational answer. Once you lose a hard drive, your heart starts pounding, even if you have a backup of it.
In the end, I just care about my data.
And good point about RAIDZ1, I’m very happy that all my pools are in the most stupid setup: 1 HDD + 1 mirror.
A SCALE sidegrade should be on the bottom of your priority list.
I disagree, I explained my reasoning for my pool structure today in another thread: What's the safest way to transition from CORE to SCALE without losing data stored on pools? - #5 by Fire-Dragon-DoL
Also, my pool is borderline read-only; I don’t really care about write speed. I care the most about simplicity to manage and simplicity to back up, with “low stress” being my top priority.
I was well aware that it would cost me 2x (literally, for every HDD I have to buy another one), but I’d rather have 1 HDD + 1 mirror + 1 backup (yes, essentially I spend 3x per HDD) than lose the data. Some data is also backed up in the cloud.
My CORE instance has broken jails right now, so going to 13.3 should address that.
However, what happens when FreeBSD 13.3 also reaches end of life? My jails might all break, and that would be a permanent “downgrade” in what TrueNAS does for me. That’s why SCALE is on my radar.
I’m also way more experienced with Debian than FreeBSD.
Honestly, the fact that the guy with migration problems had RAIDZ1 makes me even more adamant about my choice of going with plain mirroring.
Totally sounds like you striped a single drive VDEV with a mirror VDEV, which is bad.
If you meant a hot spare + a mirror VDEV, that’s another story.
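To make the distinction concrete, here is what the two layouts look like at creation time; a sketch with placeholder FreeBSD device names:

```sh
# Bad: a single-drive vdev striped with a mirror vdev. If da0 dies,
# the whole pool dies, because every data vdev must stay intact.
# zpool even makes you force it with -f because of the mismatched
# redundancy levels.
zpool create -f tank da0 mirror da1 da2

# Another story: a two-way mirror plus a hot spare. da0 sits idle
# until a mirror member fails.
zpool create tank mirror da1 da2 spare da0
```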
I think I’m just explaining myself poorly.
I have one pool (MARS) with a single vdev called mirror-0, which has 2 disks (one the mirror of the other):
```
MARS                                            2.72T  1.96T  781G  -  -  35%  71%    1.00x  ONLINE  /mnt
  mirror-0                                      2.72T  1.96T  781G  -  -  35%  71.9%  -      ONLINE
    gptid/3beaf6ce-3289-11e9-b136-002590ec5b60  2.73T  -      -     -  -  -   -      -      ONLINE
    gptid/5817bff3-7a68-11ed-898d-002590ec5b60  2.73T  -      -     -  -  -   -      -      ONLINE
```
Then I have BMARS, which is a pool made of a single disk; I just run a replication task from MARS to BMARS, then export BMARS and unplug the disk.
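For reference, a TrueNAS replication task is essentially ZFS send/receive under the hood. A hedged sketch of a first full pass (pool names from the post, snapshot name made up; later runs would use an incremental send with -i):

```sh
# Snapshot everything in MARS recursively:
zfs snapshot -r MARS@backup-1

# Send the whole pool, including child datasets, into BMARS:
zfs send -R MARS@backup-1 | zfs recv -F BMARS

# Detach the backup pool cleanly before unplugging the disk:
zpool export BMARS
```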
I also have a spare disk around (but not a hot spare, just a disk that’s unused) that I keep for emergencies.
The other pools, VENUS and JUPITER, have the exact same configuration.
Upgrading to 13.3 buys you enough time to explore the consequences of a sidegrade to SCALE. I’d wait for Electric Eel before making the jump. There are big changes in the “app” system from Dragonfish to EE, and apps/jails seem to be your main concern. So I recommend letting the dust settle first.
That’s exactly my plan as long as 13.3 fixes my issues with jails.
I’m extremely familiar with Docker and docker compose, since I use them extensively at work, which would greatly simplify running my applications.
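That experience transfers pretty directly; a jail like the Plex one mentioned above becomes something along these lines under Docker. A sketch assuming the linuxserver.io image and made-up dataset paths:

```sh
docker run -d \
  --name plex \
  --network host \
  -e PUID=1000 -e PGID=1000 \
  -v /mnt/MARS/apps/plex:/config \
  -v /mnt/MARS/media:/media:ro \
  lscr.io/linuxserver/plex
```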
The upgrade to 13.3 was smooth, though I did have to perform a manual update.
I will check whether this solves the jail problem.
Yeah, it’s just a tad different from a single drive striped with a mirror VDEV.
The manual update is by design (iX doesn’t want enterprise customers to update yet).
The upgrade went smoothly; the Ansible script I wrote was almost perfect (I forgot to auto-set cpu=all), so I was able to recreate the affected jail from scratch and get it working immediately.
My best guess as to what happened is that I upgraded one of the packages without upgrading the jail version, which caused an incompatibility and broke the jail.
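For anyone hitting the same failure mode, a quick way to spot the mismatch before things break (the jail name is a placeholder):

```sh
# The jail’s userland version...
iocage exec myjail freebsd-version -u

# ...versus the ABI that pkg is installing packages for:
iocage exec myjail pkg config abi
```

If the two disagree on the major version, upgrade the jail first and the packages second.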
I see, you thought I had a single zpool with 2 vdevs, each vdev a mirror of the other, so one disk dying would kill the vdev, but I don’t think it would kill the pool.
Out of curiosity, why would that setup be bad?
Is it because you’re “locked” into that setup, while within a single vdev you can add or remove as many mirrors as you want, even after the fact?
You got that the wrong way round. Redundancy or the lack thereof is configured at the VDEV level.
All (data) VDEVs of a pool are always striped.
Each VDEV can be a single drive, an N-way mirror, or a RAIDZn.
So you can have two mirrored disks in one VDEV, but you cannot mirror two VDEVs.
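In zpool terms (device names are placeholders):

```sh
# Two mirrored disks in one vdev; this is where the redundancy lives:
zpool create tank mirror da0 da1

# Grow it into a 3-way mirror later, or shrink it back:
zpool attach tank da0 da2
zpool detach tank da2

# Adding another vdev always stripes it with the first. There is
# no syntax to mirror one vdev against another vdev:
zpool add tank mirror da3 da4
```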
Thank you.
Unfortunately, I only study this stuff when I have to set it up; then I don’t touch it for years (6!) and it just works.
I still like my choice due to how simple it is (every disk has a mirror). RAIDZ is worth considering, but I’m not going to buy more disks; I have plenty of storage right now.