Uneventful direct upgrade from 24.04-RC1 to Release.
This is not an officially supported upgrade path I believe, but if it worked it would save a lot of time and bother.
Seems to have worked fine for me.
So far the least painful update I've ever had. Nothing is obviously broken after ~3 hours of validations. 9/10 (because you can never be too sure)
EDIT: Lol - you see? This is why there is never a 10/10 score.
For everyone's information, there have been over 5,000 updates in the first 2 days.
If you have had an issue, report it here: start a new thread and add a link to that thread. We'll be triaging and looking for common issues.
Upgraded from the latest Cobia; on rebooting, my apps service didn't come back online. But another reboot fixed it, so all good.
Updated from RC1 to this. Didn't notice any specific issues. Maybe fewer alerts?
I only get one alert now, which is warning me not to use a USB drive as the boot drive.
After more than a decade you guys still have not figured out that you need to use GPT labels to create your boot-pool. NEVER use kernel partition names. This is bad practice.
Use GPT, GEOM or UUID labels to create your pools. At the same time I notice that when you select several disks at installation, you can NOT choose the ZFS layout you want. Some people want striped mirrors (~RAID 10 in ZFS), others want RAIDZ1. Right now the installer just makes a mirror of all the disks you select. NOT DONE.
It has to be:
blkid
lsblk --output NAME,FSTYPE,LABEL,UUID,MODE
zpool add boot-pool mirror /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
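To check which identifiers an existing pool is actually built on, something like this should do (a quick sketch; boot-pool is the default pool name a TrueNAS SCALE install creates):
zpool status -P boot-pool
ls -l /dev/disk/by-partuuid/
The -P flag makes zpool status print full vdev paths, so you can see at a glance whether the pool references plain /dev/sdX devices or /dev/disk/by-partuuid/ links, and the ls output maps each partition UUID back to its current kernel name.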
If you want to suggest improvements, we insist that you do so professionally and politely.
Only mirrors are supported because there's no value to RAIDZ, never mind DRAID, for a boot pool. Don't expect this to change.
Obviously it is done. But why do you care? Seriously, why would anyone want the boot pool in striped mirrors or RAIDZn?
No, you can't. You've never been able to. And in the ten years or so since 9.3 introduced the ZFS boot pool, I can't recall anyone else complaining about this.
And, if it's done right, it mirrors the boot stuff and mirrors the swap too.
It is a fair criticism, however, that TrueNAS Core and SCALE still use kernel identifiers (sda2, sdb2, sdc2) for the boot-pool.
Not sure why they haven't adopted the GPTID or PARTUUID for the boot-pool after all these years?
Don't know about the details these days, but a few years ago the boot process involved more or less blindly running zpool import freenas-boot and hoping for the best. See, for instance, that time I ended up with a FreeNAS 9.10 (FreeBSD 10) kernel running the userland from FreeNAS 9.3.
It is hard to stay polite if, after more than 10 years, developers don't know that you need to use GPT/GEOM labels and/or UUIDs to set up a zpool, because when a disk fails the system can rearrange the disks, and you end up with massive problems, data corruption and even a total loss of the pool.
So every ZFS pool needs to be created properly, and one of the best ways IS with the UUID; the installation script should be adapted accordingly.
This command returns the UUID for sda1 on Linux:
lsblk -n /dev/sda1 -o UUID
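Note that -o UUID returns the filesystem UUID; the /dev/disk/by-partuuid/ paths in the zpool command above are keyed on the partition UUID instead. As far as I know, either of these returns that value (sda1 is just the example partition from above):
lsblk -n /dev/sda1 -o PARTUUID
blkid -s PARTUUID -o value /dev/sda1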
We do expect this to change, because it is bad practice to use the partition names assigned by Linux (sda1, etc.) to create pools; you run into problems when there is a failure.
Besides, there is no excuse here: XigmaNAS (FreeBSD) and even Proxmox also use labels, for the reason I explain. The latter uses GPT labels.
This is for FreeBSD; however, the same rules apply to Debian Linux.
ref:
The question of why people would run things on ZFS RAID-Z1 or a striped mirror (~RAID 10) in ZFS is a matter of taste, and extra security in some cases, but I do agree that in most cases a double or triple-disk mirror is fine.
I will have the team take a look at this. But I suspect it's what Dan mentioned: if it ain't broke, don't fix it. We've had pretty much zero reason to change this so far in terms of real issues being caused by it. However, we can at least review and see if it's time to modernize that soon. The installer is getting a lot of love in 24.10 anyway, so the timing is good.
Are you referring to the device names being used to construct the boot-pool, or to the feature of allowing more devices beyond the 2-drive mirror vdev for the boot-pool?
I just upgraded last night and everything went smoothly. My only issue at the moment is that each page in the web UI takes an absurdly long time to load. The dashboard takes >30 seconds, and the same for datasets, storage, apps, etc.
Device names for boot-pool is what I was referring to. But the request for anything beyond a mirror on boot is pretty surprising; all these years and I've not seen any real demand around that.
Device names for boot-pool is what I was referring to.
That's the more important thing, and I think it's long overdue for TrueNAS.
I just upgraded last night and everything went smoothly. My only issue at the moment is that each page in the web UI takes an absurdly long time to load. The dashboard takes >30 seconds, and the same for datasets, storage, apps, etc.
That is surprising; it could use a bug ticket or some further telemetry to even begin to guess at what the cause could be. What's your setup like? This isn't a system with 1500+ apps and 30 replication jobs running all at once, is it?
It is hard to stay polite
Perhaps you should try.
It's not that you don't have a point, but it also isn't like you've been complaining about this for the past ten years. Or even for the past one year. Or at all, really, until joining here and setting phasers to VAPORIZE. I'm pretty sure (and I've been following the forums pretty closely since well before the release of 9.3) that nobody has ever made an issue of this in the past.
data corruption and even a total loss of the pool.
So what? Do a clean install, upload a saved config file, and Robert is your father's brother. And that's probably a big part of why nobody's made an issue of this before: because the impact of a failed boot pool is so minimal.
Besides, there is no excuse here: XigmaNAS (FreeBSD) and even Proxmox also use labels, for the reason I explain. The latter uses GPT labels.
I haven't used XigmaNAS, but you're incorrect with respect to Proxmox:
root@pve1:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:04:20 with 0 errors on Sun Apr 14 00:28:21 2024
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0

errors: No known data errors
That's the pool created by the Proxmox installer, not one I've manually created or modified. Where I've had to replace disks, I've used /dev/disk/by-id/ identifiers, but the Proxmox installer builds the boot pool with sda/sdb/etc.
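For illustration, such a replacement looks roughly like this (a sketch only; the by-id name is a made-up placeholder, take the real one from ls -l /dev/disk/by-id/):
zpool replace rpool sda3 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1234-part3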
Edit: Well, the Proxmox installer built the boot pool with sda/sdb/whatever. By 8.2, it is using labels, though still not GPT labels:
So yes, valid point, but dial back the rhetoric a bit. If literally nobody has complained about this in ten years, it can't be that big of a deal.
It breaks when disks go down or the controller misbehaves, because the device names do get rearranged. The current install script only works until a disk or controller has issues.
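For what it's worth, an existing pool can usually be re-pointed at stable identifiers without rebuilding it; a rough sketch, assuming the pool is named boot-pool and is not the pool the running system booted from (an active boot pool cannot be exported, so this would have to be done from a rescue environment):
zpool export boot-pool
zpool import -d /dev/disk/by-partuuid boot-pool
After the import, zpool status shows the vdevs under their /dev/disk/by-partuuid/ paths instead of the kernel names.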