TrueNAS SCALE 24.04.0 Now Released!

Uneventful direct upgrade from 24.04-RC1 to Release.

I don't believe this is an officially supported upgrade path, but if it works it saves a lot of time and bother.

Seems to have worked fine for me.

1 Like

So far the least painful update I've ever had. Nothing is obviously broken after ~3 hours of validation. 9/10 (because you can never be too sure)

3 Likes

For everyone's information, there have been over 5,000 updates in the first 2 days.

If you have had an issue, start a new thread for it and add a link to that thread here. We'll be triaging and looking for common issues.

1 Like

Upgraded from the latest Cobia. On rebooting, my apps service didn't come back online, but another reboot fixed it, so all good.

Updated from RC1 to this. Didn't notice any specific issues. Maybe fewer alerts?

I only get one alert now, which demands that I not use USB for the boot drive :smiling_face_with_tear:

After more than a decade, you guys still have not figured out that you need to use GPT labels to create your boot-pool. NEVER use partition names; this is bad practice.
Use GPT, GEOM, or UUID labels to create your pools. At the same time, I notice that when you select several disks at installation, you can NOT select the ZFS layout you want. Some people want striped mirrors (~RAID 10 in ZFS); others want RAIDZ1. Right now the installation makes a mirror of all the disks you select. NOT DONE.
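For reference, a striped mirror (the ~RAID 10 layout mentioned above) is expressed in ZFS by listing several mirror vdevs in one command; a minimal sketch, with placeholder pool name and PARTUUID paths:

# Create a pool of two mirrored pairs (striped mirrors, ~RAID 10);
# "tank" and the by-partuuid paths below are placeholders
zpool create tank \
  mirror /dev/disk/by-partuuid/AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA /dev/disk/by-partuuid/BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB \
  mirror /dev/disk/by-partuuid/CCCCCCCC-CCCC-CCCC-CCCC-CCCCCCCCCCCC /dev/disk/by-partuuid/DDDDDDDD-DDDD-DDDD-DDDD-DDDDDDDDDDDD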

It has to be:

# List block devices with their filesystem and partition identifiers
blkid
lsblk --output NAME,FSTYPE,LABEL,UUID,MODE
# Add a mirrored vdev to the boot pool, referencing the members by stable PARTUUID paths
zpool add boot-pool mirror /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
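Note that zpool add stripes an additional mirror vdev alongside the existing one. If the goal is instead to turn an existing single-device boot pool into a two-way mirror, the attach form applies; a minimal sketch, again with placeholder PARTUUIDs:

# Attach a second device to the existing boot device, creating a mirror
zpool attach boot-pool /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /dev/disk/by-partuuid/YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY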

If you want to suggest improvements, we insist that you do so professionally and politely.

Only mirrors are supported because there's no value to RAIDZ, never mind DRAID, for a boot pool. Don't expect this to change.

5 Likes

Obviously it is done. But why do you care? Seriously, why would anyone want the boot pool in striped mirrors or RAIDZn?

No, you can't. You've never been able to. And in the ten years or so since 9.3 introduced the ZFS boot pool, I can't recall anyone else complaining about this.

5 Likes

And, if it's done right, it mirrors the boot stuff and mirrors the swap too.

It is a fair criticism, however, that TrueNAS Core and SCALE still use the kernel identifiers (sda2, sdb2, sdc2) for the boot-pool.

Not sure why they haven't adopted the GPTID or PARTUUID for the boot-pool after all these years.

Don't know about the details these days, but a few years ago the boot process involved more or less blindly running zpool import freenas-boot and hoping for the best. See for instance that time I ended up with a FreeNAS 9.10 (FreeBSD 10) kernel running the userland from FreeNAS 9.3.

1 Like

It is hard to stay polite if, after more than 10 years, developers do not know that you need to use GPT/GEOM labels and/or UUIDs to set up a zpool. When a disk fails, the system can rearrange the disks, and you end up with massive problems, data corruption, and even total loss of the pool.

So every ZFS pool needs to be created properly, and one of the best ways is with the UUID; the installation script should be adapted accordingly.

This command returns the filesystem UUID for sda1 on Linux:

# Print only the filesystem UUID of the partition
lsblk -n /dev/sda1 -o UUID
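For ZFS vdev paths under /dev/disk/by-partuuid, it is the GPT partition UUID rather than the filesystem UUID that matters; a minimal sketch, assuming /dev/sda1 exists:

# Print the partition UUID (PARTUUID), which backs /dev/disk/by-partuuid
lsblk -n /dev/sda1 -o PARTUUID
# Equivalent query via blkid
blkid -s PARTUUID -o value /dev/sda1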

We do expect this to change, because it is bad practice to use the partition names assigned by Linux, such as sda1, to create pools; stable identifiers avoid problems when there is a failure.

Besides, there are no excuses here: XigmaNAS (FreeBSD) and even Proxmox also use labels, for the reason I explained. The latter uses GPT labels.

This is for FreeBSD; however, the same rules apply to Debian Linux.
ref:

The question of why people would run things on ZFS RAID-Z1 or a striped mirror (~RAID 10) is a matter of taste, and of extra security in some cases, but I do agree that in most cases a two- or three-device mirror is fine.

I will have the team take a look at this. But I suspect it's what Dan mentioned: if it ain't broke, don't fix it. We've had pretty much zero reason to change this so far in terms of real issues being caused by it. However, we can at least review and see if it's time to modernize that soon. The installer is getting a lot of love in 24.10 anyway, so the timing is good.

3 Likes

Are you referring to the device names being used to construct the boot-pool? Or the feature of allowing more devices beyond the 2-drive mirror vdev for the boot-pool?

I just upgraded last night, and everything went smoothly. My only issue at the moment is that each page in the web UI takes an absurdly long time to load. The dashboard takes >30 seconds, and the same goes for datasets, storage, apps, etc.

Device names for the boot-pool is what I was referring to. But the request for anything beyond a mirror on boot is pretty surprising; all these years and I've not seen any real demand for that.

1 Like

That's the more important thing, which I think is long overdue for TrueNAS.

2 Likes

That is surprising; we could use a bug ticket or some further telemetry on that to even begin to guess at the cause. What's your setup like? This isn't a system with 1500+ apps and 30 replication jobs running all at once, is it? :slight_smile:

Perhaps you should try.

It's not that you don't have a point, but it also isn't like you've been complaining about this for the past ten years. Or even for the past one year. Or at all, really, until joining here and setting phasers to VAPORIZE. I'm pretty sure (and I've been following the forums pretty closely since well before the release of 9.3) that nobody has ever made an issue of this in the past.

So what? Do a clean install, upload a saved config file, and Robert is your father's brother. And that's probably a big part of why nobody's made an issue of this before: the impact of a failed boot pool is so minimal.

I haven't used XigmaNAS, but you're incorrect with respect to Proxmox:

root@pve1 ➜  ~ zpool status
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:04:20 with 0 errors on Sun Apr 14 00:28:21 2024
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda3    ONLINE       0     0     0
	    sdb3    ONLINE       0     0     0

errors: No known data errors

That's the pool created by the Proxmox installer, not one I've manually created or modified. Where I've had to replace disks, I've used /dev/disk/by-id/ identifiers, but the Proxmox installer builds the boot pool with sda/sdb/etc.
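A minimal sketch of that kind of replacement, with a hypothetical by-id name standing in for the real model/serial string:

# Replace a failed member, referencing the new disk by its stable by-id name
zpool replace rpool sda3 /dev/disk/by-id/ata-DISKMODEL_SERIAL-part3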

Edit: Well, the Proxmox installer built the boot pool with sda/sdb/whatever. By 8.2, it is using labels, though still not GPT labels:

So yes, valid point, but dial back the rhetoric a bit. If literally nobody has complained about this in ten years, it can't be that big of a deal.

6 Likes

It will break when disks go down or the controller misbehaves, because the device names do get rearranged. The current setup only works until a disk or controller has issues.
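If that ever happens, one recovery path is re-importing the pool by stable names; a minimal sketch, assuming it is run from a rescue/live environment where the pool is not in use:

# Scan a stable-name directory and re-import, so the pool records its
# members by PARTUUID instead of sdX kernel names
zpool import -d /dev/disk/by-partuuid boot-pool
# Verify the members now show by-partuuid paths
zpool status boot-pool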