TrueNAS SCALE 24.04.0 Now Released!

I was using 24.04-RC.1, everything was normal.
I updated to 24.04.0 and I’m having a problem with one of my pools: as soon as I restore a backup of my settings, or even if I install clean and just try to import the pool, the error is the same.
The worst thing is that I can’t go back to RC.1, because the kernel panic happens there too.

my config:
Processor: i7-4790k
motherboard: Asus Z97-k R2.0
RAM: 32GiB HyperX
Disks - pools:
1 ssd - WD_Green_M.2_2280_240GB => boot-pool
4x1TiB - WD10EXEX => pool-HD - RAIDZ1
2x500GiB - WDC WDS500G1B0C-00S6U0 => pool-ssd - Stripe
1x4TiB - ST4000VX007-2DT166 => pool-trash

The problem is specifically with the SSD pool; the other two work normally, as long as I don’t import the SSD pool.

The kernel panic happens when trying to import the pool.

Did I lose everything, or is there a way to at least try to recover the data somehow, using command lines?

Since you are using desktop hardware with likely non-ECC RAM, please run a good, long memory test. While space map corruption like this may indicate a bug in the code (in this case it may be related to the ZFS device removal feature, which is not so widely used), it is quite often caused by memory corruption. As for data recovery, you should be able to import the pool read-only from the command line, since read-only imports do not even read the space maps, among many other things.
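A minimal sketch of such a read-only import (pool name taken from your post; the altroot under /mnt is just an example):

# Import read-only so ZFS never touches the damaged space maps
# (add -f if ZFS complains the pool was in use by another system)
zpool import -o readonly=on -R /mnt pool-ssd

# If it mounts, list the datasets and copy the data off before trying anything else
zfs list -r pool-ssd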

1 Like

The changelog mentions that “ZFS ARC memory allocations are updated and behave identically to TrueNAS CORE.” What has changed, and what was the default before? Was this a bug?

Previously, SCALE limited the ARC to 50% of RAM, whereas CORE allowed the ARC to use most of the free RAM.

SCALE now behaves like CORE with respect to ARC sizing.
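If you want to verify what the ARC is doing on your own box, a quick check (paths assume OpenZFS on Linux, as shipped in SCALE):

# Current ARC size and target maximum, in bytes
awk '$1 == "size" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats

# 0 here means the module is using its built-in default rather than an explicit cap
cat /sys/module/zfs/parameters/zfs_arc_max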

2 Likes

Uneventful direct upgrade from 24.04-RC1 to Release.

I believe this is not an officially supported upgrade path, but if it works it saves a lot of time and bother.

Seems to have worked fine for me.

1 Like

So far the least painful update I’ve ever had. Nothing is obviously broken after ~3 hours of validations. 9/10 (because you can never be too sure)

EDIT: Lol - you see? This is why there is never a 10/10 score.

3 Likes

For everyone’s information, there have been over 5,000 updates in the first two days.

If you have had an issue, start a new thread for it and add a link to that thread here. We’ll be triaging and looking for common issues.

1 Like

Upgraded from the latest Cobia; on rebooting, my apps service didn’t come back online. But another reboot fixed it, so all good.

Updated from RC1 to this. Didn’t notice any specific issues. Maybe fewer alerts?

I only get one alert now, which is telling me not to use USB for the boot drive :smiling_face_with_tear:

After more than a decade you guys still have not figured out that you need to use GPT labels to create the boot-pool. NEVER use partition names; this is bad practice.
Use GPT, GEOM, or UUID labels to create your pools. At the same time, I notice that when you select several disks at installation, you can NOT select the type of ZFS structure you want. Some people want striped mirrors (~RAID 10 in ZFS); others want RAIDZ1. Right now the installer makes a mirror of all the disks you select. NOT DONE.

It has to be:

blkid
lsblk --output NAME,FSTYPE,LABEL,UUID,MODE
zpool add boot-pool mirror /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

If you want to suggest improvements, we insist that you do so professionally and politely.

Only mirrors are supported because there’s no value to RAIDZ, never mind DRAID, for a boot pool. Don’t expect this to change.

5 Likes

Obviously it is done. But why do you care? Seriously, why would anyone want the boot pool in striped mirrors or RAIDZn?

No, you can’t. You’ve never been able to. And in the ten years or so since 9.3 introduced the ZFS boot pool, I can’t recall anyone else complaining about this.

5 Likes

And, if it’s done right, it mirrors the boot stuff and mirrors the swap too.

It is a fair criticism, however, that TrueNAS CORE and SCALE still use kernel identifiers (sda2, sdb2, sdc2) for the boot-pool.

Not sure why they haven’t adopted GPTID or PARTUUID for the boot-pool after all these years.
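For anyone curious what their own boot-pool is built on, a quick check from a SCALE shell (output will differ per system):

# Show the full device paths backing the boot-pool vdevs
zpool status -P boot-pool

# Map kernel names like sda2 back to their stable by-partuuid aliases
ls -l /dev/disk/by-partuuid/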

Don’t know about the details these days, but a few years ago the boot process involved more or less blindly running zpool import freenas-boot and hoping for the best. See for instance that time I ended up with a FreeNAS 9.10 (FreeBSD 10) kernel running the userland from FreeNAS 9.3.

1 Like

It is hard to stay polite when, after more than 10 years, developers still don’t know that you need to use GPT/GEOM labels and/or UUIDs to set up a zpool: when a disk fails, the system can rearrange the disk names, and you end up with massive problems, data corruption, and even total loss of the pool.

So every ZFS pool needs to be created properly, and one of the best ways is with the UUID, and to adapt the installation script accordingly.
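For illustration, a minimal sketch of creating a pool against stable identifiers instead of sdX names (the pool name and the by-id paths below are placeholders, not real devices):

# Mirror built on persistent by-id paths, immune to sdX reordering
zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-EXAMPLE_SERIAL_1 \
  /dev/disk/by-id/ata-EXAMPLE_SERIAL_2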

On Linux, this command returns the UUID for sda1:

lsblk -n /dev/sda1 -o UUID

We do expect this to change, because it is bad practice to use the partition names assigned by Linux, such as sda1, etc., to create pools; avoiding them means you don’t run into problems when there is a failure.
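And for a pool that was already created on sdX names, the usual fix is simply to re-import it by stable IDs (a sketch; the pool name is an example):

# Export, then re-import while scanning only the stable by-id paths
zpool export tank
zpool import -d /dev/disk/by-id tank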

Besides, there is no excuse here: XigmaNAS (FreeBSD) and even Proxmox also use labels, for the very reason I explained. The latter uses GPT labels.

This is for FreeBSD, but the same rules apply to Debian Linux.
ref:

The question of why people would run things on ZFS RAID-Z1 or a striped mirror (~RAID 10) in ZFS is a matter of taste, and extra security in some cases, but I do agree that in most cases a two- or three-device mirror is fine.

I will have the team take a look at this. But I suspect it’s what Dan mentioned: if it ain’t broke, don’t fix it. We’ve had pretty much zero reason to change this so far in terms of real issues being caused by it. However, we can at least review and see if it’s time to modernize that soon. The installer is getting a lot of love in 24.10 anyway, so the timing is good.

3 Likes

Are you referring to the device names being used to construct the boot-pool, or to the feature of allowing more devices beyond the two-drive mirror vdev for the boot-pool?

I just upgraded last night and everything went smoothly. My only issue at the moment is that each page in the web UI takes an absurdly long time to load. The dashboard takes >30 seconds, and the same goes for datasets, storage, apps, etc.

Device names for the boot-pool is what I was referring to. But the request for anything beyond a mirror on boot is pretty surprising; all these years and I’ve not seen any real demand for that.

1 Like