TrueNAS SCALE 24.04.0 Now Released!

Upgrade all went smoothly, although I’m having the silly network stats for apps like some others have reported, and also the distracting constant refreshing of the panels in the apps screen.

Please can we select the columns to display for apps and also choose how often to have the panels refresh?

Nice to have the HDD temps showing up again though, and the network bridge / K8s race condition fixed.

Upgraded without issues…

1 Like

I see an issue now, which I may also have had before the last update.
Does anyone see the same, and what can be done?
NTP Issue

Can you please open a new topic in TrueNAS General? Thank you.

yes okay…sorry

I was using 24.04-RC.1, everything was normal.
I updated to 24.04.0 and I’m having a problem with one of my pools. Whether I restore a backup of the settings or do a clean install and just try to import the pool, the error is the same.
The worst part is that I can’t go back to RC.1, because the kernel panic happens there too.

my config:
Processor: i7-4790k
motherboard: Asus Z97-k R2.0
RAM: 32GiB HyperX
Disks - pools:
1 ssd - WD_Green_M.2_2280_240GB => boot-pool
4x1TiB - WD10EXEX => pool-HD - RAIDZ1
2x500GiB - WDC WDS500G1B0C-00S6U0 => pool-ssd - Stripe
1x4TiB - ST4000VX007-2DT166 => pool-trash

The problem is precisely in the SSD pool, the other 2 work normally, as long as I don’t import the SSD pool.

The kernel panic happens when trying to import the pool.

Did I lose everything, or is there a way to at least try to recover the data somehow, using command lines?

Since you are using desktop hardware with likely non-ECC RAM, please run a good, long memory test. While spacemap corruption like this may indicate a bug in the code (in this case it may be related to the device-removal feature of ZFS, which is not widely used), quite often it is caused by memory corruption. As for data recovery, you should be able to import the pool read-only from the command line, since read-only imports do not even read space maps, among many other things.
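A read-only import along those lines might look like this (a sketch for a live system; the pool name `pool-ssd` is taken from the earlier post, and the altroot and destination paths are assumptions):

```shell
# Import the damaged pool read-only so ZFS never writes to it (and
# skips reading space maps); -R mounts it under an alternate root.
zpool import -o readonly=on -R /mnt/recovery pool-ssd

# Then copy the data off to a healthy pool before rebuilding, e.g.:
rsync -a /mnt/recovery/pool-ssd/ /mnt/pool-HD/rescue/
```

If the read-only import succeeds, rescue the data first and only then destroy and recreate the pool.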

1 Like

The changelog mentions that “ZFS ARC memory allocations are updated and behave identically to TrueNAS CORE.” What has changed, what used to be the default before? Was this a bug?

Previously, SCALE limited ARC to 50% of RAM, whereas CORE allowed ARC to use most of the free RAM.

SCALE now behaves like CORE with respect to ARC sizing.
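On a running SCALE system you can check the effective ARC limit yourself (a sketch; these are the standard OpenZFS-on-Linux kstat and module-parameter paths):

```shell
# Current ARC size and ceiling, in bytes.
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# The tunable; 0 means ZFS picks its internal default, which is what
# changed between the old 50%-of-RAM cap and the new CORE-like sizing.
cat /sys/module/zfs/parameters/zfs_arc_max
```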


Uneventful direct upgrade from 24.04-RC1 to Release.

This is not an officially supported upgrade path I believe, but if it worked it would save a lot of time and bother.

Seems to have worked fine for me.

1 Like

So far the least painful update I’ve ever had. Nothing is obviously broken after ~3 hours of validations. 9/10 (because you can never be too sure)

EDIT: Lol - you see? This is why there is never a 10/10 score.


For everyone’s information, there have been over 5,000 updates in the first 2 days.

If you have had an issue, report it here, start a new thread and add a link to the thread. We’ll be triaging and looking for common issues.

1 Like

Upgraded from the latest cobia, on rebooting my apps service didn’t come back online. But another reboot fixed it, so all good.

Updated from RC1 to this. Didn’t notice any specific issues. Maybe fewer alerts?

I only get one alert now, which warns me not to use a USB device for the boot drive :smiling_face_with_tear:

After more than a decade, you guys still haven’t figured out that you need to use GPT labels to create your boot-pool. NEVER use partition names; that is bad practice.
Use GPT, GEOM, or UUID labels to create your pools. At the same time, I notice that when you select several disks at installation, you can NOT select the ZFS layout you want. Some people want striped mirrors (~RAID 10 in ZFS); others want RAIDZ1. Right now the installer makes a mirror of all the disks you select. NOT DONE.

It has to be:

zpool add boot-pool mirror /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

If you want to suggest improvements, we insist that you do so professionally and politely.

Only mirrors are supported because there’s no value to RAIDZ, never mind DRAID, for a boot pool. Don’t expect this to change.


Obviously it is done. But why do you care? Seriously, why would anyone want the boot pool in striped mirrors or RAIDZn?

No, you can’t. You’ve never been able to. And in the ten years or so since 9.3 introduced the ZFS boot pool, I can’t recall anyone else complaining about this.


And, if it’s done right, it mirrors the boot stuff and mirrors the swap too.

It is a fair criticism, however, that TrueNAS Core and SCALE still use the kernel identifier (sda2, sdb2, sdc2) for the boot-pool.

I’m not sure why they haven’t adopted GPTID or PARTUUID for the boot-pool after all these years.
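You can see which identifiers a pool was imported with on a live system (a sketch; `boot-pool` is the default name on SCALE):

```shell
# -P prints full vdev paths, so a pool built on kernel names shows
# /dev/sda2, /dev/sdb2, ... rather than stable labels.
zpool status -P boot-pool

# The stable alternatives the installer could reference instead:
ls -l /dev/disk/by-partuuid/
```

Kernel names like sda2 can shuffle between boots when disks are added or removed, which is why by-partuuid (or by-id) paths are preferred for pool members.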

Don’t know about the details these days, but a few years ago the boot process involved more or less blindly running zpool import freenas-boot and hoping for the best. See for instance that time I ended up with a FreeNAS 9.10 (FreeBSD 10) kernel running the userland from FreeNAS 9.3.

1 Like