I was using 24.04-RC.1, everything was normal.
I updated to 24.04.0 and now I have a problem with one of my pools: whether I restore a backup of my settings, or do a clean install and simply try to import the pool, the error is the same.
The worst part is that I can't go back to RC.1, because the kernel panic happens there too.
Since you are using desktop hardware with likely non-ECC RAM, please run a good, long memory test. While spacemap corruptions like this may indicate a bug in the code (in this case it may be related to the device removal feature of ZFS, which is not widely used), quite often they are caused by memory corruption. As for data recovery, you should be able to import the pool read-only from the command line, since read-only imports do not even read space maps, among many other things.
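A read-only import along the lines suggested above might look like this (the pool name `tank` and the altroot are placeholders; substitute your own pool name):

```shell
# Import the pool read-only so ZFS skips reading space maps and
# never attempts a write that could trip the corruption again.
# -R /mnt is an altroot so mountpoints land under /mnt instead of /.
zpool import -o readonly=on -R /mnt tank

# ...copy your data off somewhere safe, then export cleanly:
zpool export tank
```

If even the read-only import fails, `zpool import` with no arguments will at least list the pools and their device paths as the system currently sees them.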
The changelog mentions that "ZFS ARC memory allocations are updated and behave identically to TrueNAS CORE." What has changed, and what was the default before? Was this a bug?
After more than a decade, you guys still haven't figured out that you need to use GPT labels to create your boot-pool. NEVER use partition names. This is bad practice.
Use GPT, GEOM, or UUID labels to create your pools. At the same time, I notice that when you select several disks at installation, you can NOT select the type of ZFS structure you want. Some people want striped mirrors (~RAID 10 in ZFS); others want RAIDZ1. Right now the installer makes a mirror of all the disks you select. NOT DONE.
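For reference, the striped-mirror layout described above can be built by hand with `zpool create`; this is a sketch with placeholder disk names, not what the installer does:

```shell
# Two mirror vdevs striped together (~RAID 10 in ZFS terms).
# Disk paths are placeholders; on Linux prefer stable /dev/disk/by-id names.
zpool create tank \
  mirror /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
  mirror /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3

# RAIDZ1 alternative across the same four disks:
# zpool create tank raidz1 DISK0 DISK1 DISK2 DISK3
```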
Obviously it is done. But why do you care? Seriously, why would anyone want the boot pool in striped mirrors or RAIDZn?
No, you can’t. You’ve never been able to. And in the ten years or so since 9.3 introduced the ZFS boot pool, I can’t recall anyone else complaining about this.
Don’t know about the details these days, but a few years ago the boot process involved more or less blindly running `zpool import freenas-boot` and hoping for the best. See for instance that time I ended up with a FreeNAS 9.10 (FreeBSD 10) kernel running the userland from FreeNAS 9.3.
It is hard to stay polite when, after more than 10 years, developers still don't know you need to use GPT/GEOM labels and/or UUIDs to set up a zpool. When a disk fails, the system can rearrange the disks, and you end up with massive problems: data corruption and even total loss of the pool.
So every ZFS pool needs to be created properly, and one of the best ways IS with the UUID, and to adapt the installation script accordingly.
This command returns the UUID for sda1 on Linux:

```shell
lsblk -n /dev/sda1 -o UUID
```
We do expect this to change, because it is bad practice to use the partition names assigned by Linux (sda1, etc.) to create pools; it invites problems when there is a failure.
Besides, there is no excuse here: XigmaNAS (FreeBSD) and even Proxmox also use labels, for the reason I explained. The latter uses GPT labels.
This is for FreeBSD, but the same rules apply to Debian Linux.
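On Linux the stable identifiers the posts above argue for live under /dev/disk/. A sketch of inspecting them and using them at pool creation (pool and device names are placeholders, not the TrueNAS installer's actual commands):

```shell
# List the persistent names for each disk and partition.
ls -l /dev/disk/by-id/
ls -l /dev/disk/by-partuuid/

# Create a mirror using stable by-id paths instead of sdX names,
# so a device reshuffle after a disk failure cannot mislabel members.
zpool create boot-pool mirror \
  /dev/disk/by-id/ata-DISK0-part3 \
  /dev/disk/by-id/ata-DISK1-part3
```

Note that ZFS itself identifies pool members by on-disk metadata, so the by-id paths mainly keep `zpool status` output unambiguous for the administrator.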
The question of why people would run things on ZFS RAID-Z1 or a striped mirror (~RAID 10 in ZFS) is a matter of taste, and extra security in some cases, but I do agree that in most cases a two- or three-way mirror is fine.
I will have the team take a look at this. But I suspect it's what Dan mentioned: if it ain't broke, don't fix it. We've had pretty much zero reason to change this so far in terms of real issues being caused by it. However, we can at least review it and see if it's time to modernize that soon. The installer is getting a lot of love in 24.10 anyway, so the timing is good.
Are you referring to the device names being used to construct the boot-pool, or to the feature of allowing more devices beyond the 2-drive mirror vdev for the boot-pool?
I just upgraded last night, and everything went smoothly. My only issue at the moment is that each page in the web UI takes an absurdly long time to load. The dashboard takes >30 seconds, and the same goes for datasets, storage, apps, etc.
Device names for the boot-pool is what I was referring to. But the request for anything beyond a mirror for boot is pretty surprising; all these years and I've not seen any real demand for that.