Unable to install or re-install on new or old hardware which was working before

Quick story time here.

Two weeks ago I moved my installation (SCALE 24.10.2) from a C2750D4I, which has been running happily for years across multiple versions of TrueNAS, to an X570D4U.

I got a couple of hangs on the new board over a couple of weeks and thought, OK, I will factory reset the BIOS/UEFI and reinstall / reload the config.

I have done this a few times over the years of using TrueNAS so it is not my first rodeo with the process.

Long story short, I could not get TrueNAS to reinstall on hardware it had been running on a few minutes before.

I tried multiple downloads of the ISO, multiple USB keys, and different USB ports. I tried different settings in the BIOS/UEFI, but no luck.
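One thing worth checking at this stage (I can't say it was the cause here, and the filename below is only an example) is that the ISO actually matches the published checksum:

sha256sum TrueNAS-SCALE-24.10.2.iso   # compare against the SHA256 value listed next to the download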

OK, I need to get it back up and working. I will go back to the old board.

I reinstall, having changed nothing in the BIOS of the old board, and I get the SAME failures as on the new board! With the same software it was running happily two weeks ago.

So now what? I have two sets of hardware, both of which I know CAN run TrueNAS 24.10.2, but neither of which will take a fresh install.

Any pointers? I do need to get this back up and working ASAP.

I can’t include a link to the image of the errors or the image itself…

spl: loading out-of-tree module taints kernel
zfs: module license 'CDDL' taints kernel
Disabling lock debugging due to kernel taint
zfs: module license taints kernel
loaded module v2.2.99-1 zfs pool version 5000 zfs filesystem version 5
begin: sleeping for … done
importing zfs root pool 'boot-pool' failure: 1
command: sbin/zpool import -N -f 'boot-pool'
message: cannot import 'boot-pool': no such pool available
failed to import 'boot-pool'
manually import the pool and exit.

Busybox v1.35.0…

initramfs…
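For context, the recovery it asks for at that prompt would normally just be a manual import followed by exiting the shell to continue booting, roughly:

zpool import -N -f boot-pool
exit

but with 'no such pool available' there is nothing to import, so that route goes nowhere here.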

Are you using the same disk as the boot pool on both systems?

Yes.

It's a mirrored pair of Crucial BX500 240GB drives (CT240BX500SSD1).

I have made progress. I went back and tried installing 23.10. It failed again, but with a slightly more useful error about being unable to write to a specific partition on one of the boot disks.

This led me to pull both drives and attach them to my PC. One was clear, but the other had a bunch of weird small partitions on it.

It seems the installer was not wiping the drives properly.

I cleared the partitions with gparted and tried again.
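For anyone else hitting this without a spare PC to run gparted on, the rough shell equivalent would be something like this (the device name is an example only, double-check it before wiping anything):

sgdisk --zap-all /dev/sdX   # destroy the GPT and MBR partition tables
wipefs -a /dev/sdX          # clear any remaining filesystem / RAID / ZFS signatures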

Now TrueNAS installed. I have upgraded from 23.10 → 24.04.2.5 → 24.10.2 and re-imported my data pool.

I am going to set up the rest of my config manually as it is not too complicated.

Interestingly, when booting I can see the same 'kernel taint' messages I had before, but it just moves past them.

My boot disks now show as ‘Disks with exported pools’ in the storage dashboard. Should I create a new pool for these?

While I am back up and running, I am rather nervous about the message in the GUI saying both boot disks have exported pools.
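If it helps to see what the dashboard is reacting to, the leftover ZFS labels can be inspected from a shell, something like this (partition names are just examples from my layout, and I have not confirmed this is exactly what the GUI keys off):

lsblk -o NAME,SIZE,FSTYPE,LABEL /dev/sda /dev/sdf   # show which partitions still carry zfs_member signatures
zdb -l /dev/sda3                                    # dump any ZFS label present on a given partition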

root@artifact[/home/admin]# zpool status -v
  pool: aqueduct
 state: ONLINE
  scan: scrub repaired 0B in 00:13:24 with 0 errors on Sun Mar 16 01:56:19 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        aqueduct                                  ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            7e76b40b-ce61-4e37-b30d-6f7ecaa6296a  ONLINE       0     0     0
            927f23f9-1450-4ef3-a4b4-c91c01e14ae9  ONLINE       0     0     0
            0a727026-95b9-4127-9132-fea5299f5e4c  ONLINE       0     0     0
            ee3e76f3-238b-41e1-80a4-73e662d1acd1  ONLINE       0     0     0
            1f9dfff2-a194-4a7b-9fbd-447a61fa7adb  ONLINE       0     0     0
            592fa13c-6932-4242-98f8-c6ae883c5c39  ONLINE       0     0     0
          raidz1-1                                ONLINE       0     0     0
            91918f13-2443-4c71-90be-78959c1b91e0  ONLINE       0     0     0
            d5878b70-76d5-48bf-95bb-303710e4438a  ONLINE       0     0     0
            2b90f0ac-7c56-4488-93f6-864dc89367f2  ONLINE       0     0     0
            3aeeda9a-a41f-4c81-a868-e569d81a8533  ONLINE       0     0     0
            74eb10d4-a7e4-4d5b-a6ea-19ea97d24604  ONLINE       0     0     0
            b978f07d-8351-4142-95d2-4d6c06475567  ONLINE       0     0     0
        logs
          e436e267-de55-4cca-bab7-e1efd8a4be7e    ONLINE       0     0     0
        spares
          b40965bb-3ae6-4024-bec9-530b5d474370    AVAIL   

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdf3    ONLINE       0     0     0
            sda3    ONLINE       0     0     0

errors: No known data errors
root@artifact[/home/admin]# 

Can I reboot safely or am I just going to be dumped at an initramfs prompt?

Seems to be the same problem here.

IMHO it is exactly the same problem as the one you linked.
Reading what they discuss in the Jira ticket, in your place I would not take the risk: I would bite the bullet and try wiping the disks again at a low level. I don't know if you already tried what they suggest there (blkdiscard /dev/nvme0n1); also, try installing 24.10.2 directly without the upgrade path, if possible.
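To be concrete, since your boot disks are SATA SSDs rather than NVMe, the command would target the sdX device instead (example name only, and it erases the entire disk):

blkdiscard /dev/sdX   # TRIM/discard the whole SSD, removing data and any stale labels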

I’ll give it a go this coming weekend. I don’t have the time or inclination before then. I will update this thread when I do.

At least now I know why my problem was happening.
