Am I correct that these drives were previously used in a TrueNAS Core system, and that you then created a new pool in TrueNAS Scale?
To elaborate on the steps for each option:
Option 1: Recreate the pool
This is the simplest fix if you don’t have any data in the pool yet. Note that it destroys all data in the pool.
- Export the pool in the UI. Do not select “Destroy data on this pool”. Tick the “Confirm Export/Disconnect” checkbox and press “Export/Disconnect”.
- Go to the disks screen. Click on each disk you want to use in your pool (sda, sdb, sdc, sdd) and press the Wipe button; selecting the “Quick” method in the dropdown is sufficient (a rough command-line equivalent is sketched after this list).
- After all four disks have been wiped, create the pool as you did initially. You’ll have no more issues rebooting.
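If you prefer the shell over the UI, a quick wipe boils down to clearing the old signatures and partition tables. A minimal sketch, assuming the four member disks really are sda through sdd (double-check with lsblk before running anything this destructive):

for d in sda sdb sdc sdd; do
    wipefs --all /dev/${d}?*   # clear signatures on each partition first (sda1, sda2, ...)
    wipefs --all /dev/${d}     # then clear the partition table signature on the disk itself
done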
Option 2: Erase file system marker
This will fix the reboot issue without deleting any data.
Basically, you do a careful, targeted wipefs on each partition. Your probe output showed that an old RAID signature was detected.
You can first issue the following command to identify file system markers:
wipefs --no-act /dev/DISKPART
This should give you an output consisting of mostly zfs_member entries and a single linux_raid_member entry:
DEVICE OFFSET  TYPE              UUID                                 LABEL
xxx    0x1000  linux_raid_member cf174fb4-6f7b-943b-c217-ec7b7cbbea5c truenas:swap0
xxx    0x3f000 zfs_member        2099482002614675549                  boot-pool
xxx    0x3e000 zfs_member        2099482002614675549                  boot-pool
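To check every partition in one go you could loop over the disks. This is read-only thanks to --no-act; it assumes the usual TrueNAS Core layout of a swap partition 1 and a data partition 2 on each disk, so adjust to whatever lsblk actually shows:

for d in sda sdb sdc sdd; do
    wipefs --no-act /dev/${d}1 /dev/${d}2   # only prints detected signatures, erases nothing
done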
You’d then proceed by wiping the linux_raid_member signature:
wipefs --backup --all -t linux_raid_member /dev/DISKPART
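The --backup option writes each erased signature to a file in your home directory, named along the lines of wipefs-<device>-<offset>.bak. Per the wipefs man page, such a backup can be written back with dd if anything goes wrong; a sketch with a hypothetical device name, using the 0x1000 offset from the sample output above:

dd if=~/wipefs-sda1-0x00001000.bak of=/dev/sda1 seek=$((0x00001000)) bs=1 conv=notrunc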
But it is safer if you post the wipefs --no-act .. output here first, so we can verify that it is indeed safe to erase.