Hello, I'm not a Linux expert, but I understand that this will destroy all my data.
I tried the suggested export-and-import workaround, which works, but it is really annoying.
Does anyone know the root cause of the issue?
Option 2 - using the wipefs command - will not erase your data when used with the -n (no-act) parameter. It will only display any additional filesystem signatures (from previously used disks) that may be interfering with pool mounting.
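For example, to inspect one of the data partitions without writing anything (sda1 is taken from the blkid output in this thread; adjust the device name to your system):
sudo wipefs -n /dev/sda1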
Here is the output from blkid:
truenas_admin@truenas[~]$ sudo blkid --probe /dev/sda1
/dev/sda1: UUID="b29fa450-f494-bc2d-f917-e66c654e69ea" UUID_SUB="1a157d0c-4913-1601-b64f-d3db93975ba6" LABEL="truenas:swap1" VERSION="1.2" TYPE="linux_raid_member" USAGE="raid" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="data" PART_ENTRY_UUID="2221560f-91a9-4e9b-807e-8df5fbc0ef10" PART_ENTRY_TYPE="6a898cc3-1dd2-11b2-99a6-080020736631" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="30001852416" PART_ENTRY_DISK="8:0"
truenas_admin@truenas[~]$ sudo blkid --probe /dev/sdb1
/dev/sdb1: UUID="2274b232-16b5-8348-c501-5fbdb474bbbe" UUID_SUB="b2ba7f31-a809-c83b-1faa-5fcdbe4f9b66" LABEL="truenas:swap0" VERSION="1.2" TYPE="linux_raid_member" USAGE="raid" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="data" PART_ENTRY_UUID="c801a42e-9304-4169-9013-c9ba867041cb" PART_ENTRY_TYPE="6a898cc3-1dd2-11b2-99a6-080020736631" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="30001852416" PART_ENTRY_DISK="8:16"
truenas_admin@truenas[~]$ sudo blkid --probe /dev/sdc1
/dev/sdc1: UUID="b29fa450-f494-bc2d-f917-e66c654e69ea" UUID_SUB="b45c0571-c9c5-669b-1753-04df5a8b065d" LABEL="truenas:swap1" VERSION="1.2" TYPE="linux_raid_member" USAGE="raid" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="data" PART_ENTRY_UUID="19ca012b-c023-4ff1-936b-a31951418583" PART_ENTRY_TYPE="6a898cc3-1dd2-11b2-99a6-080020736631" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="30001852416" PART_ENTRY_DISK="8:32"
truenas_admin@truenas[~]$ sudo blkid --probe /dev/sdd1
/dev/sdd1: UUID="2274b232-16b5-8348-c501-5fbdb474bbbe" UUID_SUB="8905edb4-88c6-c9d4-778f-91221ca1966a" LABEL="truenas:swap0" VERSION="1.2" TYPE="linux_raid_member" USAGE="raid" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="data" PART_ENTRY_UUID="23fe06e8-17f3-498e-b042-fe3849a57903" PART_ENTRY_TYPE="6a898cc3-1dd2-11b2-99a6-080020736631" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="30001852416" PART_ENTRY_DISK="8:48"
truenas_admin@truenas[~]$ sudo blkid --probe /dev/nvme0n1p1
/dev/nvme0n1p1: PART_ENTRY_SCHEME="gpt" PART_ENTRY_UUID="80674f7f-7936-4379-b2c9-420985a11329" PART_ENTRY_TYPE="21686148-6449-6e6f-744e-656564454649" PART_ENTRY_FLAGS="0x4" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="4096" PART_ENTRY_SIZE="2048" PART_ENTRY_DISK="259:0"
truenas_admin@truenas[~]$ sudo blkid --probe /dev/nvme0n1p2
/dev/nvme0n1p2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="A290-88A5" VERSION="FAT32" BLOCK_SIZE="512" TYPE="vfat" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_UUID="1a40d718-4693-470a-8731-2f8693ada95d" PART_ENTRY_TYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="6144" PART_ENTRY_SIZE="1048576" PART_ENTRY_DISK="259:0"
truenas_admin@truenas[~]$ sudo blkid --probe /dev/nvme0n1p3
/dev/nvme0n1p3: VERSION="5000" LABEL="boot-pool" UUID="13847354830264913250" UUID_SUB="15545680114051809662" BLOCK_SIZE="4096" TYPE="zfs_member" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_UUID="f9eb5e77-8408-4310-aa1b-2096c6e9b844" PART_ENTRY_TYPE="6a898cc3-1dd2-11b2-99a6-080020736631" PART_ENTRY_NUMBER="3" PART_ENTRY_OFFSET="1054720" PART_ENTRY_SIZE="936648335" PART_ENTRY_DISK="259:0"
truenas_admin@truenas[~]$
What are the commands? Do you mean using wipefs?
Am I correct that these drives were previously used in a TrueNAS Core system, and that you then created a new pool in TrueNAS Scale?
To elaborate on the steps for each option in more detail:
Option 1: Recreate the pool
This is useful if you don’t have any data in the pool. This does destroy all data in the pool.
- Export the pool in the UI. Do not select “Destroy data on this pool”. Tick the “Confirm Export/Disconnect” checkbox and press “Export/Disconnect”.
- Go to the Disks screen. Click on each disk you want to use in your pool (sda, sdb, sdc, sdd) and press the wipe button; it’s sufficient to select the “quick” method in the dropdown.
- After all four disks have been wiped, create the pool as you did initially. You should have no more issues rebooting. (For the curious, there is a rough CLI sketch of the wipe step below.)
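To my understanding (an assumption about the UI, not confirmed from the middleware code), the quick wipe essentially clears the partition table and any leftover filesystem signatures, roughly what a whole-disk wipefs would do:
sudo wipefs --all --backup /dev/sda   # destructive: erases GPT and all signatures; repeat for sdb, sdc, sdd
Still use the UI wipe, so the TrueNAS middleware stays aware of the disk state; this is only to illustrate what happens.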
Option 2: Erase file system marker
This will fix the reboot issue without deleting any data.
Basically, you do a careful, targeted wipefs on each partition. Your probe output showed that an old RAID signature is still present.
You can first issue the following command to identify file system markers:
wipefs --no-act /dev/DISKPART
This should give you an output consisting of mostly zfs_member entries and a single linux_raid_member entry:
DEVICE OFFSET TYPE UUID LABEL
xxx 0x1000 linux_raid_member cf174fb4-6f7b-943b-c217-ec7b7cbbea5c truenas:swap0
xxx 0x3f000 zfs_member 2099482002614675549 boot-pool
xxx 0x3e000 zfs_member 2099482002614675549 boot-pool
You’d then proceed by wiping the linux_raid_member signature:
wipefs --backup --all -t linux_raid_member /dev/DISKPART
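The --backup flag makes wipefs save each erased signature to a file such as ~/wipefs-sda1-0x00001000.bak (device name and offset filled in from what it actually removed). Should anything go wrong, the signature can be written back with dd, roughly like this (the file name and offset must match the backup wipefs created on your system):
dd if=~/wipefs-sda1-0x00001000.bak of=/dev/sda1 seek=$((0x00001000)) bs=1 conv=notrunc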
But it is safer if you post the wipefs --no-act output here first, so we can verify that it is indeed safe to erase.
Please post the output using Preformatted Text (Ctrl+E); it’s the (</>) button on the toolbar. It makes the output easier to read.
Hello bacon, thank you.
At first I thought I had a hardware issue with the HBA controller, but after various tests, and after installing Windows Server 2022, it was clear that this was not the problem. Tired and overwhelmed after a second day of being unable to complete even simple setup of TrueNAS Scale, I also tried TrueNAS Core, but it could not recognize my Broadcom 25 Gbit network interface. So I came back to TrueNAS Scale (Linux drivers are better supported there), got stuck with the same issue, and correctly suspected it is a bug in the software. Now I’ll try your Option 1; hopefully that helps deliver a solution. BTW, I’m looking forward to this issue being fixed in the next release.
Hello, I experienced the exact same problem as described here. I managed to solve it in a way that did not require completely recreating the pool and deleting all the files.
Sadly, I could not try Option 2 as described by user bacon, as both the blkid and wipefs commands gave way too little output to manually fix the filesystem issues in my case.
The solution I propose works for a two-HDD mirror setup. I cannot guarantee that a similar procedure will work for a RAID setup with more disks as well, but to the best of my knowledge it should.
Since I am a new user and cannot post links yet: here is the title of the discussion on this forum where I posted detailed instructions on how to resolve the issue without having to recreate the pool from scratch: “Have to export/import pool each reboot”.
Thanks for posting this! Strangely, this same pool that gives me trouble on boot also experienced a disconnect in the middle of the day, the only time it has happened other than at boot. The same export/import process “resolved” that too.
I reviewed your post in the other thread and will definitely give it a shot - thanks for laying it out nicely.
Super-helpful post! I encountered the same issue with version 24.10.0.2. Removing and recreating the pool was the only solution that worked for me. However, this approach doesn’t inspire confidence in the product’s reliability. Thankfully, I had a backup, but reliability should be a core feature of any NAS.
I also ran into the same problem on TrueNAS Scale 24.10.0.2. Even after I configured my UEFI following the method provided by @donnyG, the problem could not be resolved. In the end, after I reinstalled TrueNAS Scale 24.04.2.5, the problem no longer appeared.
CPU: i5-4590
Motherboard: ASUS B85M-PLUS
Storage: 4 TB WD Red HDD × 4 (RAIDZ2)
My output is:
DEVICE OFFSET TYPE UUID LABEL
sdg 0x200 gpt
sdg 0xcbbbffffe00 gpt
sdg 0x1fe PMBR
I have 6 disks, all formerly used in a Synology before I created this pool. Any tips?
You need to run the command on the partition. In your case it would be wipefs --no-act /dev/sdg1.
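If you’re unsure which partitions sit on a disk, lsblk lists them along with the detected filesystem (read-only, writes nothing):
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdg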
Gotcha, thanks! That output is below. I am guessing I need to remove that ext4 reference, is that correct?
DEVICE OFFSET TYPE UUID LABEL
sdg1 0x438 ext4 89dabcc0-2604-49f1-b380-ed22df1291e2 1.42.6-15217
sdg1 0x3f000 zfs_member 6892834326680007410 Data
sdg1 0x3e000 zfs_member 6892834326680007410 Data
sdg1 0x3d000 zfs_member 6892834326680007410 Data
sdg1 0x3c000 zfs_member 6892834326680007410 Data
sdg1 0x3b000 zfs_member 6892834326680007410 Data
sdg1 0x3a000 zfs_member 6892834326680007410 Data
sdg1 0x39000 zfs_member 6892834326680007410 Data
sdg1 0x38000 zfs_member 6892834326680007410 Data
sdg1 0x37000 zfs_member 6892834326680007410 Data
sdg1 0x36000 zfs_member 6892834326680007410 Data
sdg1 0x35000 zfs_member 6892834326680007410 Data
sdg1 0x34000 zfs_member 6892834326680007410 Data
sdg1 0x33000 zfs_member 6892834326680007410 Data
sdg1 0x32000 zfs_member 6892834326680007410 Data
sdg1 0x31000 zfs_member 6892834326680007410 Data
sdg1 0x30000 zfs_member 6892834326680007410 Data
sdg1 0x2f000 zfs_member 6892834326680007410 Data
sdg1 0x2e000 zfs_member 6892834326680007410 Data
sdg1 0x2d000 zfs_member 6892834326680007410 Data
sdg1 0x2c000 zfs_member 6892834326680007410 Data
sdg1 0x2b000 zfs_member 6892834326680007410 Data
sdg1 0x2a000 zfs_member 6892834326680007410 Data
sdg1 0x29000 zfs_member 6892834326680007410 Data
sdg1 0x28000 zfs_member 6892834326680007410 Data
sdg1 0x27000 zfs_member 6892834326680007410 Data
sdg1 0x26000 zfs_member 6892834326680007410 Data
sdg1 0x25000 zfs_member 6892834326680007410 Data
sdg1 0x24000 zfs_member 6892834326680007410 Data
sdg1 0x23000 zfs_member 6892834326680007410 Data
sdg1 0x7f000 zfs_member 6892834326680007410 Data
sdg1 0x7e000 zfs_member 6892834326680007410 Data
sdg1 0x7d000 zfs_member 6892834326680007410 Data
sdg1 0x7c000 zfs_member 6892834326680007410 Data
sdg1 0x7b000 zfs_member 6892834326680007410 Data
sdg1 0x7a000 zfs_member 6892834326680007410 Data
sdg1 0x79000 zfs_member 6892834326680007410 Data
sdg1 0x78000 zfs_member 6892834326680007410 Data
sdg1 0x77000 zfs_member 6892834326680007410 Data
sdg1 0x76000 zfs_member 6892834326680007410 Data
sdg1 0x75000 zfs_member 6892834326680007410 Data
sdg1 0x74000 zfs_member 6892834326680007410 Data
sdg1 0x73000 zfs_member 6892834326680007410 Data
sdg1 0x72000 zfs_member 6892834326680007410 Data
sdg1 0x71000 zfs_member 6892834326680007410 Data
sdg1 0x70000 zfs_member 6892834326680007410 Data
sdg1 0x6f000 zfs_member 6892834326680007410 Data
sdg1 0x6e000 zfs_member 6892834326680007410 Data
sdg1 0x6d000 zfs_member 6892834326680007410 Data
sdg1 0x6c000 zfs_member 6892834326680007410 Data
sdg1 0x6b000 zfs_member 6892834326680007410 Data
sdg1 0x6a000 zfs_member 6892834326680007410 Data
sdg1 0x69000 zfs_member 6892834326680007410 Data
sdg1 0x68000 zfs_member 6892834326680007410 Data
sdg1 0x67000 zfs_member 6892834326680007410 Data
sdg1 0x66000 zfs_member 6892834326680007410 Data
sdg1 0x65000 zfs_member 6892834326680007410 Data
sdg1 0x64000 zfs_member 6892834326680007410 Data
sdg1 0x63000 zfs_member 6892834326680007410 Data
sdg1 0x62000 zfs_member 6892834326680007410 Data
sdg1 0x61000 zfs_member 6892834326680007410 Data
sdg1 0x60000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdbf000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdbe000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdbd000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdbc000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdbb000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdba000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb9000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb8000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb7000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb6000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb5000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb4000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb3000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb2000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb1000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdb0000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdaf000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdae000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdad000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdac000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdab000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdaa000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda9000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda8000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda7000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda6000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda5000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda4000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda3000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda2000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda1000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfda0000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdff000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdfe000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdfd000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdfc000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdfb000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdfa000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf9000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf8000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf7000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf6000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf5000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf4000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf3000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf2000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf1000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdf0000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdef000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdee000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfded000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdec000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdeb000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfdea000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde9000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde8000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde7000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde6000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde5000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde4000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde3000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde2000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde1000 zfs_member 6892834326680007410 Data
sdg1 0xcbbbfde0000 zfs_member 6892834326680007410 Data
Correct. The command to wipe it would be:
wipefs --all --backup -t ext4 /dev/sdg1
Repeat the same for the other drives.
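If all six disks carry the same stale ext4 signature, a small shell loop saves some typing. A sketch, assuming the pool partitions are sdg1 through sdl1 (an assumption; check yours, and run each with --no-act before wiping for real):
for part in /dev/sd{g..l}1; do
  sudo wipefs --all --backup -t ext4 "$part"   # -t ext4: only the stale ext4 signature is touched
done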
Thanks that did it! Pool is online and good after rebooting now.
For the benefit of others, I ran the above command with --no-act first to check what it would do before it did it. Also, you need to export/disconnect the pool before running the command.
Hi Bacon,
I have the exact same issue. I managed to run the “wipefs --no-act” command, but I’m unsure which file system markers are the cause of the problem. My config:
NVMe-Mirror (1 TB - brand new) Data-Storage
HDD-Mirror (8 TB - transferred from Synology DS218J) Data-Storage
BootDisk (SSD) (3 Partitions - Boot installation on Part1)
I can’t identify which file system markers on which partitions have to be wiped for things to function correctly. I’m not a native Linux shell user and feel insecure using commands I don’t “understand”.
Either the WD Red drives or the boot SSD must be wiped; the NVMe disks were brand new on installation. I need some advice, as I don’t want to destroy the pool/vdev.
Please post the output.
Of which disk exactly - sda or the NVMe?
I figured that you wanted help interpreting the result of the wipefs command; if so, you need to post the result here so we can see it.
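If it helps to collect everything in one paste, a loop like this prints the wipefs table for each partition (read-only; the globs are a guess, adjust them to your actual drives):
for part in /dev/sd?1 /dev/nvme?n1p?; do
  echo "== $part"
  sudo wipefs --no-act "$part"
done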