Hi,
I had TrueNAS-SCALE-24.10.0.2 running from a USB stick, with a raidz1 pool and a single-disk pool up and running. (This single disk will become a mirror soon…)
Now I managed to install a new SSD via USB adapter as the new OS disk, reinstalled, and restored the original configuration.
All looks good, everything is there, EXCEPT:
the single-disk pool fails with “offline VDEVs”, although the disk is installed and is even listed under “available disks”…
this issue is quite similar to this:
Hi everybody,
as a newbie I am only allowed to post 3 times to a thread (sorry, that’s nonsense), so I have to abandon the old one:
here is what I did after the useful tips from @etorix and @SmallBarky
I removed the 1TB cache drive from my dataset
I wiped the (in general usable) 4th drive
I added it again to start resilvering with it
after another 10hrs of wait time now ALL WORKS.
It was still a bug, I’m sure, but at least I’m up and running now with all 4 raidz1 drives after the expansio…
something is buggy, it seems… IMHO
The missing disk is /dev/sdd; it is 7.3 TB and shows up in fdisk and lsblk, but not in blkid or zpool list:
truenas_admin@truenaseva:~$ sudo zpool list -v
NAME                                       SIZE   ALLOC    FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP    HEALTH  ALTROOT
RAID_EVA                                  49.1T   29.6T   19.5T        -         -    0%    60%  1.00x    ONLINE  /mnt
  raidz1-0                                49.1T   29.6T   19.5T        -         -    0%  60.2%      -    ONLINE
    357bc1c8-6508-4081-9db7-71be4817ae36  16.4T       -       -        -         -     -      -      -    ONLINE
    3408cad4-e869-4abe-b72d-333ecbbb150e  16.4T       -       -        -         -     -      -      -    ONLINE
    f0cc6896-7ad9-424e-be95-519f7001f30d  18.2T       -       -        -         -     -      -      -    ONLINE
boot-pool                                  111G   2.42G    109G        -         -    0%     2%  1.00x    ONLINE  -
  sde3                                     111G   2.42G    109G        -         -    0%  2.17%      -    ONLINE
truenas_admin@truenaseva:~$ sudo lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda        8:0  1  16.4T  0 disk
└─sda1     8:1  1  16.4T  0 part
sdb       8:16  1  16.4T  0 disk
└─sdb1    8:17  1  16.4T  0 part
sdc       8:32  1  18.2T  0 disk
└─sdc1    8:33  1  18.2T  0 part
sdd       8:48  1   7.3T  0 disk
└─sdd1    8:49  1   7.3T  0 part
sde       8:64  0 111.8G  0 disk
├─sde1    8:65  0     1M  0 part
├─sde2    8:66  0   512M  0 part
└─sde3    8:67  0 111.3G  0 part
truenas_admin@truenaseva:~$ sudo blkid
/dev/sdb1: LABEL="RAID_EVA" UUID="16625595649128707101" UUID_SUB="13597772573350973926" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="3408cad4-e869-4abe-b72d-333ecbbb150e"
/dev/sde2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="F8AA-28F0" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="f0dd55a6-cc16-4f0e-a947-32ed321ce5c2"
/dev/sde3: LABEL="boot-pool" UUID="7279405744374549895" UUID_SUB="12047905366553194668" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="3f8575de-34b9-4a64-800e-e2969f8fe99d"
/dev/sdc1: LABEL="RAID_EVA" UUID="16625595649128707101" UUID_SUB="9318617404841709141" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="f0cc6896-7ad9-424e-be95-519f7001f30d"
/dev/sda1: LABEL="RAID_EVA" UUID="16625595649128707101" UUID_SUB="312605978666413138" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="357bc1c8-6508-4081-9db7-71be4817ae36"
/dev/sde1: PARTUUID="89464833-68dc-48c8-92ea-79ace6989a6a"
truenas_admin@truenaseva:~$ sudo fdisk -l /dev/sdd
Disk /dev/sdd: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000AS0002-1NA
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7447A4E1-EB66-4EB7-B90A-D1BF19D89A38
Device    Start         End     Sectors  Size Type
/dev/sdd1  2048 15628052479 15628050432  7.3T Solaris /usr & Apple ZFS
truenas_admin@truenaseva:~$
greets
Mike
bacon
November 23, 2024, 3:53pm
There is an issue when a partition contains multiple file system signatures. We’ve had multiple cases already in this forum.
Please post the output of the following commands:
sudo blkid --probe /dev/sdd1
sudo wipefs /dev/sdd1
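Both of these are read-only at this stage; as a quick annotated sketch (same device as in the listing above, nothing is erased yet):

sudo blkid --probe /dev/sdd1   # low-level scan of the partition; reports an "ambivalent result" if it finds more than one signature
sudo wipefs /dev/sdd1          # without --all this only lists the signatures and their offsets, it does not wipe anything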
Hi,
thanks, maybe… but I created everything freshly from scratch just yesterday. Now it's already broken?
truenas_admin@truenaseva:~$ sudo blkid --probe /dev/sdd1
blkid: /dev/sdd1: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)
truenas_admin@truenaseva:~$ sudo wipefs /dev/sdd1
DEVICE OFFSET TYPE UUID LABEL
sdd1 0x438 ext4 ded4fed7-c79f-490f-9739-946132988a48
sdd1 0x35000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x34000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x33000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x28000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x27000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x75000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x74000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x73000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x68000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x67000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x66000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x65000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x64000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023b5000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023b4000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023b3000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023a8000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023a7000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023a6000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023a5000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023a4000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023f5000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023f4000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023f3000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023e8000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023e7000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023e6000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023e5000 zfs_member 8305131649943305919 EIGHT_TB
sdd1 0x747023e4000 zfs_member 8305131649943305919 EIGHT_TB
bacon
November 23, 2024, 5:01pm
Yeah, it's really bad UX. Hopefully it gets fixed in future releases.
You can use this command to wipe the ext4 marker:
sudo wipefs --backup --all -t ext4 /dev/sdd1
The marker is within the first 8 KB, which should be unused by ZFS, so it should be safe to erase.
After a reboot everything should work.
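A sketch of how to verify the result and keep a way back (wipefs --backup should save the erased bytes as wipefs-sdd1-<offset>.bak in the home directory of the user it runs as; the exact path and file name are assumptions here):

sudo wipefs /dev/sdd1                          # before: the ext4 entry at offset 0x438 is still listed
sudo wipefs --backup --all -t ext4 /dev/sdd1   # erase only the ext4 signature, saving the overwritten bytes
sudo wipefs /dev/sdd1                          # after: only the zfs_member entries should remain
sudo find /root "$HOME" -maxdepth 1 -name 'wipefs-sdd1-*.bak'   # locate the backup file (where it lands depends on how sudo sets HOME)
# restore, only if ever needed (offset as reported by the erase message):
# sudo dd if=/path/to/wipefs-sdd1-0x00000438.bak of=/dev/sdd1 seek=$((0x438)) bs=1 conv=notrunc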
@bacon YOU ARE MY HERO!
It removed those stupid 2 bytes of ext4 crap… Why were they there? When did they appear? I didn't even see them…
thanks & regards,
Mike
OK, there is a severe bug in SCALE-24.10.0.2…
Again, on a different TrueNAS MicroServer, I added two new 8TB disks to create a mirror.
Again, it reports that one drive is missing, although it is listed as available to be assigned again.
Using the trick from @bacon, I removed the ext4 bytes and fixed the issue, as shown below…
truenas_admin@truenas:~$ sudo blkid
/dev/sdd1: LABEL="MARBLES" UUID="16445675959518786954" UUID_SUB="2468042289642474092" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="4efafb37-2ac6-4d83-90ef-f3aa623eef5c"
/dev/sdb1: LABEL="RAID" UUID="1901115498841768824" UUID_SUB="12922634147688021886" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="a83058de-032f-4f81-9c0a-f62bb50cc359"
/dev/sdg1: LABEL="RAID" UUID="1901115498841768824" UUID_SUB="8001723523911588941" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="ca7fea98-7ecb-45de-9b1d-efd652dbc15d"
/dev/sde2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="4D2C-4632" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="d2c1c308-71c0-449e-bfcd-6d42c7fa40a7"
/dev/sde3: LABEL="boot-pool" UUID="808278110043727968" UUID_SUB="16749142075965047473" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="380ab19c-2b56-429b-a7e9-c2a6680635e5"
/dev/sdc1: LABEL="RAID" UUID="1901115498841768824" UUID_SUB="538098376802416322" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="bb9f4f4b-9457-4f2b-a398-b9eab36d5e96"
/dev/sda1: LABEL="RAID" UUID="1901115498841768824" UUID_SUB="4407132540304298790" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="6fb1e42b-76a5-44a4-b63a-c608e643435a"
/dev/sde1: PARTUUID="2a2850c3-72a0-4a42-8828-667eb237d350"
truenas_admin@truenas:~$ sudo blkid --probe /dev/sdf1
blkid: /dev/sdf1: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)
truenas_admin@truenas:~$ sudo wipefs /dev/sdf1
DEVICE OFFSET TYPE UUID LABEL
sdf1 0x438 ext4 88ddaebe-e641-4891-9f34-1a0bac04626e
sdf1 0x28000 zfs_member 16445675959518786954 MARBLES
sdf1 0x27000 zfs_member 16445675959518786954 MARBLES
sdf1 0x68000 zfs_member 16445675959518786954 MARBLES
sdf1 0x67000 zfs_member 16445675959518786954 MARBLES
sdf1 0x66000 zfs_member 16445675959518786954 MARBLES
sdf1 0x65000 zfs_member 16445675959518786954 MARBLES
sdf1 0x64000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023a8000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023a7000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023a6000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023a5000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023a4000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023e8000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023e7000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023e6000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023e5000 zfs_member 16445675959518786954 MARBLES
sdf1 0x747023e4000 zfs_member 16445675959518786954 MARBLES
truenas_admin@truenas:~$ sudo wipefs --backup --all -t ext4 /dev/sdf1
/dev/sdf1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
truenas_admin@truenas:~$
Broadcast message from root@truenas (Wed 2024-11-27 13:15:21 CET):
The system will reboot now!
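For anyone else landing here with the same symptom, the fix from both cases boils down to this sketch (replace /dev/sdX1 with the affected pool member partition; the device name here is only a placeholder):

sudo blkid --probe /dev/sdX1                   # an "ambivalent result" points to multiple filesystem signatures
sudo wipefs /dev/sdX1                          # read-only list: look for a stray ext4 entry next to the zfs_member entries
sudo wipefs --backup --all -t ext4 /dev/sdX1   # remove only the ext4 signature, keeping a backup of the erased bytes
sudo reboot                                    # afterwards the pool should import normally again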
bacon
November 27, 2024, 12:45pm
Please file a bug report. Mine got closed (Jira). Maybe if enough people file bug reports it will gather some attention.
To be honest, I'm not experienced enough in this topic yet to handle a ticket myself, especially after reading yours and seeing that someone closed it, ignoring the facts…
regards
Mike