Migrated from CORE to SCALE - pool missing

Hello, I have non-critical / backed-up data on a home-built TrueNAS server used for media at home.

A few days ago, I migrated from CORE to SCALE and have been having trouble getting my BLACKBOX01 pool back online.

The pool did not immediately show as available (the following is from memory, so it may be a bit hinky):

  • Tried to be a good boy and read up on the issue, forums etc.
  • Verified that I've got a backup CONFIG available from CORE and followed suggestions to export the pool via the GUI (not the shell / SSH), deselecting all 3 options.
  • Attempted to import the pool via the GUI; it is no longer listed.
  • Did another day of forum reading…
  • Drunkenly tried several (likely dumb) recommended CLI methods based on sudo zpool import, including the "-a" and "-f" options in various combinations and what have you (sketched below). Initially, sudo zpool list only showed the boot pool, I believe.
  • Have since spent several days reading and working on it with no progress.
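
Roughly what that looked like (from memory, so treat the exact flags as a sketch rather than a record):

sudo zpool import                 # list any pools ZFS can find that aren't imported
sudo zpool import -a              # try to import everything it finds
sudo zpool import -f BLACKBOX01   # force the import in case the pool still looks "in use" from CORE
sudo zpool list                   # only ever showed the boot pool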

Anywho, I have an "ok" level of knowledge of TrueNAS, mainly from building and maintaining the system over the years, but I'm definitely not an expert, so I'd really appreciate a bit of help.

Info - not sure what info people might need; I can get whatever could be useful:

  • OS Version: TrueNAS-SCALE-24.04.2.2
  • 2x Xeon E5-2670 CPUs
  • 32GiB ECC RAM
  • Boot drives are 2x SSDs, mirrored
  • Data drives are 5x 4TB WD Red, connected directly to the motherboard (I believe)
  • sudo zpool list output: only the boot pool
  • sudo lsblk output: shows no mount points for the data drives.

So any help I could get would be great. I can rebuild but would rather not. I did try to reload a prior config (a CORE one; I could not locate a SCALE pre-drunken-stupidity config :exploding_head:), with no changes.

Thanks in advance!!!

Hello and welcome to the forums.

Can you recall how your 5 x 4TB data drives were configured, i.e. RAIDZ1, RAIDZ2, or mirrors with a hot spare perhaps?

Can you post the output of lsblk?

Does zpool import give you any output?
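
For reference, if the pool's metadata were still readable, a plain zpool import would normally list it as importable, roughly along these lines (the name and id below are just placeholders):

sudo zpool import
#   pool: BLACKBOX01
#     id: <numeric id>
#  state: ONLINE
# action: The pool can be imported using its name or numeric identifier.
# config: the pool with its five 4TB drives listed underneath

If it prints nothing at all, ZFS isn't seeing any pool labels on the disks.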

This bit is interesting. It may be my ignorance but does it really suggest you export your pool before upgrade? Are you :100: sure you didn’t check the box to mark disks as new when you exported the pool?

Thanks, I appreciate that! RAID 1, I believe. Redundant for sure.

[image]

RAID1, i.e. mirrors? Or RAID-Z1 (a bit like RAID5, with one-disk parity)?

RAID1, i.e. mirrors? Or RAID-Z1 (a bit like RAID5, with one-disk parity)?

Yep, most likely.

Thanks!

…does it really suggest you export your pool before upgrade?

Well, here's where "drunken Jim" f'd over "sober Jim": I didn't really review any documentation at all before upgrading to SCALE, just backed up the config (which now appears to be gone, though I do have older versions) and upgraded. I didn't export the pool until the OS was upgraded, but I did it through the GUI, which I've read is not the same as doing so from the CLI / SSH.

Are you :100: sure you didn’t check the box to mark disks as new when you exported the pool?

Let’s say “100% barring some bizarre mis-click”

The config file isn't needed to access your pool; it holds other information such as your system settings, users, etc., but your pool should be fine without it. Exporting your pool before the upgrade sounds odd to me and isn't something I've ever done, but again, this alone should not have caused you to lose your pool.

The only thing that makes sense to me atm is that perhaps drunken Jim, when exporting the pool, ticked the top box 'mark disks as new', as this would have wiped all the drives, thus destroying the pool.

Does this sound at all plausible?
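
One way to sanity-check it (a sketch, assuming your five data drives show up as sdb through sdf in lsblk; substitute whatever yours actually are) would be to ask zdb whether any ZFS labels survive on the drives. If 'mark disks as new' really did run, there should be no labels left to find:

# look for ZFS labels on each data disk and on its likely data partition
# (on CORE-built pools the ZFS data usually lives on the second partition)
for d in sdb sdc sdd sde sdf; do
  echo "== /dev/$d =="
  sudo zdb -l /dev/"$d"        # label on the raw disk
  sudo zdb -l /dev/"${d}2"     # label on the data partition, if it still exists
done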


And just to be sure, can you share the entire output of zpool status?

Ugh, apologies for the edits; I'm not good with forum posting syntax… mainly a reader until now lol.

…perhaps drunken Jim when exporting the pool ticked the top box ‘mark disks as new’ as this would have wiped all the drives thus destroying the pool.

I do recall making sure that all the check boxes were clear; but maybe take that with a grain of salt.

And just to be sure can you share the entire output of zpool status.

Yeah, it doesn't look good, buddy. Your drives are there, as we can see them with lsblk, and your boot pool is good, even confirming the update since it suggests it was created using an older version of ZFS. If you hadn't mentioned exporting your pool I would have been at a total loss, but I fear that when you did this you may have accidentally marked the disks as new, as nothing else makes any sense. Let's see if anyone else has any other thoughts before we totally give up.
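
In the meantime it might be worth checking whether the old partition layout even survived on the data drives. CORE normally puts a small swap partition plus a large ZFS data partition on each data disk, and lsblk should flag the latter as zfs_member; if the disks really were wiped you'd expect to see no partitions at all. A sketch (device name assumed; repeat for each of your 4TB drives):

sudo lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb   # a healthy disk should show a ~2G swap part and a ~4T zfs_member part
sudo sfdisk -d /dev/sdb                        # dump the partition table, if there is one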

Let’s see if anyone else has any other thoughts before we totally give up

Ok, much thanks; I'm still "pretty sure" :woozy_face: that I didn't, but who knows?

Let’s take a look at the output of zpool history on the off chance.

2024-09-18.07:53:57 zfs set canmount=noauto freenas-boot/ROOT/default
2024-09-18.07:54:02 zfs promote freenas-boot/ROOT/13.0-U6.2
2024-09-20.15:59:19 zfs create -o mountpoint=legacy -o truenas:kernel_version=6.6.32-production+truenas -o zectl:keep=False freenas-boot/ROOT/24.04.2.2
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard -o atime=off freenas-boot/ROOT/24.04.2.2/audit
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard freenas-boot/ROOT/24.04.2.2/conf
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard -o atime=off freenas-boot/ROOT/24.04.2.2/data
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard -o atime=off freenas-boot/ROOT/24.04.2.2/mnt
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard freenas-boot/ROOT/24.04.2.2/etc
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard freenas-boot/ROOT/24.04.2.2/home
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o acltype=off -o aclmode=discard freenas-boot/ROOT/24.04.2.2/opt
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o acltype=off -o aclmode=discard freenas-boot/ROOT/24.04.2.2/root
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o acltype=off -o aclmode=discard -o atime=off freenas-boot/ROOT/24.04.2.2/usr
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o acltype=off -o aclmode=discard -o atime=off freenas-boot/ROOT/24.04.2.2/var
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard freenas-boot/ROOT/24.04.2.2/var/ca-certificates
2024-09-20.15:59:19 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o atime=off freenas-boot/ROOT/24.04.2.2/var/log
2024-09-20.16:00:36 zpool set bootfs=freenas-boot/ROOT/24.04.2.2 freenas-boot
2024-09-20.16:00:37 zfs set truenas:12=1 freenas-boot/ROOT/13.0-U6.2
2024-09-20.16:00:37 zfs destroy -r freenas-boot/grub
2024-09-20.16:00:37 zfs create -o mountpoint=legacy freenas-boot/grub
2024-09-20.16:00:41 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/audit
2024-09-20.16:00:41 zfs set mountpoint=/audit freenas-boot/ROOT/24.04.2.2/audit
2024-09-20.16:00:41 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/audit
2024-09-20.16:00:41 zfs set readonly=on freenas-boot/ROOT/24.04.2.2/conf
2024-09-20.16:00:41 zfs snapshot freenas-boot/ROOT/24.04.2.2/conf@pristine
2024-09-20.16:00:41 zfs set mountpoint=/conf freenas-boot/ROOT/24.04.2.2/conf
2024-09-20.16:00:41 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/conf
2024-09-20.16:00:41 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/data
2024-09-20.16:00:41 zfs set mountpoint=/data freenas-boot/ROOT/24.04.2.2/data
2024-09-20.16:00:41 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/data
2024-09-20.16:00:41 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/mnt
2024-09-20.16:00:41 zfs set mountpoint=/mnt freenas-boot/ROOT/24.04.2.2/mnt
2024-09-20.16:00:41 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/mnt
2024-09-20.16:00:41 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/etc
2024-09-20.16:00:41 zfs snapshot freenas-boot/ROOT/24.04.2.2/etc@pristine
2024-09-20.16:00:41 zfs set mountpoint=/etc freenas-boot/ROOT/24.04.2.2/etc
2024-09-20.16:00:41 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/etc
2024-09-20.16:00:41 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/home
2024-09-20.16:00:41 zfs set mountpoint=/home freenas-boot/ROOT/24.04.2.2/home
2024-09-20.16:00:41 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/home
2024-09-20.16:00:42 zfs set readonly=on freenas-boot/ROOT/24.04.2.2/opt
2024-09-20.16:00:42 zfs snapshot freenas-boot/ROOT/24.04.2.2/opt@pristine
2024-09-20.16:00:42 zfs set mountpoint=/opt freenas-boot/ROOT/24.04.2.2/opt
2024-09-20.16:00:42 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/opt
2024-09-20.16:00:42 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/root
2024-09-20.16:00:42 zfs set mountpoint=/root freenas-boot/ROOT/24.04.2.2/root
2024-09-20.16:00:42 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/root
2024-09-20.16:00:42 zfs set readonly=on freenas-boot/ROOT/24.04.2.2/usr
2024-09-20.16:00:42 zfs snapshot freenas-boot/ROOT/24.04.2.2/usr@pristine
2024-09-20.16:00:42 zfs set mountpoint=/usr freenas-boot/ROOT/24.04.2.2/usr
2024-09-20.16:00:42 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/usr
2024-09-20.16:00:42 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/var
2024-09-20.16:00:42 zfs snapshot freenas-boot/ROOT/24.04.2.2/var@pristine
2024-09-20.16:00:42 zfs set mountpoint=/var freenas-boot/ROOT/24.04.2.2/var
2024-09-20.16:00:42 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/var
2024-09-20.16:00:42 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/var/ca-certificates
2024-09-20.16:00:42 zfs set mountpoint=/var/local/ca-certificates freenas-boot/ROOT/24.04.2.2/var/ca-certificates
2024-09-20.16:00:42 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/var/ca-certificates
2024-09-20.16:00:42 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/var/log
2024-09-20.16:00:42 zfs set mountpoint=/var/log freenas-boot/ROOT/24.04.2.2/var/log
2024-09-20.16:00:42 zfs set org.zectl:bootloader="" freenas-boot/ROOT/24.04.2.2/var/log
2024-09-20.16:00:42 zfs set readonly=on freenas-boot/ROOT/24.04.2.2
2024-09-20.16:00:42 zfs snapshot freenas-boot/ROOT/24.04.2.2@pristine
2024-09-20.16:00:42 zfs set org.zectl:bootloader=grub freenas-boot/ROOT
2024-09-20.16:02:10 zpool import -N -f freenas-boot
2024-09-20.16:02:26 zfs set readonly=off freenas-boot/ROOT/24.04.2.2
2024-09-20.16:02:26 zfs set readonly=off freenas-boot/ROOT/24.04.2.2/usr
2024-09-20.16:03:15 zfs set readonly=on freenas-boot/ROOT/24.04.2.2
2024-09-20.16:04:01 zpool import -N -f freenas-boot
2024-09-20.16:04:20 zpool set compatibility=grub2 freenas-boot
2024-09-20.16:04:40 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa freenas-boot/.system
2024-09-20.16:04:40 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o quota=1G -o xattr=sa freenas-boot/.system/cores
2024-09-20.16:04:40 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa freenas-boot/.system/samba4
2024-09-20.16:04:41 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa freenas-boot/.system/configs-d6d294a404024d4b9f621e6bf1afa53c
2024-09-20.16:04:41 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa freenas-boot/.system/netdata-d6d294a404024d4b9f621e6bf1afa53c
2024-09-20.16:06:20 zpool import -N -f freenas-boot
2024-09-20.16:08:01 py-libzfs: zfs snapshot freenas-boot/.system/samba4@wbc-1726862881
2024-09-20.18:29:07 zpool import -N -f freenas-boot
2024-09-21.10:26:12 zpool import -N -f freenas-boot
2024-09-22.03:45:02 py-libzfs: zpool scrub freenas-boot
2024-09-22.10:41:49 zpool import -N -f freenas-boot

Concerningly, this is not the first time something like this has happened… you may want to take a look at these threads…

Cool thanks, I’ll read up!

Ok, still reading and learning… BUT… if anyone is interested, here is what appears to be the CLI input history…

midclt call pool.query|jq
midclt call pool.query|jq
zpool export
zpool export ?
midclt call pool.query|jq
zpool export -a
zpool export -f -a
zpool export [-f] -a
midclt call pool.query|jq
zpool export BLACKBOX01
zpool export /mnt/BLACKBOX01
zpool export mnt/BLACKBOX01
zpool export '/mnt/BLACKBOX01'
zpool import -F -n "BLACKBOX01"
zpool import -F -n "BLACKBOX01"
zpool import -F -n BLACKBOX01
zpool import BLACKBOX01
midclt call pool.query |jq
zpool import -f
zpool import BLACKBOX01
zpool import -f -R /mnt BLACKBOX01
sudo zpool Status Data
zpool help
zpool -list
zpool list
import freenas-boot
zpoool import freenas-boot
zpool import freenas-boot
zpool import
lsblk
zpool status
zpool status
midclt call pool.import_find
zpool import 119
midclt call job.query 119
midclt call job.query 119
midclt call job.query 119
midclt call job.query 119
zpool import 119
zpool import -f BLACKBOX01
sudo zpool import -a -f
sudo zpool import -a -f -n
sudo zpool import -a -f -N
sudo zpool import -a -f -c /data/zfs/zpool.cache -d /dev/disc/by-id/
suod zpool import -a -Fn
sudo zpool import -a -Fn
sudo zpool import -a -d /dev/disk/by-id/
midclt call job.query 119
zpool import -R /mnt/DATASE#T01
smartclt
sudo smartclt
sudo smartctl
sudo smartctl help
sudo smartctl .help
sudo smartctl -help
sudo smartctl -X
sudo smartctl -X “sdb”
sudo smartctl -help
sudo smartctl --scan
sudo smartctl -X /dev/sdb
sudo smartctl -X /dev/sdd
sudo smartctl -X /dev/sde
sudo smartctl -X /dev/sdf
sudo smartctl -X /dev/sdg
smartctl -a
smartctl -h
smartctl -a /dev/sdb
smartctl -a /dev/sdf
smartctl -a /dev/sde
smartctl -help
smartctl -H /
smartctl -H
smartctl -H /dev/dvf
smartctl -H /dev/sdg
smartctl -H /dev/sdf
smartctl -H /dev/sde
smartctl -H /dev/sdd
smartctl -H /dev/sdb
sudo zpool import
sudo zpool status
sudo gpart show
sudo glabel status
sudo zpool status -v
sudo lsblk
sudo zpool import
sudo zfs list
sudo zfs
sudo zfs list
zpool import -R /mnt -c /data/zfs/zpool.cache BLACKBOX01
zpool import -R /mnt Data
zpool import -R /mnt BLACKBOX01
sudo lsblk -f
sudo zpool import -f BLACKBOX01
sudo mc
mc
sudo zpool insert
sudo zpool
sudo zpool help
sudo zpool attach
sudo zpool attach -a
sudo zpool attach help
zpool import
sudo zpool import -a
sudo zpool status
sudo zpool info
sudo zpool import -f -a
sudo lsblk -f
sudo lsblk help
sudo lsblk
help
.help
?
sudo dmidcode
dmidecode
uname
uname -m
lshw
sudo lshw
sudo lshw
sudo zpool info
sudo zpool list
sudo zpool list
sudo lsblk
sudo lshw
lshw
sudo lshw
hwinfo
sudo hwinfo
sudo lsblk
sudo zpool import
sudo zpool status
sudo zpool history
sudo zpool history
sudo zpool history
mc
help
sudo lshw
sudo slblk
sudo lsblk
sudo sfdisk help
sudo sfdisk
sudo sfdisk .help
sudo sfdisk.help
sudo sfdisk -d sda
sudo sfdisk -d /dev/sda
sudo sfdisk -d /dev/sdb
sudo sfdisk -d /dev/sdd
sudo sfdisk -d /dev/sdf
sudo sfdisk -d /dev/sdg
zdb -l /dev/sdga
zdb -l /dev/sda
sudo zdb -l /dev/sda
sudo zdb -l sda
sudo zdb -l sdb
sudo zdb -l sdc
sudo zdb -l sdd
zdb sda
zdb /dev/sda

Any help appreciated; otherwise, I'll keep digging.