I’m running into an issue importing a ZFS pool from an old server. The pool was created through the TrueNAS Core web UI as a 2x4TB HDD mirror. I am now trying to import that exact same pool on my new server and I’m running into some weird issues.
Some information:
- Neither server used an HBA (both drives were connected directly to the motherboard SATA ports)
- The old server and the new one both run TrueNAS Core (the new server is on TrueNAS-13.0-U6.4; I’m not quite sure what version the old server was running)
- The drives were never dropped or damaged, the system was always powered down properly, and it was plugged in behind a UPS.
I initially tried to import the pool through the TrueNAS web UI, but the pool does not show up under “Storage > Pools > Add > Import an existing pool”, so I tried importing it from the command line like so:
root@truenas[~]# zpool import
pool: dynamicduo
id: 17224027113737128878
state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:
dynamicduo                                      UNAVAIL  insufficient replicas
  mirror-0                                      UNAVAIL  insufficient replicas
    gptid/312db3d7-b4a5-11ed-8bae-a036bcda188c  UNAVAIL  invalid label
    gptid/312db3dd-b4a5-11ed-8bae-a036bcda188c  UNAVAIL  invalid label
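In case it helps, I believe those gptid names can be mapped back to actual partitions with glabel, so this is what I was planning to run to double-check (I can paste the full output if needed):

glabel status | grep 312db3d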
Trying to import the pool by name looked more promising:
root@truenas[~]# zpool import dynamicduo
cannot import 'dynamicduo': pool was previously in use from another system.
Last accessed by linbo (hostid=348b95e7) at Wed Dec 31 16:00:00 1969
The pool can be imported, use 'zpool import -f' to import the pool.
But when I force the import, it fails saying that my pool has an invalid vdev configuration:
root@truenas[~]# zpool import dynamicduo -f
cannot import 'dynamicduo': invalid vdev configuration
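Before I try anything more forceful, my plan was to attempt a read-only import so I can’t make anything worse. Something along these lines, though I haven’t run it yet and I’m not sure it would even get past the invalid vdev configuration error:

zpool import -f -o readonly=on -R /mnt dynamicduo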
Now my first thought (assuming I’m not misunderstanding the initial “invalid label” message) was that maybe the drives had somehow changed their IDs, so I checked the output of “gpart list” as shown below, and I do see entries that carry the previously seen “312db3d7-b4a5-11ed-8bae-a036bcda188c” gptid.
root@truenas[~]# gpart list
Geom name: nvd0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 976773127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: nvd0p1
Mediasize: 272629760 (260M)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 20480
Mode: r0w0e0
efimedia: HD(1,GPT,0890782f-c581-11ef-877c-9c6b0017c392,0x28,0x82000)
rawuuid: 0890782f-c581-11ef-877c-9c6b0017c392
rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
label: (null)
length: 272629760
offset: 20480
type: efi
index: 1
end: 532519
start: 40
2. Name: nvd0p2
Mediasize: 499826819072 (466G)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 272650240
Mode: r1w1e1
efimedia: HD(2,GPT,089427d1-c581-11ef-877c-9c6b0017c392,0x82028,0x3a300000)
rawuuid: 089427d1-c581-11ef-877c-9c6b0017c392
rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 499826819072
offset: 272650240
type: freebsd-zfs
index: 2
end: 976756775
start: 532520
Consumers:
1. Name: nvd0
Mediasize: 500107862016 (466G)
Sectorsize: 512
Mode: r1w1e2
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
Mediasize: 134217728 (128M)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
efimedia: HD(1,GPT,312db3dd-b4a5-11ed-8bae-a036bcda188c,0x800,0x40000)
rawuuid: 312db3dd-b4a5-11ed-8bae-a036bcda188c
rawtype: e3c9e316-0b5c-4db8-817d-f92df00215ae
label: (null)
length: 134217728
offset: 1048576
type: ms-reserved
index: 1
end: 264191
start: 2048
2. Name: ada0p2
Mediasize: 4000650887168 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
efimedia: HD(2,GPT,312db3e0-b4a5-11ed-8bae-a036bcda188c,0x40800,0x1d1bcb000)
rawuuid: 312db3e0-b4a5-11ed-8bae-a036bcda188c
rawtype: e75caf8f-f680-4cee-afa3-b001e56efc2d
label: Storage pool
length: 4000650887168
offset: 135266304
type: ms-spaces
index: 2
end: 7814035455
start: 264192
Consumers:
1. Name: ada0
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
Mediasize: 134217728 (128M)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
efimedia: HD(1,GPT,312db3d7-b4a5-11ed-8bae-a036bcda188c,0x800,0x40000)
rawuuid: 312db3d7-b4a5-11ed-8bae-a036bcda188c
rawtype: e3c9e316-0b5c-4db8-817d-f92df00215ae
label: (null)
length: 134217728
offset: 1048576
type: ms-reserved
index: 1
end: 264191
start: 2048
2. Name: ada1p2
Mediasize: 4000650887168 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
efimedia: HD(2,GPT,312db3df-b4a5-11ed-8bae-a036bcda188c,0x40800,0x1d1bcb000)
rawuuid: 312db3df-b4a5-11ed-8bae-a036bcda188c
rawtype: e75caf8f-f680-4cee-afa3-b001e56efc2d
label: Storage pool
length: 4000650887168
offset: 135266304
type: ms-spaces
index: 2
end: 7814035455
start: 264192
Consumers:
1. Name: ada1
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
Which further confuses me, since the type of the large partition on each disk is “ms-spaces” (Microsoft Storage Spaces). That is very weird: I think these drives were previously set up in a Microsoft Storage Spaces RAID 1, but then how is my ZFS “dynamicduo” pool name still somehow on the drives?
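My next idea was to dump the raw ZFS labels on the large partitions with zdb, roughly like this (I’m assuming ada0p2 and ada1p2 are the partitions that used to hold the pool, so I may well be pointing zdb at the wrong devices):

zdb -l /dev/ada0p2
zdb -l /dev/ada1p2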
I’d appreciate any help, and I apologize in advance for this headache-inducing question, but would someone be able to help me understand what’s going on here?