In an SSH shell, what does this show:
zpool import -d /dev/gptid
gpart show
You’ll have to run the commands as “root” or with “sudo”.
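For example, prefixing each command with sudo should be enough, assuming sudo is set up for your user on that box:
sudo zpool import -d /dev/gptid
sudo gpart show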
zpool import -d /dev/gptid
   pool: Pool2
     id: 14223652181009904092
  state: ONLINE
 status: One or more devices were being resilvered.
 action: The pool can be imported using its name or numeric identifier.
 config:

        Pool2  ONLINE
          mirror-0  ONLINE
            gptid/d8c9a8d5-ffd5-11ec-b16d-002590d59089  ONLINE
            gptid/d8dba0c2-ffd5-11ec-b16d-002590d59089  ONLINE

   pool: Pool1
     id: 7263808314950553810
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
         the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        Pool1  ONLINE
          mirror-0  ONLINE
            gptid/49fc0b49-2885-11e7-9614-002590d59089  ONLINE
            gptid/4add9034-2885-11e7-9614-002590d59089  ONLINE
# gpart show
=>       40  234441568  ada2  GPT  (112G)
         40       1024  1  freebsd-boot  (512K)
       1064   33554432  3  freebsd-swap  (16G)
   33555496  200867840  2  freebsd-zfs  (96G)
  234423336      18272     - free -  (8.9M)

=>        34  3907029101  ada3  GPT  (1.8T)
          34          94     - free -  (47K)
         128     4194304  1  freebsd-swap  (2.0G)
     4194432  3902834696  2  freebsd-zfs  (1.8T)
  3907029128           7     - free -  (3.5K)

=>        34  3907029101  ada4  GPT  (1.8T)
          34          94     - free -  (47K)
         128     4194304  1  freebsd-swap  (2.0G)
     4194432  3902834696  2  freebsd-zfs  (1.8T)
  3907029128           7     - free -  (3.5K)

=>         40  23437770679  ada0  GPT  (11T)
           40           88     - free -  (44K)
          128      4194304  1  freebsd-swap  (2.0G)
      4194432  23433576280  2  freebsd-zfs  (11T)
  23437770712            7     - free -  (3.5K)

=>         40  23437770679  ada1  GPT  (11T)
           40           88     - free -  (44K)
          128      4194304  1  freebsd-swap  (2.0G)
      4194432  23433576280  2  freebsd-zfs  (11T)
  23437770712            7     - free -  (3.5K)
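If it helps to match the gptid names from the zpool listing against the adaNp2 partitions above, glabel can usually show that mapping (read-only, assuming the gptid labels aren't being withheld by the system):
glabel status | grep -i gptid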
As it stands now, you cannot import Pool2 using the GUI, even in Core 13.3?
I haven’t tried yet. I am not sure how best to go about that.
As you would normally import any pool.
Storage → Pools → Add → choose “Import an existing pool”
No luck
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 352, in import_pool
self.logger.error(
File "libzfs.pyx", line 529, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 346, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1375, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1403, in libzfs.ZFS.__import_pool
libzfs.ZFSException: cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable
"""
Error importing pool
cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 355, in run
await self.future
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 391, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1466, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1285, in call
return await self._call(
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1250, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1175, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1158, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable
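(Not much detail there beyond "unavailable". If you want more than the GUI traceback gives, the middleware log sometimes records the underlying libzfs error; something like this might be worth a look, going from memory on the standard CORE log path:
tail -n 100 /var/log/middlewared.log | grep -i pool2
)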
To rule out the GUI:
zpool import -d /dev/gptid -R /mnt 14223652181009904092
cannot import 'Pool2': one or more devices is currently unavailable
UPDATE: See my updated post.
I would message @HoneyBadger and point him to this thread.
He might ask you to "dd" the first 32 MB of your disk's header. Be very careful when using "dd": even the smallest mistake, a misplaced space, or arguments in the wrong order can destroy precious data.
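If it does come to that, the capture itself is read-only and would look something like this (the disk name and output path here are just placeholders; the exact target would come from him):
dd if=/dev/ada1 of=/root/ada1-header.bin bs=1m count=32
Swapping if= and of= there is exactly the kind of mistake that would overwrite the disk instead of reading it, hence the warning.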
I don’t feel comfortable leading you into emergency imports that require lower-level knowledge.
To summarize:
Is that mostly right?
Yes, I was previously using Core 13.0 U6.1 until I got the endless boot loop
Wait. Wait. Wait. I missed something the first time.
The TXGs are different between the two ZFS mirror members:
sdc2:
txg: 17236783
sdd2:
txg: 17236778
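(For anyone following along: label txg values like these can be read without importing anything, e.g. with zdb against the ZFS partition, which is read-only:
zdb -l /dev/ada1p2 | grep -w txg
assuming ada1p2 is the CORE-side name for what SCALE called sdd2, as matched by serial number later in the thread.)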
@HoneyBadger, is it safer and simpler to import the pool in a "degraded" state using only sdc2, and then determine later whether to resilver or replace the other member of the mirror?
I would almost say that importing with the older txg 17236778 on sdd2 would be preferable, in case the reason for the failed import lies in one of the five newer transactions.
(I also believe that the default behavior is to search three transactions backwards for matching ones, which would put them out of sync in this way.)
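(For reference, and not something to run yet: OpenZFS does have a recovery mode on import that discards the last few transactions; a dry run that only reports whether that would make the pool importable, without changing anything, would look roughly like
zpool import -d /dev/gptid -R /mnt -F -n 14223652181009904092
but I'd wait for @HoneyBadger before going anywhere near that.)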
OK, would I go about that in the web UI's Import Disk, using the disk equivalent to sdd? (In Core 13.3 U1.2 it is ada1, based on the disk's serial number; sdc is ada0.)
In Import Disk I can select "ada1p2".
I don’t believe this is possible with the web UI. I think you need to do it on the command line.
I would have to defer to @HoneyBadger on how to go about this.
Do not touch this option. It has nothing to do with importing ZFS pools.
That’s why I asked!
I will wait to get the commands for my best options
It will probably be tomorrow morning before I will have time to do this
I also have questions about importing a degraded pool and the steps following that getting the pool back to the full mirror
I would physically detach the other drive (since you have a mirror, after all) and just try to import the pool with the single remaining device.
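As for getting back to a full mirror afterwards: a rough sketch, assuming the single-disk import works and you simply reconnect the other drive later, would be something like
zpool status Pool2   (confirm the import; the missing mirror member should show as removed/unavailable)
zpool online Pool2 gptid/<id-of-the-reconnected-disk>   (placeholder gptid; onlining it should kick off a resilver)
zpool status Pool2   (watch the resilver)
though given the TXG mismatch, @HoneyBadger may prefer a full detach/attach or replace instead.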
OK, I’ll try that tomorrow morning.
Use webui or command line?
I think you might need to use the command line with the -f (lowercase f) flag, to tell it to import the pool even though it was last used by a different system (your previous SCALE install vs. the now-CORE version).
Adding to this, make sure you unplug the disk with the later TXG of 17236783.
The “Disks” page should tell you the serial number of each disk, which you can match against “adaN”. Your physical drives should have labels on them with the serial numbers.
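From a shell, assuming the usual FreeBSD tools are present, the serial number also shows up as "ident" in geom, which can be quicker than the GUI:
geom disk list ada0 | grep -i ident
geom disk list ada1 | grep -i ident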
OK, I have disabled ada0 (previously sdc).
I assume you want me to try to import into Core 13.3
Is this the correct command to import: zpool import -fd /dev/gptid -R /mnt 14223652181009904092