TrueNAS SCALE went belly-up following a power outage

Woke up to this nightmare… this morning my pool had been exported by TrueNAS SCALE all on its own.

I re-installed TrueNAS SCALE and attempted an import, but didn't get very far.

I'm hoping someone can help before I go jump off a bridge.

Error below:

concurrent.futures.process.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 231, in import_pool
    zfs.import_pool(found, pool_name, properties, missing_log=missing_log, any_host=any_host)
  File "libzfs.pyx", line 1374, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1402, in libzfs.ZFS.__import_pool
libzfs.ZFSException: cannot import 'VMData' as 'VMData': I/O error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 211, in import_pool
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 235, in import_pool
    raise CallError(f'Failed to import {pool_name!r} pool: {e}', e.code)
middlewared.service_exception.CallError: [EZFS_IO] Failed to import 'VMData' pool: cannot import 'VMData' as 'VMData': I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in _run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/import_pool.py", line 114, in import_pool
    await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EZFS_IO] Failed to import 'VMData' pool: cannot import 'VMData' as 'VMData': I/O error

By any chance, do you happen to have a backup of the exported configuration from your original install? Pardon me for asking if you have already considered this, but at times like these it's easy to forget such steps in a panic.

Please open a command line prompt and run the following commands, posting the output from each in a separate </> window:

  • lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
  • sudo zpool status -v
  • sudo zpool import
  • lspci
  • sudo sas2flash -list
  • sudo sas3flash -list
  • sudo storcli show all
  • sudo zpool import VMData

lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID

NAME     MODEL                 ROTA PTTYPE TYPE     START          SIZE PARTTYPENAME             PARTUUID
sda      PERC H710                1 gpt    disk           7864890425344
└─sda1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS 0fd564df-31f6-4204-b4e8-e73fac330328
sdb      PERC H710                1 gpt    disk           7864890425344
└─sdb1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS fd7bc58a-655c-41b3-a858-f14610d6e940
sdc      PERC H710                1 gpt    disk           7864890425344
└─sdc1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS f4af5b39-ebf0-44a8-b567-06d396de73b7
sdd      PERC H710                1 gpt    disk           8000987201536
└─sdd1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS d6472040-9846-4eea-8c12-79f4f10fcc48
sde      PERC H710                1 gpt    disk           7864890425344
└─sde1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS 1e681acd-adb7-4fd2-ab76-fb78ba2eaecf
sdf      PERC H710                1 gpt    disk           8000987201536
└─sdf1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS d0e34a3a-b02e-420d-b290-c78ec269f942
sdg      PERC H710                1 gpt    disk           8000987201536
└─sdg1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS c3c2f8f0-2c2d-4a9a-8a19-81c325514a7d
sdh      PERC H710                1 gpt    disk           8000987201536
└─sdh1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS 39174e23-b052-405b-8b43-e83fa50c9be4
sdi      ADATA SU800              0 gpt    disk            512110190592
├─sdi1                            0 gpt    part      4096       1048576 BIOS boot                3d3bf1b5-e450-47b5-aa45-f8061ace491c
├─sdi2                            0 gpt    part      6144     536870912 EFI System               e8bd980a-8266-4d03-a5a1-6c6bc4b2aea9
├─sdi3                            0 gpt    part  34609152  494390287872 Solaris /usr & Apple ZFS 35316899-b3dd-44a1-8f41-c4599a40114a
└─sdi4                            0 gpt    part   1054720   17179869184 Linux swap               826f632c-27f7-4013-8e41-7e86d47cf3a4
  └─sdi4                          0        crypt            17179869184
sr0      PLDS DVD-ROM DH-16D8S    1        rom               1073741312

sudo zpool status -v
pool: VMData2
state: ONLINE
scan: scrub repaired 0B in 00:00:21 with 0 errors on Sun Mar 30 00:00:23 2025
config:

    NAME                                      STATE     READ WRITE CKSUM
    VMData2                                   ONLINE       0     0     0
      raidz1-0                                ONLINE       0     0     0
        0fd564df-31f6-4204-b4e8-e73fac330328  ONLINE       0     0     0
        f4af5b39-ebf0-44a8-b567-06d396de73b7  ONLINE       0     0     0
        fd7bc58a-655c-41b3-a858-f14610d6e940  ONLINE       0     0     0
        1e681acd-adb7-4fd2-ab76-fb78ba2eaecf  ONLINE       0     0     0

errors: No known data errors

pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:00:53 with 0 errors on Sun Mar 30 03:45:55 2025
config:

    NAME        STATE     READ WRITE CKSUM
    boot-pool   ONLINE       0     0     0
      sdi3      ONLINE       0     0     0

sudo zpool import
pool: VMData
id: 11669357745879831993
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.

    VMData                                    FAULTED  corrupted data
      raidz1-0                                ONLINE
        d6472040-9846-4eea-8c12-79f4f10fcc48  ONLINE
        d0e34a3a-b02e-420d-b290-c78ec269f942  ONLINE
        c3c2f8f0-2c2d-4a9a-8a19-81c325514a7d  ONLINE
        39174e23-b052-405b-8b43-e83fa50c9be4  ONLINE

lspci
00:00.0 Host bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 DMI2 (rev 04)
00:01.0 PCI bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 PCI Express Root Port 1a (rev 04)
00:01.1 PCI bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 PCI Express Root Port 1b (rev 04)
00:03.0 PCI bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 PCI Express Root Port 3a (rev 04)
00:05.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 VTd/Memory Map/Misc (rev 04)
00:05.2 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 IIO RAS (rev 04)
00:11.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Virtual Root Port (rev 05)
00:16.0 Communication controller: Intel Corporation C600/X79 series chipset MEI Controller #1 (rev 05)
00:16.1 Communication controller: Intel Corporation C600/X79 series chipset MEI Controller #2 (rev 05)
00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 (rev 05)
00:1c.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Root Port 1 (rev b5)
00:1c.4 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Root Port 5 (rev b5)
00:1c.6 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Root Port 7 (rev b5)
00:1c.7 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Root Port 8 (rev b5)
00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 (rev 05)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
00:1f.0 ISA bridge: Intel Corporation C600/X79 series chipset LPC Controller (rev 05)
00:1f.2 IDE interface: Intel Corporation C600/X79 series chipset 4-Port SATA IDE Controller (rev 05)
00:1f.5 IDE interface: Intel Corporation C600/X79 series chipset 2-Port SATA IDE Controller (rev 05)
01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
01:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
03:00.0 PCI bridge: Renesas Technology Corp. SH7757 PCIe Switch [PS]
04:00.0 PCI bridge: Renesas Technology Corp. SH7757 PCIe Switch [PS]
04:01.0 PCI bridge: Renesas Technology Corp. SH7757 PCIe Switch [PS]
05:00.0 PCI bridge: Renesas Technology Corp. SH7757 PCIe-PCI Bridge [PPB]
06:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. G200eR2
08:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 2208 [Thunderbolt] (rev 05)
3f:08.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 QPI Link 0 (rev 04)
3f:09.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 QPI Link 1 (rev 04)
3f:0a.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Power Control Unit 0 (rev 04)
3f:0a.1 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Power Control Unit 1 (rev 04)
3f:0a.2 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Power Control Unit 2 (rev 04)
3f:0a.3 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Power Control Unit 3 (rev 04)
3f:0b.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 UBOX Registers (rev 04)
3f:0b.3 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 UBOX Registers (rev 04)
3f:0c.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0c.1 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0c.2 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0c.3 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0c.4 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0d.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0d.1 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0d.2 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0d.3 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0d.4 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Unicast Registers (rev 04)
3f:0e.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Home Agent 0 (rev 04)
3f:0e.1 Performance counters: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Home Agent 0 (rev 04)
3f:0f.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 0 Target Address/Thermal Registers (rev 04)
3f:0f.1 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 0 RAS Registers (rev 04)
3f:0f.2 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder Registers (rev 04)
3f:0f.3 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder Registers (rev 04)
3f:0f.4 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder Registers (rev 04)
3f:0f.5 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder Registers (rev 04)
3f:10.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 1 Channel 0-3 Thermal Control 0 (rev 04)
3f:10.1 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 1 Channel 0-3 Thermal Control 1 (rev 04)
3f:10.2 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 1 Channel 0-3 ERROR Registers 0 (rev 04)
3f:10.3 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 1 Channel 0-3 ERROR Registers 1 (rev 04)
3f:10.4 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 1 Channel 0-3 Thermal Control 2 (rev 04)
3f:10.5 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 1 Channel 0-3 Thermal Control 3 (rev 04)
3f:10.7 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Integrated Memory Controller 1 Channel 0-3 ERROR Registers 3 (rev 04)
3f:13.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 R2PCIe (rev 04)
3f:13.1 Performance counters: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 R2PCIe (rev 04)
3f:13.4 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 QPI Ring Registers (rev 04)
3f:13.5 Performance counters: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 QPI Ring Performance Ring Monitoring (rev 04)
3f:16.0 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 System Address Decoder (rev 04)
3f:16.1 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Broadcast Registers (rev 04)
3f:16.2 System peripheral: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 Broadcast Registers (rev 04)

sudo sas2flash -list
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

    No LSI SAS adapters found! Limited Command Set Available!
    ERROR: Command Not allowed without an adapter!
    ERROR: Couldn't Create Command -list
    Exiting Program.

sudo sas3flash -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02)
Copyright 2008-2017 Avago Technologies. All rights reserved.

    No Avago SAS adapters found! Limited Command Set Available!
    ERROR: Command Not allowed without an adapter!
    ERROR: Couldn't Create Command -list
    Exiting Program.

sudo storcli show all
CLI Version = 007.1504.0000.0000 June 22, 2020
Operating system = Linux 6.6.32-production+truenas
Status Code = 0
Status = Success
Description = None

Number of Controllers = 0
Host Name = truenasvm
Operating System = Linux 6.6.32-production+truenas

sudo zpool import VMdata
cannot import 'VMdata': no such pool available

A cursory look seems to show that the VMData pool might be using RAID controller disks.

That configuration is known to cause the kind of pool corruption you are seeing.

Perhaps some of the more extreme pool rollback import recovery options could help.
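To give an idea of the least drastic end of that spectrum (nothing to run just yet - readonly=on is a standard OpenZFS import option that makes no changes on disk):

  sudo zpool import -o readonly=on -R /mnt VMData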

I agree with @stux - it looks like you are using a PERC RAID card based on the LSI MegaRAID SAS 2208 chipset, and AFAIK you cannot flash this to IT mode, which is an absolute requirement for using it safely with ZFS. We can see that you have at least tried to define the disks as single-disk volumes in the firmware, but that is simply not enough.

(In short, there are two reasons that RAID cards are bad for ZFS: 1) ZFS needs native access to drive details like the serial number; and 2) ZFS relies absolutely on the disk controller NOT re-sequencing I/Os, whereas RAID cards tend to re-sequence I/Os to reduce seek times. That removes ZFS's ability to ensure consistency and can lead to metadata corruption when one I/O succeeds but another that should have been written earlier fails.)

We are assuming that you are not running under Proxmox - if you are, then please say so.

Can you please rerun / run the following commands (without changing the character case) and post the results in </> boxes:

  • sudo zpool import -R /mnt VMData

and if that fails (which it almost certainly will):

  • sudo zpool import -f -R /mnt VMData

Thanks.

Edit: Obviously the current priority is to try to get the pool back online for you. But to prevent the same thing happening again, apparently you can crossflash this card to IT firmware, though it looks a bit involved - and you will likely need to recreate the pool and restore it from backups afterwards - see Fohdeesha - PERC Crossflash - H710 IT Crossflashing.

I would first add the -n flag (dry run) for safety.


As far as I have seen, ZFS is ultra cautious about importing a pool even with -f, and -n isn't really necessary there - the pool will either import properly or it won't. But if you have experience that says otherwise, I will defer to you.

However, once you start using -F and other flags that roll back transactions, I agree that -n is absolutely necessary to see what would happen before you do anything that changes the pool.
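For example, the dry-run form of the rewind import looks like this - the -n only reports whether the -F rollback would succeed, without actually changing anything:

  sudo zpool import -F -n -R /mnt VMData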


Thank you, here it is:

admin@truenasvm[~]$ sudo zpool import -R /mnt VMData
[sudo] password for admin:
cannot import 'VMData': I/O error
Recovery is possible, but will result in some data loss.
Returning the pool to its state as of Sun Mar 30 06:29:45 2025
should correct the problem. Approximately 10 seconds of data
must be discarded, irreversibly. Recovery can be attempted
by executing 'zpool import -F VMData'. A scrub of the pool
is strongly recommended after recovery.
admin@truenasvm[~]$

admin@truenasvm[~]$ sudo zpool import -f -R /mnt VMData
cannot import 'VMData': I/O error
Recovery is possible, but will result in some data loss.
Returning the pool to its state as of Sun Mar 30 06:29:45 2025
should correct the problem. Approximately 10 seconds of data
must be discarded, irreversibly. Recovery can be attempted
by executing 'zpool import -F VMData'. A scrub of the pool
is strongly recommended after recovery.
admin@truenasvm[~]$

sudo zpool import -F /mnt VMData

resulted in

Error: concurrent.futures.process.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 76, in get_quota
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 529, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 78, in get_quota
    quotas = resource.userspace(quota_props)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "libzfs.pyx", line 3680, in libzfs.ZFSResource.userspace
libzfs.ZFSException: cannot get used/quota for VMData: dataset is busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 80, in get_quota
    raise CallError(f'Failed retreiving {quota_type} quotas for {ds}')
middlewared.service_exception.CallError: [EFAULT] Failed retreiving GROUP quotas for VMData
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 198, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1466, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1417, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 187, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/dataset_quota_and_perms.py", line 225, in get_quota
    quota_list = await self.middleware.call(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1564, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1425, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1431, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1337, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1321, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] Failed retreiving GROUP quotas for VMData

Which part of @etorix and me agreeing that you should definitely use the -n flag with -F didn't you understand?

And why did you do -F /mnt and not -F -R /mnt?

I have no idea what the current state is, or under what circumstances the UI exception you listed occurred, but it is reasonably likely that your pool is now in a worse state than it was before.

Perhaps you could explain what happened after you ran zpool import -F VMData that ended with the UI giving the exception you posted, and then run the following commands again (this time posting the output in </> boxes, as previously requested) so we can see the current situation:

  • lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
  • sudo zpool status -v
  • sudo zpool import

admin@truenasvm[~]$ sudo zpool import -R /mnt VMData
[sudo] password for admin: 
cannot import 'VMData': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
admin@truenasvm[~]$ lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME     MODEL                 ROTA PTTYPE TYPE     START          SIZE PARTTYPENAME             PARTUUID
sda      PERC H710                1 gpt    disk           7864890425344                          
└─sda1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS 0fd564df-31f6-4204-b4e8-e73fac330328
sdb      PERC H710                1 gpt    disk           7864890425344                          
└─sdb1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS 1e681acd-adb7-4fd2-ab76-fb78ba2eaecf
sdc      PERC H710                1 gpt    disk           7864890425344                          
└─sdc1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS fd7bc58a-655c-41b3-a858-f14610d6e940
sdd      PERC H710                1 gpt    disk           7864890425344                          
└─sdd1                            1 gpt    part      4096 7864887280128 Solaris /usr & Apple ZFS f4af5b39-ebf0-44a8-b567-06d396de73b7
sde      PERC H710                1 gpt    disk           8000987201536                          
└─sde1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS d6472040-9846-4eea-8c12-79f4f10fcc48
sdf      PERC H710                1 gpt    disk           8000987201536                          
└─sdf1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS d0e34a3a-b02e-420d-b290-c78ec269f942
sdg      ADATA SU800              0 gpt    disk            512110190592                          
├─sdg1                            0 gpt    part      4096       1048576 BIOS boot                3d3bf1b5-e450-47b5-aa45-f8061ace491c
├─sdg2                            0 gpt    part      6144     536870912 EFI System               e8bd980a-8266-4d03-a5a1-6c6bc4b2aea9
├─sdg3                            0 gpt    part  34609152  494390287872 Solaris /usr & Apple ZFS 35316899-b3dd-44a1-8f41-c4599a40114a
└─sdg4                            0 gpt    part   1054720   17179869184 Linux swap               826f632c-27f7-4013-8e41-7e86d47cf3a4
  └─sdg4                          0        crypt            17179869184                          
sdh      PERC H710                1 gpt    disk           8000987201536                          
└─sdh1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS 39174e23-b052-405b-8b43-e83fa50c9be4
sdi      PERC H710                1 gpt    disk           8000987201536                          
└─sdi1                            1 gpt    part      4096 8000984056320 Solaris /usr & Apple ZFS c3c2f8f0-2c2d-4a9a-8a19-81c325514a7d
sr0      PLDS DVD-ROM DH-16D8S    1        rom               1073741312                          
admin@truenasvm[~]$ 

admin@truenasvm[~]$ sudo zpool status -v
pool: VMData
state: ONLINE
scan: scrub repaired 0B in 05:09:15 with 0 errors on Sun Mar 9 06:09:17 2025
config:

    NAME                                      STATE     READ WRITE CKSUM
    VMData                                    ONLINE       0     0     0
      raidz1-0                                ONLINE       0     0     0
        d6472040-9846-4eea-8c12-79f4f10fcc48  ONLINE       0     0     0
        d0e34a3a-b02e-420d-b290-c78ec269f942  ONLINE       0     0     0
        c3c2f8f0-2c2d-4a9a-8a19-81c325514a7d  ONLINE       0     0     0
        39174e23-b052-405b-8b43-e83fa50c9be4  ONLINE       0     0     0

errors: No known data errors

pool: VMData2
state: ONLINE
scan: scrub repaired 0B in 00:00:21 with 0 errors on Sun Mar 30 00:00:23 2025
config:

    NAME                                      STATE     READ WRITE CKSUM
    VMData2                                   ONLINE       0     0     0
      raidz1-0                                ONLINE       0     0     0
        0fd564df-31f6-4204-b4e8-e73fac330328  ONLINE       0     0     0
        f4af5b39-ebf0-44a8-b567-06d396de73b7  ONLINE       0     0     0
        fd7bc58a-655c-41b3-a858-f14610d6e940  ONLINE       0     0     0
        1e681acd-adb7-4fd2-ab76-fb78ba2eaecf  ONLINE       0     0     0

errors: No known data errors

pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:01:10 with 0 errors on Sun Apr 6 03:46:11 2025
config:

    NAME        STATE     READ WRITE CKSUM
    boot-pool   ONLINE       0     0     0
      sdg3      ONLINE       0     0     0
admin@truenasvm[~]$ sudo zpool import
no pools available to import
admin@truenasvm[~]$ 

I'm sorry, I am new to the forum and its posting rules.

"And why did you do -F /mnt and not -F -R /mnt?"

I did try -F -R /mnt, and it resulted in an error, but I accidentally closed the window while attempting to cut and paste.

All I recall is an error saying the drives are in read-only mode. My apologies.

So - it seems your pool is online, and there is no sign that it is read-only.

Can you see all your data? Can you write new files to the VMData pool?

If so, then I would imagine that the next thing to do is to run a scrub to check that the pool still has full integrity.
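For reference, the scrub can be started and monitored from the shell with the standard zpool commands:

  sudo zpool scrub VMData
  sudo zpool status -v VMData    # shows scrub progress and any errors found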

But please, pretty please, STOP running commands that you have not been advised to run (or worse still that you have been explicitly advised not to run) - like the zpool import command that you apparently reran despite it not being asked for - because it increases the risk that you do something that screws up your pool to the point that it cannot be recovered.

P.S. Assuming that your pool is mounted correctly, is read-write, and shows no errors on the scrub, you then need to start thinking about how you are going to move away from the hardware RAID controller - which will likely mean moving the data off, crossflashing or replacing the RAID card so the disks sit behind IT firmware, rebuilding the pool afresh, and reloading the data.

You may also want to think about whether this is an opportunity to switch to mirrors for your VMData pool (we don't have enough information to say whether this would be beneficial, but generally speaking VM virtual disks should be on mirrors) and/or to consolidate into a single pool rather than two separate ones. You could also consider whether a broader change of design is needed, e.g. holding less data in virtual disks and/or adding SSDs and a SLOG - the community can advise on the most performant design if you share details of what is on your existing pools.
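Purely to illustrate the mirror layout idea (the device names below are hypothetical, and on TrueNAS you would normally build the pool through the web UI rather than the shell), a pool of striped mirrors would be created along these lines:

  sudo zpool create <newpool> mirror sdX sdY mirror sdZ sdW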

It’s just a matter of readability with large blocks of text from the terminal.

More specifically, zvols do much better on mirrors (small transactions), while raidz is well suited to storing large data files. So the idea would be to reduce the VMs to just the OS part and move the data that the VMs handle onto SMB or NFS shares mounted into the VM.
This also makes it easier to back up the data.
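As an illustration only (the IP address, share path and mount point here are made up, not taken from this system), an NFS share exported by the NAS gets mounted inside a Linux VM like so:

  sudo mount -t nfs 192.168.1.10:/mnt/VMData2/share /srv/data
  # or persistently via a line in the VM's /etc/fstab:
  # 192.168.1.10:/mnt/VMData2/share  /srv/data  nfs  defaults,_netdev  0  0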

Thanks to @etorix for clarifying further. But I should add that “large files” in this context means anything above 128 KB (and perhaps even smaller) - they don't have to be multi-GB for RAIDZ to be a good fit.
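For context, 128 KB happens to be the default ZFS dataset recordsize; if you want to see what a given dataset uses, the standard property query is:

  sudo zfs get recordsize VMData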

I would just like to take the time to thank everyone who has posted advice on this failure, ideas on how to recover if possible, and suggestions on moving forward.

At this point I am aware the data may already be gone, but from what I can see with my limited skills the data is still occupying space - though it could be a load of garbage.

I see no option to import the pool, so I can presume it's mounted; however, I still have no access to the data, and I see some errors.

And yes, I will switch to a better configuration in the future.

Enclosed are the errors I see in the TrueNAS SCALE panel.

Thank you all again for all your help.

You don’t need to give up yet - if the pool is imported that doesn’t mean that the datasets are each mounted in the correct place or indeed mounted at all.

Please run the following commands in this exact order to see if it fixes it:

  • sudo zfs list VMData
  • sudo zpool export VMData
  • sudo zpool import -R /mnt VMData
  • sudo zfs list VMData
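If the pool imports but data still seems to be missing, it is also worth listing every dataset with its mountpoint and whether it is actually mounted (standard zfs list options):

  • sudo zfs list -r -o name,used,mountpoint,mounted VMData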

Importing and mounting are two different things.
At this point I have no reason to assume it’s mounted.

Running zfs list VMData will show the current status.

What @Protopia said.

Thank you - here are the results…

admin@truenasvm[~]$ sudo zfs list VMData
[sudo] password for admin:
NAME     USED  AVAIL  REFER  MOUNTPOINT
VMData  9.12T  11.9T   140K  /VMData
admin@truenasvm[~]$

admin@truenasvm[~]$ sudo zpool export VMData
admin@truenasvm[~]$                              <- no error, no output

admin@truenasvm[~]$ sudo zpool import -R /mnt VMData
admin@truenasvm[~]$                              <- no error, no output

admin@truenasvm[~]$ sudo zfs list VMData
NAME     USED  AVAIL  REFER  MOUNTPOINT
VMData  9.12T  11.9T   140K  /mnt/VMData
admin@truenasvm[~]$