Trying to save my data, help appreciated

Hi all, I have a 3 x 12 TB RAIDZ1 pool, and one of the drives started to go out. It was under warranty, so I offlined it and sent it back. While waiting for a replacement, a second drive started going. I knew this was bad, so I shut the NAS down while I waited.

The new drive came in and I installed it, but by then the second drive had faulted and been taken offline, so I couldn't rebuild the array. I took that second drive out of the system, attached it and the new drive to a Linux box, and used ddrescue to copy the failing drive to the new one (the ddrescue command I used is pasted below the zpool output). The copy was successful with no lost data. I then sent the newly failed drive back for warranty and shut the NAS down again.

The second replacement drive came in, and I was hopeful I could get things running again. At this point TrueNAS wasn't seeing the pool. I tried several things (I didn't record everything I did), but I was eventually able to get the two older drives recognized, with one drive offline. I then replaced that offline drive with the new one, and it began resilvering. I thought I was in good shape, but after the resilver the pool showed degraded, with two drives online and a replacing operation between the old drive and the new one. I "detached" the old drive, and it finally showed all three drives online. It then began another resilver that is still ongoing, but somewhere in all of this the shares broke: my Windows boxes don't see the share, and my Linux boxes don't show the data. I'd like to access the data, back it up, then destroy and rebuild the pool, but I'm having no luck doing that. Here's my latest zpool status -v:

pool: truenas
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Tue May 20 19:42:05 2025
15.2T / 22.9T scanned at 305M/s, 14.0T / 22.9T issued at 281M/s
4.24T resilvered, 61.09% done, 09:14:15 to go
config:

    NAME                                      STATE     READ WRITE CKSUM
    truenas                                   ONLINE       0     0     0
      raidz1-0                                ONLINE       0     0     0
        978db5c8-3453-4f1a-9968-2f2f8c0272e4  ONLINE       0     0  347K
        17225795-9cfc-4b8e-8a5f-93ff770b913b  ONLINE       0     0 1.23K  (resilvering)
        4cdbcdcc-4695-49c3-af5e-39436c292d94  ONLINE       0     0  347K

errors: Permanent errors have been detected in the following files:

    /var/db/system/netdata-ae32c386e13840b2bf9c0083275e7941/netdata-meta.db-wal
    /var/db/system/netdata-ae32c386e13840b2bf9c0083275e7941/dbengine/datafile-1-0000001278.ndf
    /var/db/system/netdata-ae32c386e13840b2bf9c0083275e7941/dbengine/journalfile-1-0000001279.njf
    truenas/tnas:/Backups/DellSvr/263251A58CFC3769-00-00.mrimg.tmp
    truenas/tnas:/Backups/DellSvr/WindowsImageBackup/DellSvr/Backup 2025-04-01 070009/15628938-8214-4a63-ae23-dd751e26e8ec.vhdx
    truenas/tnas:/.recycle/chad/Backups/DellSvr/EC7EE03565E62CF6-06-06.mrimg
    truenas/tnas:/Backups/ProxMox/images/101/vm-101-disk-0.raw
    truenas/tnas:/.recycle/chad/Backups/DellSvr/5BA883DED6C79217-18-18.mrimg
    truenas/tnas:/Backups/ProxMox/dump/vzdump-qemu-103-2025_04_05-01_08_51.vma.zst
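
For what it's worth, the earlier ddrescue copy was run from a separate Linux box and was roughly the following, from memory; /dev/sdX is the failing drive, /dev/sdY is the new drive, and rescue.map is just the mapfile name I picked:

    ddrescue -f -n /dev/sdX /dev/sdY rescue.map
    ddrescue -f -d -r3 /dev/sdX /dev/sdY rescue.map

The first pass copies everything readable and skips the slow scraping, the second pass retries the bad areas; in my case it finished with nothing unreadable.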

From the share management console, the "/mnt/truenas/tnas" share shows up, but if I try to edit it, the file browser doesn't show it under /mnt/. Is there a way to restore the connection to the data so I can offload it?

If I try to disable the share in the control panel, I get this error:

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 198, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1466, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1417, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/sharing_service.py", line 145, in update
    rv = await super().update(app, audit_callback, id_, data)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 189, in update
    return await self.middleware._call(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1417, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 210, in nf
    rv = await func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 47, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 187, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/smb.py", line 1094, in do_update
    verrors.check()
  File "/usr/lib/python3/dist-packages/middlewared/service_exception.py", line 70, in check
    raise self
middlewared.service_exception.ValidationErrors: [EINVAL] sharingsmb_update.path_local: Path does not exist.
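
My guess from the "Path does not exist" error is that the dataset simply isn't mounted where the share points. From the TrueNAS shell, something like this should show whether it's mounted and where (pool and dataset names here are mine, adjust as needed):

    zfs list -r -o name,mounted,mountpoint truenas
    zfs mount truenas/tnas     # only if "mounted" shows no

That's just my guess, though, not something I'm sure of.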

In the end I was never able to get TrueNAS to stop resilvering the pool, and the dataset seemed to be corrupted, but using the shell I found all my data in the /mnt directory. I was able to add a dataset on another pool that pointed at my data; shares to that dataset worked properly, and I was able to copy my data off. Now I'm going to research destroying the pool and reconfiguring it in a more optimal scheme.
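
In case it helps anyone who hits the same thing: the copy-off itself was nothing fancy, roughly an rsync like this from the shell (from memory; the destination path is only an example):

    rsync -avh --progress /mnt/truenas/tnas/ /mnt/otherpool/rescue/

-a keeps permissions and timestamps, and -v with --progress just let me watch it go.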