We have a pool holding a large amount of data with a lot of snapshots (we would be open to clearing some of those if it helps) in a hard-disk enclosure of 8x16TB disks organised as raidz1. We switched the enclosure's connection from USB to eSATA, hoping to get beyond a meager 40 MB/s. Instead, ZFS started a resilver that progressed for a few minutes and has now been stuck at the same scanned/issued count for a good 4 hours!
We also have a traditional RAID array in an identical enclosure with disks half the size; it has been working smoothly since the connector change.
$ sudo zpool status -xv
pool: b
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Fri Oct 4 18:15:06 2024
33.7G scanned at 2.19M/s, 18.8G issued at 1.23M/s, 95.8T total
0B resilvered, 0.02% done, no estimated completion time
config:
NAME                                      STATE     READ WRITE CKSUM
b                                         ONLINE       0     0     0
  raidz1-0                                ONLINE       0    12     0
    e654fda2-1bab-4dd7-8941-27b7c5399456  ONLINE       3     6     0
    c3e92d12-8b7d-475c-ac04-2afd4887b551  ONLINE       3    20     0
    89e8070b-c005-4d31-9159-c2368ffd4be3  ONLINE       3     6     0
    5b7157fc-7996-4c6b-9600-2b3d93b90bd9  ONLINE       3    22     0
    3e08e841-46a5-41b2-92d0-13a47c81d6d5  ONLINE       3     6     0
    e961fda4-7c66-4d3c-9fee-d3346908da32  ONLINE       3     6     0
    18f61b50-172e-4a17-b0ca-51e219491a8d  ONLINE       3     6     2
    8ab72240-b593-4094-be04-ab8482f30414  ONLINE       3     6     0
errors: List of errors unavailable: pool I/O is currently suspended
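We are happy to collect more diagnostics if that helps; something along these lines is what we had in mind (device names below are placeholders for our setup):

```shell
# Look for eSATA link resets / transport errors around the time of the hang
dmesg | grep -iE 'ata[0-9]|link|reset|i/o error' | tail -n 50

# SMART health for one member disk; behind some eSATA/USB bridges,
# "-d sat" is needed for passthrough (/dev/sdb is a placeholder)
sudo smartctl -a -d sat /dev/sdb | head -n 40
```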
$ sudo zpool iostat -v 1 2
capacity operations bandwidth
pool alloc free read write read write
---------------------------------------- ----- ----- ----- ----- ----- -----
b                                         95.8T  35.1T      5      0  40.3K  3.50K
  raidz1-0                                95.8T  35.1T      5      0  40.3K  3.50K
    e654fda2-1bab-4dd7-8941-27b7c5399456      -      -      0      0  5.09K    459
    c3e92d12-8b7d-475c-ac04-2afd4887b551      -      -      0      0  4.98K    447
    89e8070b-c005-4d31-9159-c2368ffd4be3      -      -      0      0  5.02K    456
    5b7157fc-7996-4c6b-9600-2b3d93b90bd9      -      -      0      0  4.87K    437
    3e08e841-46a5-41b2-92d0-13a47c81d6d5      -      -      0      0  5.86K    453
    e961fda4-7c66-4d3c-9fee-d3346908da32      -      -      0      0  5.88K    436
    18f61b50-172e-4a17-b0ca-51e219491a8d      -      -      0      0    104    457
    8ab72240-b593-4094-be04-ab8482f30414      -      -      1      0  8.53K    438
----------------------------------------  -----  -----  -----  -----  -----  -----
boot-pool                                 43.1G  74.9G      0     17  10.7K  1.85M
  nvme1n1p3                               43.1G  74.9G      0     17  10.7K  1.85M
----------------------------------------  -----  -----  -----  -----  -----  -----
eagle                                      219G  2.51T      0      0    699  1.14K
  raidz1-0                                 219G  2.51T      0      0    699  1.14K
    9c2a35d3-9d6e-45b3-aacc-25232bfb7909      -      -      0      0    228    390
    636f3146-83f0-4994-9d86-1dfd36950ff0      -      -      0      0    245    390
    6f2e5f52-c830-428e-8248-09377743ed57      -      -      0      0    225    391
----------------------------------------  -----  -----  -----  -----  -----  -----
nova                                       325G  7.12T      0      0    789  1.21K
  raidz1-0                                 325G  7.12T      0      0    789  1.21K
    925a0ab3-9dda-488f-9497-10d1e210e213      -      -      0      0    195    314
    14a8c6ff-de7c-4f41-a1f4-a759a6cb074b      -      -      0      0    198    311
    f1e680a4-1184-493a-bb2f-da6bcebf0246      -      -      0      0    198    307
    f4eef501-8214-40a9-be25-03a5b4c918e7      -      -      0      0    196    305
---------------------------------------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------------------------------------- ----- ----- ----- ----- ----- -----
b                                         95.8T  35.1T      0      0      0      0
  raidz1-0                                95.8T  35.1T      0      0      0      0
    e654fda2-1bab-4dd7-8941-27b7c5399456      -      -      0      0      0      0
    c3e92d12-8b7d-475c-ac04-2afd4887b551      -      -      0      0      0      0
    89e8070b-c005-4d31-9159-c2368ffd4be3      -      -      0      0      0      0
    5b7157fc-7996-4c6b-9600-2b3d93b90bd9      -      -      0      0      0      0
    3e08e841-46a5-41b2-92d0-13a47c81d6d5      -      -      0      0      0      0
    e961fda4-7c66-4d3c-9fee-d3346908da32      -      -      0      0      0      0
    18f61b50-172e-4a17-b0ca-51e219491a8d      -      -      0      0      0      0
    8ab72240-b593-4094-be04-ab8482f30414      -      -      0      0      0      0
----------------------------------------  -----  -----  -----  -----  -----  -----
boot-pool                                 43.1G  74.9G      0      0      0      0
  nvme1n1p3                               43.1G  74.9G      0      0      0      0
----------------------------------------  -----  -----  -----  -----  -----  -----
eagle                                      219G  2.51T      0      0      0      0
  raidz1-0                                 219G  2.51T      0      0      0      0
    9c2a35d3-9d6e-45b3-aacc-25232bfb7909      -      -      0      0      0      0
    636f3146-83f0-4994-9d86-1dfd36950ff0      -      -      0      0      0      0
    6f2e5f52-c830-428e-8248-09377743ed57      -      -      0      0      0      0
----------------------------------------  -----  -----  -----  -----  -----  -----
nova                                       325G  7.12T      0      0      0      0
  raidz1-0                                 325G  7.12T      0      0      0      0
    925a0ab3-9dda-488f-9497-10d1e210e213      -      -      0      0      0      0
    14a8c6ff-de7c-4f41-a1f4-a759a6cb074b      -      -      0      0      0      0
    f1e680a4-1184-493a-bb2f-da6bcebf0246      -      -      0      0      0      0
    f4eef501-8214-40a9-be25-03a5b4c918e7      -      -      0      0      0      0
---------------------------------------- ----- ----- ----- ----- ----- -----
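We also thought about checking the ZFS event log and per-disk latency to see whether one disk is stalling the rest; we have not run this yet (and `-w` may depend on the OpenZFS version):

```shell
# Recent ZFS events may show the probe failures / delays that suspended I/O
sudo zpool events | tail -n 30

# Latency histograms per vdev; a single stalling disk should stand out
sudo zpool iostat -w b
```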
What are sensible ways forward here?
- Reconnect via USB?
- Tweak ZFS parameters?
- Restart the system?
- Try mounting the pool?
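For the "tweak ZFS parameters" option, this is roughly what we had in mind, based on OpenZFS-on-Linux tunables (we are not sure they apply to our version, and presumably they only matter if the resilver is actually running rather than suspended):

```shell
# Current resilver tuning (OpenZFS on Linux exposes tunables under /sys)
cat /sys/module/zfs/parameters/zfs_resilver_min_time_ms
cat /sys/module/zfs/parameters/zfs_scan_suspend_progress

# Possible tweak: let the resilver use more time per txg
echo 5000 | sudo tee /sys/module/zfs/parameters/zfs_resilver_min_time_ms
```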
I scoured the forums but didn't really find an answer.