I built my TrueNAS with three 8 TB drives. I had two more 8 TB drives in my Synology. I copied all the data to the TrueNAS, then moved the two drives over.
I added the first one via Storage->Manage Devices->Extend, and it seemed to get stuck waiting for the pool to expand. I eventually gave up and rebooted the system. The pool now shows the drive, but the capacity did not increase.
Now I am trying to add the second drive. When I do, I get this error:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 515, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 560, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 174, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 48, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/attach_disk.py", line 84, in attach
    await extend_job.wait(raise_error=True)
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 465, in wait
    raise CallError(self.error)
middlewared.service_exception.CallError: [EFAULT] 2098 is not a valid Error
I have no idea what this means.
You don’t say which version of TrueNAS you are running.
And expansion takes a long time - you need to be patient.
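As for the traceback: the "2098 is not a valid Error" part is simply the message Python prints when an integer is handed to an enum class (here one named Error) that has no member with that value, so my guess is the middleware got back a raw ZFS/libzfs error code it did not know how to translate - quite possibly because the first expansion was still running when you tried to attach the second disk. You can see the shape of that message with a throwaway enum (the member name and value below are made up purely for illustration):
python3 -c "import enum; E = enum.IntEnum('Error', {'EXAMPLE': 2000}); E(2098)"
That one-liner ends with "ValueError: 2098 is not a valid Error", which is the same text wrapped inside your CallError.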
Please run the following commands and post the output in </> boxes:
lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
sudo zpool status -v
Running 25.04.
# lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME        MODEL                   ROTA PTTYPE TYPE   START          SIZE PARTTYPENAME             PARTUUID
sda         ST8000VN004-3CP101         1 gpt    disk         8001563222016
└─sda1                                 1 gpt    part    2048 7999415386112 Solaris /usr & Apple ZFS 01ea666c-f3cd-492d-944c-9f21c465089a
sdb         ST8000VN004-3CP101         1 gpt    disk         8001563222016
└─sdb1                                 1 gpt    part    2048 7999415386112 Solaris /usr & Apple ZFS 342423d7-61ba-4af0-aca3-75b1079282ed
sdc         ST8000VN004-3CP101         1 gpt    disk         8001563222016
└─sdc1                                 1 gpt    part    2048 7999415386112 Solaris /usr & Apple ZFS c67844ad-af23-428d-af26-6a2fa7b66958
sdd         ST8000VN004-2M2101         1 gpt    disk         8001563222016
└─sdd1                                 1 gpt    part    2048 7999415386112 Solaris /usr & Apple ZFS ab7f15a6-7d9f-4f06-8291-afc17a283c68
sde         ST8000VN004-2M2101         1 gpt    disk         8001563222016
└─sde1                                 1 gpt    part    2048 7999415386112 Solaris /usr & Apple ZFS 1132f529-ece6-47d1-b744-0cafa4441014
zd0                                    0        disk            5819498496
nvme0n1     Samsung SSD 990 EVO 1TB    0 gpt    disk         1000204886016
├─nvme0n1p1                            0 gpt    part    4096       1048576 BIOS boot                3b884b03-83ba-4c20-8f19-61afd86fa412
├─nvme0n1p2                            0 gpt    part    6144     536870912 EFI System               5355c241-4ab1-4717-88fb-efcabd5c5f3f
└─nvme0n1p3                            0 gpt    part 1054720  999664852480 Solaris /usr & Apple ZFS aaf27985-a556-4a56-9551-817b2417c3bd
# sudo zpool status -v
  pool: backup_pool
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        backup_pool                               ONLINE       0     0     0
          ab7f15a6-7d9f-4f06-8291-afc17a283c68    ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sat Apr 19 06:45:02 2025
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors

  pool: pool1
 state: ONLINE
expand: expansion of raidz1-0 in progress since Tue Apr 22 22:00:45 2025
        7.27T / 8.39T copied at 188M/s, 86.67% done, 01:44:02 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        pool1                                     ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            01ea666c-f3cd-492d-944c-9f21c465089a  ONLINE       0     0     0
            c67844ad-af23-428d-af26-6a2fa7b66958  ONLINE       0     0     0
            342423d7-61ba-4af0-aca3-75b1079282ed  ONLINE       0     0     0
            1132f529-ece6-47d1-b744-0cafa4441014  ONLINE       0     0     0

errors: No known data errors
Thanks for any insights you have. I don’t see any jobs running that are related to pool expansion, but the command you gave me says it’s in progress. Should I be able to see that anywhere in the GUI?
Note: I made backup_pool to ensure the last drive was working correctly. It seems to work fine. I ultimately want to add it to pool1 also.
The first expansion is still in progress. Wait for it to finish. Then completely destroy the backup pool and start the second expansion using that drive.
I am not sure what the UI should show during expansion, but in some circumstances expansions can take days to complete.
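If you want to watch it from a shell in the meantime, something along these lines (assuming the pool is still called pool1) refreshes the status once a minute; the expand: line shows the copy rate and an estimated time remaining:
sudo watch -n 60 zpool status pool1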
The reporting of space after expansion in both ZFS and TrueNAS is screwed up.
Do sudo zpool list if you want accurate stats.
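For example, putting the pool-level and dataset-level views side by side makes the discrepancy obvious (swap in your own pool name if it differs):
sudo zpool list -v pool1
sudo zfs list pool1
zpool list shows raw capacity across all the disks, while zfs list shows usable space after parity, and the latter is the figure that tends to look wrong right after an expansion.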
Thanks for this. The UI clearly has some issues, but it looks like the underlying file system did the right thing. Now it’s showing the expansion as complete.
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 02:16:11 with 0 errors on Wed Apr 23 22:44:35 2025
expand: expanded raidz1-0 copied 8.45T in 08:55:44, on Wed Apr 23 20:28:24 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        pool1                                     ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            01ea666c-f3cd-492d-944c-9f21c465089a  ONLINE       0     0     0
            c67844ad-af23-428d-af26-6a2fa7b66958  ONLINE       0     0     0
            342423d7-61ba-4af0-aca3-75b1079282ed  ONLINE       0     0     0
            1132f529-ece6-47d1-b744-0cafa4441014  ONLINE       0     0     0
            0245ef9a-a91f-4936-9cf5-505573f01c69  ONLINE       0     0     0

errors: No known data errors
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool   928G  3.16G   925G        -         -     0%     0%  1.00x    ONLINE  -
pool1      36.4T  8.45T  27.9T        -         -     0%    23%  1.00x    ONLINE  /mnt
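If I am reading this right, the numbers add up: each 8 TB drive is roughly 7.28 TiB, so five of them give the 36.4T raw size shown by zpool list, and raidz1 should leave about four fifths of that, somewhere around 29 TiB usable, less ZFS overhead (and data written before the expansion keeps its old data-to-parity ratio, so the effective figure will be a bit lower).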