Hello TrueNAS friends,
I have run different versions of TrueNAS for a very long time and just made the jump from FreeNAS to SCALE, which went well. I have a 6-disk RAIDZ1 of 8 TB disks and wanted to add 2 more. When I added the first one, the expansion sat at 25% for a very long time, and I kept running sudo zpool status -v to watch its progress. Two weeks later it was still running, and then it just… stopped? It disappeared from the running jobs, and zpool status -v still shows it as in progress, but it is no longer changing or making any progress. Help?
Thanks so much!
Also, I’ve seen other similar posts where various commands were run to give visibility into what’s going on, so here are the results from my system:
Linux TrueNAS-Dreamland 6.6.44-production+truenas #1 SMP PREEMPT_DYNAMIC Tue Jan 28 03:14:06 UTC 2025 x86_64
TrueNAS (c) 2009-2025, iXsystems, Inc.
All rights reserved.
TrueNAS code is released under the LGPLv3 and GPLv3 licenses with some
source files copyrighted by (c) iXsystems, Inc. All other components
are released under their own respective licenses.
For more information, documentation, help or support, go here:
http://truenas.com
Warning: the supported mechanisms for making configuration changes
are the TrueNAS WebUI, CLI, and API exclusively. ALL OTHERS ARE
NOT SUPPORTED AND WILL RESULT IN UNDEFINED BEHAVIOR AND MAY
RESULT IN SYSTEM FAILURE.
Welcome to FreeNAS
Last login: Thu Mar 27 18:51:30 CDT 2025 on pts/3
root@TrueNAS-Dreamland[~]# 2025 Mar 28 13:03:23 TrueNAS-Dreamland Device: /dev/sdb [SAT], 16 Currently unreadable (pending) sectors
2025 Mar 28 13:03:23 TrueNAS-Dreamland Device: /dev/sdb [SAT], 16 Offline uncorrectable sectors
2025 Mar 28 13:03:24 TrueNAS-Dreamland Device: /dev/sdb [SAT], 16 Currently unreadable (pending) sectors
2025 Mar 28 13:03:24 TrueNAS-Dreamland Device: /dev/sdb [SAT], 16 Offline uncorrectable sectors
root@TrueNAS-Dreamland[~]# lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME MODEL ROTA PTTYPE TYPE START SIZE PARTTYPENAME PARTUUID
sda HUH728080ALE601 1 gpt disk 8001563222016
├─sda1 1 gpt part 128 2147483648 FreeBSD swap 8dea0485-fab4-11ef-a54a-18c04d39bbb3
└─sda2 1 gpt part 4194432 7999415652352 Solaris /usr & Apple ZFS 8e01acaf-fab4-11ef-a54a-18c04d39bbb3
sdb ST8000DM004-2CX188 1 gpt disk 8001563222016
└─sdb1 1 gpt part 2048 8001562156544 Solaris /usr & Apple ZFS 53b7fbad-bc9b-4dca-a135-4e4c0e43a463
sdc HGST HUH728080ALE604 1 gpt disk 8001563222016
├─sdc1 1 gpt part 128 2147483648 FreeBSD swap d937b99f-e48e-11ee-8a61-18c04d39bbb3
└─sdc2 1 gpt part 4194432 7999415652352 Solaris /usr & Apple ZFS d951dbc2-e48e-11ee-8a61-18c04d39bbb3
sdd ST8000DM004-2CX188 1 gpt disk 8001563222016
├─sdd1 1 gpt part 128 2147483648 FreeBSD swap 752fe6d1-f90b-11ea-a140-00e04c680398
└─sdd2 1 gpt part 4194432 7999415652352 Solaris /usr & Apple ZFS 7af72817-f90b-11ea-a140-00e04c680398
sde HGST HUH728080ALE604 1 gpt disk 8001563222016
├─sde1 1 gpt part 128 2147483648 FreeBSD swap 5c8ec18b-df53-11ee-a988-18c04d39bbb3
└─sde2 1 gpt part 4194432 7999415652352 Solaris /usr & Apple ZFS 5c95eb71-df53-11ee-a988-18c04d39bbb3
sdf ST8000DM004-2CX188 1 gpt disk 8001563222016
├─sdf1 1 gpt part 128 2147483648 FreeBSD swap a41870f5-6087-11ed-b11d-18c04d39bbb3
└─sdf2 1 gpt part 4194432 7999415652352 Solaris /usr & Apple ZFS a43804f5-6087-11ed-b11d-18c04d39bbb3
sdg ST8000DM004-2CX188 1 gpt disk 8001563222016
├─sdg1 1 gpt part 128 2147483648 FreeBSD swap 7546d7a9-f90b-11ea-a140-00e04c680398
└─sdg2 1 gpt part 4194432 7999415652352 Solaris /usr & Apple ZFS 7ae3e41e-f90b-11ea-a140-00e04c680398
sdh PNY CS900 240GB SSD 0 gpt disk 240057409536
├─sdh1 0 gpt part 40 272629760 EFI System 6b0aae35-1c87-11eb-9232-18c04d39bbb3
├─sdh2 0 gpt part 34086952 222600101888 FreeBSD ZFS 6b1114fc-1c87-11eb-9232-18c04d39bbb3
└─sdh3 0 gpt part 532520 17179869184 FreeBSD swap 6b0e4b98-1c87-11eb-9232-18c04d39bbb3
nvme0n1 SK hynix BC501 HFM256GDJTNG-8310A 0 gpt disk 256060514304
├─nvme0n1p1 0 gpt part 40 272629760 EFI System bcd7f28f-1895-4d02-9608-1ffe4f687019
└─nvme0n1p2 0 gpt part 532520 255787847168 Solaris /usr & Apple ZFS 9459799a-a535-42e1-81fc-34d2596e5495
root@TrueNAS-Dreamland[~]# lspci
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 PCIe GPP Bridge [6:0]
00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 PCIe GPP Bridge [6:0]
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Internal PCIe GPP Bridge 0 to Bus A
00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Internal PCIe GPP Bridge 0 to Bus B
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 7
01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset USB 3.1 xHCI Compliant Host Controller (rev 01)
01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller (rev 01)
01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge (rev 01)
02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
05:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
06:00.0 Non-Volatile memory controller: SK hynix BC501 NVMe Solid State Drive
07:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Picasso/Raven 2 [Radeon Vega Series / Radeon Vega Mobile Series] (rev c9)
07:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Raven/Raven2/Fenghuang HDMI/DP Audio Controller
07:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
07:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1
07:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1
07:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller
08:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 61)
root@TrueNAS-Dreamland[~]# sudo sas2flash -list
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved
Adapter Selected is a LSI SAS: SAS2008(B1)
Controller Number : 0
Controller : SAS2008(B1)
PCI Address : 00:05:00:00
SAS Address : 5000000-0-8000-0000
NVDATA Version (Default) : 14.01.00.08
NVDATA Version (Persistent) : 14.01.00.08
Firmware Product ID : 0x2213 (IT)
Firmware Version : 20.00.07.00
NVDATA Vendor : LSI
NVDATA Product ID : SAS9211-8i
BIOS Version : N/A
UEFI BSD Version : N/A
FCODE Version : N/A
Board Name : 6Gbps SAS HBA
Board Assembly : N/A
Board Tracer Number : N/A
Finished Processing Commands Successfully.
Exiting SAS2Flash.
root@TrueNAS-Dreamland[~]# sudo sas3flash -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02)
Copyright 2008-2017 Avago Technologies. All rights reserved.
No Avago SAS adapters found! Limited Command Set Available!
ERROR: Command Not allowed without an adapter!
ERROR: Couldn't Create Command -list
Exiting Program.
root@TrueNAS-Dreamland[~]# sudo zpool status -v
pool: POOL1
state: ONLINE
scan: resilvered 4.35T in 1 days 00:03:30 with 0 errors on Fri Mar 7 12:01:36 2025
expand: expansion of raidz1-0 in progress since Sat Mar 15 10:27:08 2025
32.0T / 36.3T copied at 29.6M/s, 88.21% done, 1 days 18:04:11 to go
config:
NAME STATE READ WRITE CKSUM
POOL1 ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
sdf2 ONLINE 0 0 0
sdd2 ONLINE 0 0 0
sdg2 ONLINE 0 0 0
sda2 ONLINE 0 0 0
sde2 ONLINE 0 0 0
sdc2 ONLINE 0 0 0
53b7fbad-bc9b-4dca-a135-4e4c0e43a463 ONLINE 0 0 0
errors: No known data errors
pool: freenas-boot
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P/
scan: resilvered 8.08M in 00:00:00 with 0 errors on Thu Mar 27 18:01:37 2025
config:
NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ata-PNY_CS900_240GB_SSD_PNY27202006240608828-part2 ONLINE 0 0 2
nvme0n1p2 ONLINE 0 0 0
errors: No known data errors
root@TrueNAS-Dreamland[~]# sudo zpool import
no pools available to import
root@TrueNAS-Dreamland[~]#
The new drive appears to be /dev/sdb, and it is a Seagate BarraCuda: a desktop drive, and SMR - so NOT a good choice, in fact a VERY bad choice - and that is why the expansion has been taking so long.
I see that /dev/sdd, /dev/sdf and /dev/sdg are the same model (ST8000DM004) - so 4 of the 7 disks are SMR - and that is NOT good news for write performance, especially during an expansion or resilver.
You should seriously consider replacing them with CMR drives before you expand again.
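If you want to double-check which drives are which model, smartctl will tell you; something like this should do it (the device letters are just the ones from your lsblk output, adjust to suit):

for d in /dev/sd{a..g}; do
  echo "== $d =="
  sudo smartctl -i "$d" | grep -E 'Device Model|Model Family|Rotation Rate'
done

Any ST8000DM004 in that list is a BarraCuda and therefore SMR.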
However, the priority is to get the current expansion finished, and I would think the best way to do that is to reboot. Expansions are supposed to continue after a reboot, so if it genuinely hasn’t finished it should carry on, and if it has already finished but failed to tell you, it should then report as complete.
I should warn you, however, that whilst the pool is supposed to survive a reboot, there are plenty of anecdotal reports of pools failing to import after one - so there is some risk, but you will have to reboot at some point anyway.
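If it does resume after the reboot, you don’t need to keep re-running status by hand. A minimal way to watch it, assuming your SCALE release ships an OpenZFS with the raidz_expand wait activity (it should, given it supports expansion at all):

sudo zpool status -v POOL1                # shows the "expand:" progress line
sudo zpool wait -t raidz_expand POOL1     # blocks until the expansion completes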
6-wide raidz1 is not very secure. 8-wide would be even less so…
I’d say the priority is to make a backup before things go sour. Then I would replace the SMR drives in place with CMR drives, beginning with the current sdb.
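The supported way to do each replacement on SCALE is through the web UI, but for reference the underlying ZFS operation is roughly this, one disk at a time, assuming the new CMR drive turns up as /dev/sdX (a placeholder, not a real device here):

sudo zpool replace POOL1 sdd2 /dev/sdX    # swap one SMR member for the new disk
sudo zpool status -v POOL1                # let the resilver finish before starting the next one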
I would agree that if you were starting from scratch you should be building a RAIDZ2. But since you cannot switch from RAIDZ1 to RAIDZ2 without rebuilding the pool, that can be impractical when the NAS is your central data store and you have nowhere else with remotely equivalent capacity.
We are talking about buying at least four CMR drives to replace these BarraCudas, which really want to eat your ZFS pool… This could be an opportunity to start over with a whole new pool in a new layout.
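If you do go the new-pool route, a recursive snapshot plus a replication send is the simplest way to move everything across; a rough sketch, assuming a second pool with enough space called BACKUP (the name is just an example):

sudo zfs snapshot -r POOL1@migrate                                    # snapshot every dataset in the pool
sudo zfs send -R POOL1@migrate | sudo zfs receive -F BACKUP/POOL1     # replicate datasets and their properties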
Yup, it looks like I need 4 more CMR disks and a fresh start from scratch. Up until this point I wasn’t really aware of the difference. Thank you for your help!