Pool.Attach job not progressing?

I moved my existing TrueNAS server over to a new NAS/PC enclosure today. I had needed to do this for three or so years because the NAS case I had, with an 8-drive NAS cage, had one bad drive position. I wasn’t in a hurry and didn’t have a great need for the ‘lost’ space from that one drive, so I proceeded as-is until this month.

I acquired another (different brand) ITX NAS case and moved everything over to it. Everything started and ran, and now TrueNAS could see the long-missing hard drive.

I added the single drive to the existing pool of (originally) 7, now 8, 4 TB drives, after reading and watching a few YouTube tutorials on how to do this. It is a RAIDZ2 configuration (always has been). I used Pool → Extend, then selected the new (and only available) drive on the list to add to the existing pool/vdev.

It has been three hours now and the job status still says 25% on the pool.attach job. The other jobs that started (I assume) as part of the process (Quickwipeofdisk_xxx, zfs.pool.extend, disksync_all) all ran successfully.
How long does the pool.attach job normally take on an 8-disk array (4 TB/disk)? I’ve seen no change in the percent completion, but I don’t want to risk crashing the array by rebooting the server/TrueNAS.

I’m fairly certain the process is NOT locked up - this case has individual drive activity indicators, and all of them are EXTREMELY active.

One additional point: all the disks are showing 35-40 MB/s continuous read and write activity, except for the new HDD. It is showing around 16 MB/s read but the same continuous ~40 MB/s write activity. I think the difference on the ‘new’ HDD is probably to be expected, since it is joining the pool and is the receiver of all the pool info/data required to enable the expansion.

Configuration: 8-bay NAS ITX case, headless setup.
Asus ROG Strix B550-I ITX motherboard w/2 PCIe slots.
32 GB RAM.
1 TB NVMe boot/TrueNAS application drive.
2 TB NVMe buffer drive.
8 WD CMR 4 TB HDDs, SATA 6 Gb/s via a SATA expansion card in one PCIe slot, in a RAIDZ2 configuration.

There are no other VDEVs; I was/am a newbie to TrueNAS and didn’t see a need for creating more VDEVs for my requirements. From what I’ve read, I believe that can’t be done now without ‘dumping’ the entire setup and starting from scratch, which I cannot/won’t do without a really critical reason.

I’ve only seen a couple of other posts on what appears to be the same topic. The indication is that this can take days or weeks - something the YouTube tutorial videos failed to mention. I just want to make sure I’m not overlooking something.

???

Search the tag RAID-Zx-expansion.
I think you’ll get more accurate info from running the command sudo zpool status -v. The Job Status page estimates its percentage based on where the task is in the overall job process; in this case it may have completed one step out of the four needed to consider the job complete. The command line should be more accurate, but that is just the way the GUI and those statuses work.


Thanks, I ran the command and it indicates 3.54 TB completed of 15.6 TB to be written in the expansion, with approximately 12 hrs remaining.
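Those numbers also line up with the per-drive activity mentioned earlier. Here is a rough back-of-the-envelope check, assuming ~40 MB/s sustained writes across the seven pre-existing drives (the figures are illustrative, not an official formula TrueNAS uses):

```python
# Rough sanity check of the RAIDZ expansion ETA from `zpool status -v` figures.
TB = 1e12   # zpool reports decimal terabytes
MB = 1e6

total_tb = 15.6      # total data to rewrite, reported by zpool status
done_tb = 3.54       # already rewritten
per_disk_mb_s = 40   # approximate observed continuous write rate per drive
disks = 7            # the seven pre-existing drives carrying the rewrite

aggregate_rate = per_disk_mb_s * disks * MB                   # bytes/second
remaining_hours = (total_tb - done_tb) * TB / aggregate_rate / 3600
print(f"~{remaining_hours:.1f} hours remaining")  # ~12.0 hours remaining
```

That the estimate comes out near the reported 12 hours suggests the drives are simply busy rewriting data, not stuck.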

You will also want to rely on the command line for the most accurate space reporting with RAIDZ expansion. That issue should be covered under that tag as well.


Yes, I saw a couple of topics on the inaccurate space reporting in the GUI.

Thinking on a future task now - if I understood some of the other threads correctly, a RAIDZ can be ‘upgraded’ one drive at a time, by removing an existing drive and installing a new, larger drive to replace it. TrueNAS will resilver it and incorporate it into the pool. My impression is that this is the simplest, if not necessarily the best, method to increase storage in a pool.

I’m sure I can find the answer in the faq/documentation, but thought I’d ask you anyway.

Assuming sufficient free capacity in the pool, is it possible to remove two drives, even if one drive at a time? Then add two new, larger drives, create a new vdev, and add it to the existing pool? Is that possible, or even recommended vs. the single-drive-at-a-time approach? Just looking for some recommendations/pointers for developing an upgrade strategy.

thanks!

Clayton

All the drives in a VDEV need to be replaced to get the additional space.
A two-drive mirror of 4 TB drives would need both drives replaced with larger models. Once resilvering/replacement is completed, you hit EXPAND in the GUI for that VDEV to claim the added space, if the system didn’t do so automatically.

Raid-Z expansion is a bit different: you are taking something like a three-wide Raid-Z1 of 4 TB drives and adding a fourth 4 TB or larger drive to the VDEV, creating a four-wide Raid-Z1. You use the EXTEND function in the GUI.

If you are replacing all drive members in a Raid-Z(1,2,3), it behaves just like the first example with the mirror VDEV: you don’t gain any added space until all drives are replaced and the EXPAND happens.
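For a concrete sense of the payoff, here is the rough usable-capacity arithmetic for that replace-everything upgrade, applied to the 8-wide Raid-Z2 in this thread. This is a simplified sketch that ignores ZFS metadata, slop space, and padding overhead:

```python
def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Approximate usable capacity of a Raid-Z vdev: (N - parity) * drive size.

    Ignores ZFS metadata, slop space, and padding overhead, so real
    usable space will be somewhat lower.
    """
    return (drives - parity) * drive_tb

# 8-wide Raid-Z2 of 4 TB drives today:
now_tb = raidz_usable_tb(8, 4, 2)     # 24 TB
# Same vdev after every member is replaced with 10 TB and EXPAND runs:
after_tb = raidz_usable_tb(8, 10, 2)  # 60 TB
print(now_tb, after_tb)
```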

You can also gain space in a pool by adding an additional VDEV.
Start with a two-wide mirror of 4 TB drives in a VDEV, then add another two-wide mirror of 4 TB drives. The same can be done with Raid-Z(1,2,3) VDEVs and pools.

I think it will all make sense once you read the ZFS Primer and the Pool Layout Whitepaper. If you have questions or something isn’t clear, just ask on the forum.

iX Systems pool layout whitepaper


Thanks, I’ll read the linked documents.

In my case, I’m considering ‘shrinking’ the array/vdev by two drives, from 8x4 TB to 6x4 TB. Then I would pull the dismounted drives and install 2x10 TB drives into their own vdev in another RAIDZ (?) config, then stripe (?) it into the pool, so a single pool is all the SMB share on the network sees.

Does that sound close to right?

We can’t shrink the width of a Raid-Z VDEV, though it can be possible with mirrors and multiple VDEVs. Since you went to 8 wide, the only way to do what you want would be to back up all your data elsewhere, destroy the pool, and recreate it.
I think you could combine a 6-wide Raid-Z2 VDEV (4 TB drives) with a two-wide VDEV of 10 TB drives, but I don’t think I would recommend that. I don’t know how data, IOPS, reads, writes, etc. would be distributed, or whether it would cause performance issues. Someone will correct me if you can’t combine two different types of VDEVs under one pool!
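If such a mixed pool were built anyway, the rough usable capacity would just be the sum of each vdev’s contribution. The numbers below are illustrative and, as above, ignore ZFS overhead:

```python
# Hypothetical mixed pool: 6-wide Raid-Z2 of 4 TB drives plus a
# 2-wide mirror of 10 TB drives, striped together in one pool.
raidz2_tb = (6 - 2) * 4   # Raid-Z2 keeps N - 2 drives' worth of data -> 16 TB
mirror_tb = 10            # a two-way mirror stores one drive's worth -> 10 TB
pool_tb = raidz2_tb + mirror_tb
print(pool_tb)  # 26
```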

Slowly replacing the members of your 8-wide Raid-Z2 VDEV, until all are 10 TB, and EXPANDing the pool/VDEV at the end would be your best choice right now.
