TrueNAS or ZFS USB HDD connection stable

@Protopia, what @etorix said was that “Remove” cleanly removes the device (it applies only to mirror pools, of which stripes are a degenerate case), and I don’t have a mirrored pool.

Since I have to proceed, I just initiated the “Remove” job; it has been running since this afternoon.
Seems to take a looooooong time.

What he meant was that a single drive vDev is effectively an unmirrored mirror vDev (as opposed to a RAIDZ vDev) and so Remove will work on it.

The Remove will take some time because it is moving data. If either of your drives is SMR (and especially if your 4TB drive is SMR), it will take 10x or 100x as long as it would on a CMR drive.
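While the removal runs, `zpool status` reports its progress, so you don't have to guess. A minimal check, assuming the pool is the one named `tank` later in this thread:

```
# Shows a "remove:" section with the amount of data copied so far while
# the device evacuation is in progress (and a summary once it completes).
sudo zpool status tank
```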


As explained above already:

You have a mirror layout: think of a single-drive vdev as a “1-way mirror”.

I stopped all VMs and apps before the Remove, but Remove caused errors.
However, when I look at my pool I see that the VDEV has been removed.
Also the Storage dashboard shows the correct used capacity.

The remove job ended with 2 error tasks:

pool.remove:
[EFAULT] Failed to wipe disks:

  1. sdb: [Errno 16] Device or resource busy: '/dev/sdb'

Quick wipe of disk sdb
[Errno 16] Device or resource busy: '/dev/sdb'

Since the faulty HDD is not in the pool I assume it is safe to physically remove the HDD.
What is the best strategy after physically replacing the faulty 6TB HDD?

  1. Add the new 6 TB HDD to the pool, and remove the 4 TB HDD using the “Remove” option
  2. Use the “Replace” option on the 4 TB HDD to replace it with the 6 TB HDD

Option 2 is simpler.
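For reference, the GUI “Replace” corresponds to ZFS’s replace operation; a rough CLI sketch (device names are placeholders, and on TrueNAS the GUI is preferable because it also handles partitioning):

```
# Resilver the data from the old 4TB member onto the new 6TB disk and
# drop the old device from the mirror once the resilver completes.
# <old-device> and <new-device> are placeholders, not names from this thread.
sudo zpool replace tank <old-device> <new-device>
```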

OK, that will be my approach.
Thanks @etorix and @Protopia for your guidance!

So now I have both 6 TB data HDDs in, 1 with data on it and 1 unassigned.
@Protopia wrote " IMO you really need to create a new pool and replicate your data to it."

I tried to create a new mirrored pool, but that did not succeed. The pool creation wizard needs 2 drives for a mirror VDEV and I only have 1 unassigned drive, so it needs a different approach. Can you help out?

Go to Storage > Pool, then Status, click on the 3-dot menu for the sole drive and select “Extend” to turn your single drive (= 1-way mirror) into a 2-way mirror.
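Under the hood, “Extend” is essentially ZFS’s attach operation. A hedged sketch with placeholder device names, in case you want to see what it maps to (the GUI remains the recommended route on TrueNAS because it creates the partition for you):

```
# Attach <new-device> to the vdev that currently holds <existing-device>,
# converting the single-disk vdev into a two-way mirror and resilvering.
sudo zpool attach tank <existing-device> <new-device>
```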


Succeeded, thanks again!


Resilvering completed; now I have 1 Mirror with 2 VDEVs sized 5.46 TB.
However, Storage Dashboard Usage shows usable capacity 3.51 TB.
Seems the larger 6 TB disks are NOT fully used.

Looking at the allocation:
root@truenas[~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda               8:0    0   5.5T  0 disk
└─sda1            8:1    0   3.6T  0 part
sdb               8:16   0   5.5T  0 disk
└─sdb1            8:17   0   5.5T  0 part
zd0             230:0    0    32G  0 disk
nvme0n1         259:0    0 931.5G  0 disk
├─nvme0n1p1     259:1    0     1M  0 part
├─nvme0n1p2     259:2    0   512M  0 part
├─nvme0n1p3     259:3    0   915G  0 part
└─nvme0n1p4     259:4    0    16G  0 part
  └─nvme0n1p4   253:0    0    16G  0 crypt

So the sda VDEV does not use full capacity.
Can I just “Remove” the sda VDEV and somehow redo the partitioning?

I think you mean 1x vDev which is a mirror.

Please run the following and copy and paste the results (between ``` lines so that output formatting is maintained):

  • lsblk -bo NAME,MODEL,PTTYPE,TYPE,START,SIZE,ROTA,PARTTYPENAME,PARTUUID
  • sudo zpool status
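As an optional extra (not part of the request above), `zpool list -v` is also handy here because its EXPANDSZ column shows space that the pool could grow into:

```
# Per-vdev/device sizes; EXPANDSZ indicates unexpanded capacity, if any.
sudo zpool list -v tank
```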

root@truenas[~]# lsblk -bo NAME,MODEL,PTTYPE,TYPE,START,SIZE,ROTA,PARTTYPENAME,PARTUUID
NAME            MODEL                PTTYPE TYPE      START          SIZE ROTA PARTTYPENAME             PARTUUID
sda             WDC WD60EFPX-68C5ZN0 gpt    disk            6001175126016    1
└─sda1                               gpt    part       4096 4000785105408    1 Solaris /usr & Apple ZFS 3aa296ab-1291-48b7-9258-e13c29d63ace
sdb             WDC WD60EFPX-68C5ZN0 gpt    disk            6001175126016    1
└─sdb1                               gpt    part       4096 6001172414976    1 Solaris /usr & Apple ZFS c0518ad0-af00-48cf-b71a-4d2f0ec356a5
zd0                                  gpt    disk              34359754752    0
nvme0n1         Samsung SSD 980 1TB  gpt    disk            1000204886016    0
├─nvme0n1p1                          gpt    part       4096       1048576    0 BIOS boot                5332236c-107e-4924-81c5-d5060c237f15
├─nvme0n1p2                          gpt    part       6144     536870912    0 EFI System               d8cfe5eb-7698-4c84-80cd-7aee10eb4b9c
├─nvme0n1p3                          gpt    part   34609152  982484983296    0 Solaris /usr & Apple ZFS eab2f387-ad49-42ec-9a74-119ea52a895d
└─nvme0n1p4                          gpt    part    1054720   17179869184    0 Linux swap               0b7645fa-4722-4390-9dfc-192a60432168
  └─nvme0n1p4                               crypt            17179869184    0

root@truenas[~]# sudo zpool status
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:18 with 0 errors on Sun Nov 3 03:45:20 2024
config:

    NAME         STATE     READ WRITE CKSUM
    boot-pool    ONLINE       0     0     0
      nvme0n1p3  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: resilvered 1.48T in 04:02:50 with 0 errors on Wed Nov 6 18:47:05 2024
remove: Removal of vdev 1 copied 798G in 15h35m, completed on Tue Nov 5 01:49:36 2024
        4.59M memory used for removed device mappings
config:

    NAME                                      STATE     READ WRITE CKSUM
    tank                                      ONLINE       0     0     0
      mirror-0                                ONLINE       0     0     0
        3aa296ab-1291-48b7-9258-e13c29d63ace  ONLINE       0     0     0
        c0518ad0-af00-48cf-b71a-4d2f0ec356a5  ONLINE       0     0     0

errors: No known data errors

The SCALE partitioning bug strikes again…
Yes, remove and reattach would be a way to solve it.

Great - so it definitely is a mirror.

We just need to help you sort out the size issue. I suspect (but I do not know) that you need to expand the size of the raw partition /dev/sda1 to be the same size as /dev/sdb1 and ZFS should expand to fit.

There is a zpool attribute for autoexpand if memory serves me - sudo zpool get autoexpand tank.
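If autoexpand turned out to be off, or the pool still did not grow after the partition was enlarged, the usual levers are the pool property and an explicit online-expand. A sketch only, with a placeholder for the device:

```
# Let the pool grow automatically when a member device/partition gets larger.
sudo zpool set autoexpand=on tank

# Explicitly expand one member to use all available space.
# <device> is a placeholder for the partition name or PARTUUID from zpool status.
sudo zpool online -e tank <device>
```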

root@truenas[~]# zpool get autoexpand tank
NAME  PROPERTY    VALUE  SOURCE
tank  autoexpand  on     local

I found a thread where something similar was mentioned. The solution was: ‘I decided to proceed with detaching and reattaching the drives from the pool’s VDEV one by one’.
@etorix also mentions “remove and reattach”, so I will try to ‘Detach’ and add it back later on…
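For reference, the detach / re-attach cycle maps roughly onto the following ZFS commands (a sketch with placeholder device names; the TrueNAS GUI “Detach” and “Extend” buttons are the safer way to do it because they also recreate the partition at full size):

```
# Drop one side of the mirror...
sudo zpool detach tank <old-partition>

# ...then re-add the freshly repartitioned device; ZFS resilvers it back in.
sudo zpool attach tank <remaining-partition> <new-partition>
```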

Started ‘Detach’ on sda and, after it finished, ‘Expand’ on sdb; now Storage Dashboard Usage shows usable capacity 5.33 TB and ‘lsblk’ shows:
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda               8:0    0   5.5T  0 disk
└─sda1            8:1    0   5.5T  0 part
sdb               8:16   0   5.5T  0 disk
└─sdb1            8:17   0   5.5T  0 part
zd0             230:0    0    32G  0 disk
nvme0n1         259:0    0 931.5G  0 disk
├─nvme0n1p1     259:1    0     1M  0 part
├─nvme0n1p2     259:2    0   512M  0 part
├─nvme0n1p3     259:3    0   915G  0 part
└─nvme0n1p4     259:4    0    16G  0 part
  └─nvme0n1p4   253:0    0    16G  0 crypt

Thanks again @Protopia and @etorix : operation succeeded!
