Can't extend raidz2 pool

I am using a TRUENAS-MINI-3.0-XL+ – ElectricEel-24.10.0.2.

I am trying to extend my pool by adding another spare 16 TB drive that’s identical to all the rest.

When I go to Storage / Manage Devices / raidz2 / Extend / sdh (16.37 TiB) / Extend, it gives me this error:

“[EZFS_BADTARGET] cannot attach /dev/disk/by-partuuid/f4a4cfe2-9c64-4c1b-936b-ae58095557c2 to raidz2-0: raidz_expansion feature must be enabled in order to attach a device to raidz”


Error: concurrent.futures.process.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool.py", line 119, in extend
with libzfs.ZFS() as zfs:
File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool.py", line 136, in extend
i['target'].attach(newvdev)
File "libzfs.pyx", line 2318, in libzfs.ZFSVdev.attach
libzfs.ZFSException: cannot attach /dev/disk/by-partuuid/f4a4cfe2-9c64-4c1b-936b-ae58095557c2 to raidz2-0: raidz_expansion feature must be enabled in order to attach a device to raidz

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
res = MIDDLEWARE._run(*call_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in call
return methodobj(*params)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool.py", line 139, in extend
raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_BADTARGET] cannot attach /dev/disk/by-partuuid/f4a4cfe2-9c64-4c1b-936b-ae58095557c2 to raidz2-0: raidz_expansion feature must be enabled in order to attach a device to raidz
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 488, in run
await self.future
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 533, in __run_body
rv = await self.method(*args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
res = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/attach_disk.py", line 67, in attach
await extend_job.wait(raise_error=True)
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 436, in wait
raise self.exc_info[1]
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 488, in run
await self.future
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 522, in __run_body
rv = await self.middleware._call_worker(self.method_name, *self.args, job={'id': self.id})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1471, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1377, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1361, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EZFS_BADTARGET] cannot attach /dev/disk/by-partuuid/f4a4cfe2-9c64-4c1b-936b-ae58095557c2 to raidz2-0: raidz_expansion feature must be enabled in order to attach a device to raidz

Any ideas?

Knowing absolutely nothing at all (in general), this one segment makes me wonder if your pool itself has been upgraded to support this feature.

Yes - exactly. The capability of expanding by adding drives is new in Electric Eel, and if the pool wasn’t created in EE it will need to be “Upgraded”.

When you go to the Storage page of the UI, there should be an Upgrade button against this pool. Click this and once the pool capabilities have been upgraded, you should then be able to expand the pool by adding a new drive.
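If you prefer to check from a shell first, this is roughly what that Upgrade button does under the hood. A minimal sketch, assuming the pool is named tank as in the status output further down the thread:

    # Show whether the pool already has the feature (disabled / enabled / active)
    zpool get feature@raidz_expansion tank

    # Enable every feature flag the installed OpenZFS version supports
    # (broadly what the UI Upgrade button does; note this is one-way)
    zpool upgrade tank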


But please note, once a pool has been upgraded, you can’t go back. Thus, Dragonfish will likely never work on that pool again.

I prefer to enable only the features I “need”, through the command line. Then I wait an excessive amount of time before enabling and trying new features on my file systems.
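For this particular case, that selective approach looks something like the sketch below (pool name tank assumed; the available feature names come from zpool upgrade -v):

    # List the feature flags this OpenZFS build knows about
    zpool upgrade -v

    # Enable only the feature needed for raidz expansion,
    # leaving every other feature at its current state
    zpool set feature@raidz_expansion=enabled tank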


@arwen has a good point in general, though in this case the pool upgrade is necessary because the OP wants to expand his pool.

My own philosophy is as follows:

  1. I am very conservative about upgrades, waiting for the .1 or .2 version before upgrading (so I avoid a lot of the issues that would cause me to fall back to the previous version I was running).

  2. I never want to go back more than one release from the last version I was running in production.

  3. BEFORE I upgrade versions, I do any available pool upgrades on all my non-boot pools (see the sketch after this list for checking what is still disabled). In other words, if I need any features that are in Dragonfish (e.g. Block Cloning) when I switch to EE, I will not be forced to Upgrade to EE features (which would stop me from falling back) in order to get the Dragonfish ones I need.

  4. If I find I need a recent feature and I am confident that it is fully stable, then I will upgrade only the specific pool I want to use that feature on.
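For point 3, a quick way to see what a pool is still missing before (or after) an OS upgrade, sketched again with a pool called tank:

    # With no arguments, lists pools that have features not yet enabled
    zpool upgrade

    # Per-feature state for one pool: disabled, enabled or active
    zpool get all tank | grep feature@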


The information you provided worked; however, I was unprepared for the amount of time it takes to expand the pool. It looks like it’s going to take 12 days. I’m dealing with some thunderstorms here, so I’m hoping the server doesn’t have a power outage. I think copying all the data to spare hard drives, recreating the vdev, and copying back might have been faster?


pool: tank
state: ONLINE
scan: scrub repaired 0B in 09:42:30 with 0 errors on Sun Nov 10 09:42:33 2024
expand: expansion of raidz2-0 in progress since Wed Nov 13 19:37:34 2024
1.86T / 45.1T copied at 41.5M/s, 4.12% done, 12 days 15:56:57 to go
config:

    NAME                                      STATE     READ WRITE CKSUM
    tank                                      ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        f5394800-4dd6-4ee9-adda-10cd9096a70c  ONLINE       0     0     0
        e0ee934b-bceb-4dd7-a9ae-50c6345edb2f  ONLINE       0     0     0
        01b2cb91-1357-4d3b-877e-b708d32b1f62  ONLINE       0     0     0
        2c3205fd-f985-48fe-b5d7-35598fc129aa  ONLINE       0     0     0
        5a7704d2-b380-47a1-aff4-3fd6dc422178  ONLINE       0     0     0
        470507ad-3fb0-4c2f-a16a-a8c77959af27  ONLINE       0     0     0
        43e170d4-c36a-4f9d-9b0c-23c9efee103e  ONLINE       0     0     0
        96e88faf-5ace-4a0b-8c1f-f45db4a324ba  ONLINE       0     0     0

The vdev expansion needs to touch and redistribute every block of that 45T, so that’s going to take a while.

The good news is that the feature is designed to pick up where it left off if interrupted so you should be good to reboot and keep going if you have a power outage.
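If you want to keep an eye on it (or script something around completion), the progress numbers come from zpool status, and zpool wait can block until the expansion finishes. This is a sketch only; the raidz_expand wait activity is an assumption about the OpenZFS release shipped with Electric Eel:

    # Progress, copy rate and ETA show up in the "expand:" line
    zpool status tank

    # Block until the expansion completes (assumes zpool wait in this
    # OpenZFS version supports the raidz_expand activity)
    zpool wait -t raidz_expand tank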


One last note: the time also depends on the type of disk. If they are SMR, and not CMR, that could significantly slow down the column add.