Can't find any way or option to delete an unwanted dataset in the GUI

I am simply trying to delete my one and only test dataset.
This is a simple single drive stripe.
I removed the SMB share that I tested on it.
I also moved the system dataset back to the boot drive.
Tried rebooting.
There are NO options to actually remove the dataset that I can find.
I also can’t remove the VDEV; when I try, I get this error:
" cannot remove /dev/disk/by-partuuid/c1291477-5a37-498c-b281-a40ebb6e1cfd: out of space"

There are no services or items tied to the VDEV or dataset that I am aware of.
This is a very basic out of the box configuration.

I also removed any scheduled data protection tasks that were tied to the dataset, and then rebooted.

You can’t remove a vdev if the data has nowhere else to go. Since the pool is a single drive, it’s impossible to remove the vdev.
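
For context, the same limit exists at the shell: device removal has to migrate the vdev's data onto the remaining vdevs, and a single-drive pool has nowhere to put it. A rough sketch of what the GUI is attempting under the hood (the pool name "tank" here is just a placeholder; the partuuid is the one from your error):

  zpool status tank        # shows the pool is a single-disk stripe
  zpool remove tank /dev/disk/by-partuuid/c1291477-5a37-498c-b281-a40ebb6e1cfd
  # fails with "cannot remove ...: out of space" - the same EZFS_NOSPC error the GUI reports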

Do you want to re-use the drive? You can export the pool and then wipe the drive.


Hi.
I do not want the data on the drive.
Is there a way to wipe it from the UI?
Just want to wipe it and physically remove the drive.
Wiping it in place would be a plus.
But I am not seeing any way to do this in the interface.
Do not want to move the data anywhere.
Thanks.

Yes.

You must first export the pool. Now the GUI will allow you to wipe the drive. Please use caution.
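
If it helps to see the CLI side of it (the GUI is the supported way; the pool name "tank" below is just a placeholder):

  zpool export tank        # closes the pool; nothing is copied anywhere
  # once the pool is exported, the GUI will let you wipe the drive;
  # the rough shell equivalent would be something like:
  wipefs -a /dev/sdX       # sdX is a placeholder for your data disk - double-check the device first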


Thanks!
I’m a bit new to this.

Is there a way to do this without exporting the pool?
Or is this really all you get here?
I could also physically remove the drive and see if it can be “deleted” after the drive is gone.

My intention is to just delete it.
Not to spend time exporting it somewhere.
And then having to deal with deleting the export.
Is there a way to just delete it without exporting it?
The interface seems to indicate it can be deleted.
If I try, I get a string of errors.
Nothing in them obviously says that it needs to be exported first.
Although that might be why.

Error: concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 71, in __zfs_vdev_operation
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 76, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 89, in impl
    getattr(target, op)()
  File "libzfs.pyx", line 2386, in libzfs.ZFSVdev.remove
libzfs.ZFSException: cannot remove /dev/disk/by-partuuid/c1291477-5a37-498c-b281-a40ebb6e1cfd: out of space

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 116, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 47, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 41, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 178, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 125, in remove
    self.detach_remove_impl('remove', name, label, options)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 92, in detach_remove_impl
    self.__zfs_vdev_operation(name, label, impl)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 78, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOSPC] cannot remove /dev/disk/by-partuuid/c1291477-5a37-498c-b281-a40ebb6e1cfd: out of space
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 515, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 560, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 174, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 48, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/pool_disk_operations.py", line 229, in remove
    await self.middleware.call('zfs.pool.remove', pool['name'], found[1]['guid'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1005, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 728, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 734, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 640, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 624, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EZFS_NOSPC] cannot remove /dev/disk/by-partuuid/c1291477-5a37-498c-b281-a40ebb6e1cfd: out of space

I also do not see any reference to “pools” in the interface.
And I do not see any options to “export” the dataset anywhere.
TrueNAS System Version: 25.04.2.4
Community Edition.

Have you looked at the Documentation? It may help, since it includes images.


I will spend some time here in the documentation.
Within the interface I do not see any reference to pools.
If it’s even possible, I may have never actually created one, unless it was done automatically when I set up the test dataset.
The only place I see pools at all is “create pool” on the main dashboard page.
I do not see pools appear anywhere else.
I had quickly and out of the box created a single-drive stripe with a test dataset.
Created a user, created an SMB share, and tested that.
I then removed the SMB share and turned off the SMB service.
And am left with only the test dataset (which I simply want to just remove).
And wipe the drive.
I do not see any pool anywhere.
I see only my test dataset in place and no options to export it.
I see an option to delete the VDEV, but if I try it I get the error presented above (out of space).
I see no options to export the VDEV or the dataset.

I imagine I can just pull the drive out.
This is the ONLY data drive in the system aside from the boot drive.
I’m not going to hurt anything that matters.
Really I am just trying to learn something here versus doing exactly that.

Post a screenshot of your GUI: the Storage page and the Datasets page.


It was the little buttons in the upper right of the storage manager.
I was able to export/disconnect.
So this is probably going to work now.
I have no idea where the “export” went.
Possibly to the boot drive?
It did not prompt or ask.
I suspect this did not actually “export” any data anywhere, but rather put the pool or VDEV into an “exported” state, ready to be “imported” back into this or a different system.
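
From what I can tell, that is exactly how ZFS export behaves: no data is copied off the disk; the pool is just closed and left in a state where it can be imported again, here or on another machine. Roughly (pool name is a placeholder):

  zpool export tank        # pool disappears from this system; the data stays on the disk
  zpool import             # lists exported pools that could be imported
  zpool import tank        # would bring it straight back if you changed your mind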

Reading the doc page clued me in to this, as I had been ignoring or not noticing the buttons in the upper right before.
Just the way my brain works sometimes.

This was so simple that I missed it.

I totally get it now though.
Just had to trip over myself publicly for a minute to get it done.
All good.
Thank you very much for the help.