ZFS rename worked, but the files no longer show up

Datasets are independent filesystems. Think “partitions”…

Moving the files should do it.
mv /mnt/Data/data_yunohost/* /mnt/Data/yunohost/jellyfin/ &

I would just make sure that your datasets match your mounts and expected folder structure.

What you see here:

zfs list -t filesystem -r -o space Data

Should align with this:

zfs mount | grep "/mnt/Data"
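For example, with a dataset layout like the one in this thread, the first command might show something like this (hypothetical names and sizes):

NAME                    AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
Data                    3.5T   1.2T  0B        96K     0B             1.2T
Data/yunohost           3.5T   1.2T  0B        96K     0B             1.2T
Data/yunohost/jellyfin  3.5T   1.2T  0B        1.2T    0B             0B

and the second should then list a matching mountpoint for each dataset:

Data                    /mnt/Data
Data/yunohost           /mnt/Data/yunohost
Data/yunohost/jellyfin  /mnt/Data/yunohost/jellyfin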

That is what I planned to do in the first place. The “problem” was that I didn’t want all the data to be written again, since it is a move from one dataset to another. So I looked into the possibility of renaming an existing dataset instead. And as mentioned, it worked in my test; I just don’t know why this one failed :dizzy_face:

Is there any other way besides moving the data with mv?

Looks good. I have also removed the jellyfin dataset, just to be sure.

Unfortunately, when I try to export the pool Data, I run into the same error.

concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 52, in export
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 55, in export
    zfs.export_pool(pool)
  File "libzfs.pyx", line 1443, in libzfs.ZFS.export_pool
libzfs.ZFSException: cannot unmount '/mnt/Data': pool or dataset is busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 57, in export
    raise CallError(str(e))
middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Data': pool or dataset is busy
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/export.py", line 174, in export
    await self.middleware.call('zfs.pool.export', pool['name'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Data': pool or dataset is busy

Any clue why? Is it because there is data that does not belong there? Should there not be any data outside of the datasets?
It would be weird to clean up the data and datasets and then be stuck with an “unexportable” pool :grimacing:
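If something still has open files under the mount, a check like this (assuming fuser is available) should list the processes keeping it busy:

fuser -vm /mnt/Data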

Edit: I had the option Delete saved configurations from TrueNAS? still checked by mistake; after unchecking it, the export went through without a problem.

cp -a and then rm, in two steps, would actually be safer.
The data really needs to be in a dedicated dataset that serves as the mountpoint. Since the data currently sits in a plain folder under the root dataset, where you are not supposed to store anything (create child datasets and use those instead), there is no alternative to copying.
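A minimal two-step sketch, assuming the same paths as in the mv suggestion above:

cp -a /mnt/Data/data_yunohost/. /mnt/Data/yunohost/jellyfin/

Verify the copy (the trailing /. also picks up hidden files), and only then remove the originals:

rm -rf /mnt/Data/data_yunohost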

Yeah, as my data came from multiple locations, I was in the process of structuring and cleaning it up, and it seems I got a bit confused.

Yes, that explains why the dataset was not showing the data size. It seems I made a mistake in the first place, when I rsynced the data into the Data folder structure instead of into the dataset.

What about rsync? Is that an option? It is my preferred way of copying data.
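For example, something along these lines, with the same paths as before:

rsync -aHAX --info=progress2 /mnt/Data/data_yunohost/ /mnt/Data/yunohost/jellyfin/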

Why not leverage block-cloning?

Are your datasets using encryption? If not, then check if the pool supports block-cloning:

zpool get feature@block_cloning Data 
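The output should report the feature as enabled or active; a hypothetical example:

NAME  PROPERTY               VALUE    SOURCE
Data  feature@block_cloning  enabled  local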

If not, you can enable it or upgrade your pool.

Before you do anything, make a checkpoint of your pool to be safe:

zpool checkpoint Data

Check that it exists:

zpool status Data | grep checkpoint
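It should print a line along these lines (date and size will differ):

checkpoint: created Sat Mar  2 10:15:42 2024, consumes 1.1M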

Now upgrade the pool if you’re 100% sure you won’t need to import it into an older system:

zpool upgrade Data

Now you should be able to use the cp -a command above to leverage block-cloning… hopefully.
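One way to verify that the copy was actually cloned rather than fully rewritten is the pool’s block-cloning accounting (assuming OpenZFS 2.2 or newer):

zpool get bcloneused,bclonesaved,bcloneratio Data

If bclonesaved roughly matches the size of the copied data, the blocks were shared instead of duplicated.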

When the copying is complete, and everything looks good, you can discard the checkpoint:

zpool checkpoint -d Data

Phew, that is quite a lot of new information. I don’t know what block-cloning does, and I don’t know what a checkpoint does. I would need to read about them before I run any commands. Sadly, I can’t find anything about them on the official documentation site.

Thank you @winnielinnie for your help. I will read up on it, and if I feel confident, I will give it a try; otherwise I will stick to rsyncing the data from the Data root folder to the dataset and let the disks spin and do their thing.