ZFS rename worked, but the files are no longer visible

Edit: added some information regarding the copy of data to datasetA and the previous usage of datasetB

Hi, I am new to ZFS and TrueNAS and wonder where my data went.
I had a dataset that I had to rename.
The problem is that after the rename the used space remains, but the data is not visible. Does anyone know how to get my data back?

Some information on my setup.
I am using TrueNAS Scale 24.10.2
All datasets are created as Multiprotocol, with only NFS selected.
datasetA had data
datasetB should be the new name. Note that I had already created and used the name datasetB as an NFS share. I unmounted and deleted all shares of datasetB before deleting the dataset itself through the TrueNAS UI.

A short remark that might help: I don’t know why, but in the TrueNAS “Datasets” area the used space of datasetA was not showing correctly, only a few KB, even though there are several TB on it.
If I recall correctly, I imported a zpool from a USB disk that I had created with TrueNAS SCALE, and then copied the data to the new zpool either via rsync or via a TrueNAS replication task.

Now, what I did with zfs rename.
For testing purposes, I created a new datasetC, copied some data from datasetA onto it, and renamed it to datasetD. This worked smoothly and the copied data was still visible.
Note that the used space of datasetC and datasetD was shown correctly.

Afterwards I went ahead with renaming the existing datasetA.
My command was:
zfs rename data/datasetA data/datasetB
This went smoothly without any errors and datasetB was created. It is also visible in the Datasets area.
The same issue appeared for datasetB: under “Used” in the Datasets area it shows only a few KB.

When I open datasetB via SSH, it is empty. Nevertheless, the storage is still used, as shown on the storage dashboard.
So my hope is that I can somehow “rebuild” the data, as it seems to be just not visible. Does anyone have a clue?

Thanks in advance!

Since zfs rename bypasses the middleware, you probably need to either reboot or export/re-import your pool.

Without more information about the datasets and folders, it’s hard to say exactly why you cannot see your files.
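For reference, a rough CLI sketch of an export/re-import (using “Data” only as a placeholder pool name; on TrueNAS you would normally do this through the UI so the middleware stays in sync, so treat this purely as an illustration):

zpool export Data            # unmounts all datasets and exports the pool
zpool import -R /mnt Data    # re-imports it with the /mnt altroot that TrueNAS uses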

I tried a reboot, without any success; the issue remains.
When I try to export the pool, I get the following error message.

concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 52, in export
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 55, in export
    zfs.export_pool(pool)
  File "libzfs.pyx", line 1443, in libzfs.ZFS.export_pool
libzfs.ZFSException: cannot unmount '/mnt/Data/yunohost/jellyfin': pool or dataset is busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 57, in export
    raise CallError(str(e))
middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Data/yunohost/jellyfin': pool or dataset is busy
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/export.py", line 174, in export
    await self.middleware.call('zfs.pool.export', pool['name'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Data/yunohost/jellyfin': pool or dataset is busy

This is the renamed datasetB. It seems there is still something working in the background?

Be very careful when renaming datasets.

If anything is using the path, including “Apps”, it could cause unpredictable behavior.

When you rename a dataset, you are implicitly changing its filesystem path and mountpoint.

You should stop all services and Apps before renaming a dataset.
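As a rough illustration (hypothetical dataset names, not the exact paths in this thread), a safer sequence looks something like:

# stop any shares, apps, or transfers using the path first
zfs rename Data/oldname Data/newname
zfs get mountpoint,mounted Data/newname   # confirm the new mountpoint and that it is mounted
zfs mount Data/newname                    # remount manually if it did not come back mounted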

I have not used Apps on TrueNAS; I use it mainly as a simple NAS with NFS shares.
I have added some information to my previous post.
I don’t know how TrueNAS behaves, but I had already created datasetB and used it as an active NFS share.
I unmounted and deleted the NFS share and also deleted the dataset through the TrueNAS UI. So from my point of view it looked correct and all connections should have been removed.

Is there anything I can do to get the data back? Currently I do not see what is possible. The only solution I see is to delete the “empty” dataset, hope that the used space is also freed, and then copy the data back from my backup. If possible, I would avoid that, as it will take quite some time, and if there is a way to fix it, I would like to try it.
I don’t know the TrueNAS/ZFS mount/unmount logic, so I don’t want to experiment and risk doing more damage to the current state.

Does anyone have a clue how to proceed? It seems I have messed up the ZFS mounts of my pool. Is there a way to unmount the dataset and then export/import the pool again?

Is this just a dataset that is accessed via Jellyfin from another server?

middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Data/yunohost/jellyfin': pool or dataset is busy

You’ll need to compare your folders, pool, datasets, and snapshots.

This is a good place to start:

zfs list -t filesystem -r -o space Data

zpool list Data

ls -l /mnt/Data

du -hs /mnt/Data/*

The du command might require “sudo” to get an accurate number.

Yes, I use the dataset to share it via NFS. I had it unmounted and even deleted the mount point /mnt/Data/yunohost/jellyfin before running zfs rename on it.

What does that mean? What did you actually do?


What did you actually rename?

In “Shares” I unshared the path /mnt/Data/yunohost/jellyfin.
As stated, I had created /mnt/Data/yunohost/jellyfin and tested it.
After unsharing /mnt/Data/yunohost/jellyfin, I deleted the dataset /mnt/Data/yunohost/jellyfin from “Datasets”.
Then I renamed /mnt/Data/data_yunohost/yunohost.multimedia to /mnt/Data/yunohost/jellyfin:

zfs rename Data/data_yunohost/yunohost.multimedia Data/yunohost/jellyfin

I looked up the renaming at https://docs.oracle.com/cd/E19253-01/819-5461/gamnn/index.html

I also made a test and it worked on TrueNAS.

I hope that makes it clearer.

zfs list -t filesystem -r -o space Data
NAME                                    AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
Data                                    20.2T  16.0T     21.2G   15.1T             0B       905G
Data/TrueNAS_ISOs                       20.2T   620M        0B    620M             0B         0B
Data/proxmox_backup                     20.2T   841G        0B    841G             0B         0B
Data/yunohost                           20.2T  6.65G        0B    170K             0B      6.65G
Data/yunohost-test                      20.2T  56.8G        0B    149K             0B      56.8G
Data/yunohost-test/audiobookshelf       20.2T   512M        0B    512M             0B         0B
Data/yunohost-test/jellyfin             20.2T  56.3G        0B   56.3G             0B         0B
Data/yunohost-test/nextcloud            20.2T   128K        0B    128K             0B         0B
Data/yunohost-test/yunohost.multimedia  20.2T   128K        0B    128K             0B         0B
Data/yunohost/audiobookshelf            20.2T  6.65G        0B   6.65G             0B         0B
Data/yunohost/jellyfin                  20.2T   128K        0B    128K             0B         0B
Data/yunohost/nextcloud                 20.2T   128K        0B    128K             0B         0B
Data/yunohost/yunohost.backup           20.2T   128K        0B    128K             0B         0B
zpool list Data
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Data  54.6T  24.1T  30.5T        -         -     0%    44%  1.00x    ONLINE  /mnt
ls -l /mnt/Data
total 48
drwxr-xr-x 2 root     root         3 Feb 12 18:27 TrueNAS_ISOs
drwx--S--- 2 root     users        2 Sep 28  2023 antivirus_quarantine
-rw------- 1 root     users     8192 Jan  1 18:07 aquota.group
-rw------- 1 root     users    15360 Jan  1 18:07 aquota.user
drwxrws--- 5 root     users        5 Nov  3  2022 data
drwxrws--- 5 www-data www-data     9 Nov  5  2022 data_nextcloud_test
drwxrws--- 6 root     users        6 Feb 10  2024 data_yunohost
drwxrwx--- 6 proxmox  proxmox      6 Feb 12 14:51 proxmox_backup
drwxrwx--- 6 yunohost yunohost     6 Feb 23 22:52 yunohost
drwxrwx--- 6 yunohost yunohost     6 Feb 20 22:25 yunohost-test

du -hs /mnt/Data/* 
620M    /mnt/Data/TrueNAS_ISOs
512     /mnt/Data/antivirus_quarantine
6.0K    /mnt/Data/aquota.group
6.0K    /mnt/Data/aquota.user
1.5T    /mnt/Data/data
25M     /mnt/Data/data_nextcloud_test
14T     /mnt/Data/data_yunohost
842G    /mnt/Data/proxmox_backup
6.7G    /mnt/Data/yunohost
57G     /mnt/Data/yunohost-test

It appears that your files are under /mnt/Data/yunohost-test/jellyfin

That one is just for testing, but thanks to your commands I found the data. It seems the rename just created a new dataset, while the data remained in the folder structure without being moved to the new dataset.

14T     /mnt/Data/data_yunohost

:roll_eyes:

Not sure why. You just have to be careful with zfs rename, especially if the path is being shared or used by apps or by ongoing file transfers.

EDIT: Hold on.

/mnt/Data/data_yunohost is strictly a folder, not a dataset. This means that whatever was in the path /mnt/Data/data_yunohost actually lived in the root dataset Data, not in any child dataset.

This is probably why your command created a new empty dataset.
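One way to check this yourself (a sketch based on the pool name in this thread; a plain folder never shows up in zfs list, only in ls):

zfs list -r -o name,mountpoint Data | grep data_yunohost   # no match means it is not a dataset
ls -ld /mnt/Data/data_yunohost                             # yet the directory exists on disk, so its contents live in the root dataset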

@truewas, I edited my post.

Make sure you understand where your data is actually being saved. Understand the difference between folders and datasets.


I am new to TrueNAS and it seems I didn’t fully understand what datasets are. Is there a way to fix it?
Strangely, I am now getting some issues, like the following.

When trying to disconnect the zpool Data:

concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 52, in export
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 55, in export
    zfs.export_pool(pool)
  File "libzfs.pyx", line 1443, in libzfs.ZFS.export_pool
libzfs.ZFSException: cannot unmount '/mnt/Data': pool or dataset is busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 57, in export
    raise CallError(str(e))
middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Data': pool or dataset is busy
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/export.py", line 174, in export
    await self.middleware.call('zfs.pool.export', pool['name'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Data': pool or dataset is busy

Or if I just click on certain datasets, some work normally and others do not.
For /mnt/Data/yunohost/jellyfin I get:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 211, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1529, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1471, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 55, in nf
    res = f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/filesystem.py", line 445, in stat
    raise CallError(f'Path {_path} not found', errno.ENOENT)
middlewared.service_exception.CallError: [ENOENT] Path /mnt/Data/yunohost/jellyfin not found

It seems like all children below /mnt/Data/yunohost/ can’t be found anymore.
/mnt/Data/yunohost-test/ has the same problem, even though I haven’t touched the system since yesterday.

At least you didn’t use any Auxiliary Parameters or save your syslogs on your System Dataset. That could result in all kinds of wackiness! :smiley:

Don’t mind me, as I write up a feature request to permanently remove access to the command-line in the next version of SCALE without the possibility of reenabling it… so we can prevent users from borking their systems. :smiley:

:smiley:


At this point, I really don’t know what happened. It’s possible that your attempt to export the pool “halfway” unmounted the datasets, but failed before it could do a clean export, for whatever reason. This is why the middleware is confused, since it’s expecting mountpoints that do not exist.

You might have to reboot your system.
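If you want to check first, a rough sketch with standard ZFS commands (not TrueNAS-specific guidance) would be:

zfs get -r -o name,value mounted Data   # shows which datasets are actually mounted
zfs mount -a                            # attempts to remount anything that should be mounted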

OK, rebooted and it works again :sweat_smile:
Is there anything to check before trying to export the pool? Or should I just give it a try again? :upside_down_face: