Empty replication from TrueNAS Core to TrueNAS Scale?

Hello,

I am trying to move a pool hosted on TrueNAS Core to a new system running TrueNAS Scale. Both instances are virtualized as Proxmox VMs on two different nodes.
I am running a pull replication: the new system pulls the pool from the old one. Everything seems to run fine (even though the replication job sits at 0% while still sending data). The problem is that I mounted an NFS share of the new pool on Proxmox and noticed that the share is actually empty, showing 0% used space, whereas TrueNAS reports a total allocation of 2.1 TB and counting. I created a manual snapshot on the old system beforehand to make sure it would not expire during the replication.
I could let the task run to the end, but I am afraid something went wrong in the process and it is actually not copying anything.
TrueNAS Scale version is ElectricEel-24.10.0.2.
TrueNAS Core version is TrueNAS-13.0-U6.1.
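
In case it is useful, these are the checks I have been running from the destination (Scale) shell to see whether data is actually landing. The pool and dataset names are just mine, adjust as needed:

# the ALLOC column should keep growing while the receive runs
zpool list tank

# per-dataset usage on the destination
zfs list -r -o name,used,avail,refer,mountpoint tank

# a non-empty token indicates a partially received, resumable stream
zfs get -r receive_resume_token tank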


In this screenshot, it seems that no data is being actively written to the new pool.


Here are my replication settings.


This is what I have on Proxmox.

Let me know if you need more information, thank you.

I have a feeling this has to do with read/write permissions and/or mounting issues, as I have frequently seen in similar threads. I lack the knowledge to solve it on my own, so I am looking for pointers for my specific case.

I spoke too soon. It turns out the free space reported on Proxmox does decrease as the replication progresses, so data is being written after all.


A few additional questions: Is this expected behavior? Is this what I want for what I am trying to achieve? Are there better ways to move a dataset to another server? Would physically moving the disks to the new system and replicating locally, rather than transferring over the network, help in any way?

I cancelled the task and tried to set up permissions correctly, but the same outcome occurred. I am starting to think that the received data is simply not supposed to be visible until the replication is complete. Please tell me if this is incorrect.
I am now considering exporting the pool and physically moving it to the new system, but I would like to avoid downtime, hence my preference for replication.
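
From what I have read since, an in-progress zfs receive writes into a hidden child dataset, and the target dataset stays unmounted until the stream completes, which would explain the empty NFS share. A couple of commands I used to convince myself of this (tank/mydata is a placeholder for my actual dataset):

# "mounted" should stay "no" on the target until the receive finishes
zfs get mounted tank/mydata

# a non-empty token here also means a partially received stream exists
zfs get receive_resume_token tank/mydata

As for the export route, my understanding is that it boils down to the following, although on TrueNAS this should be done through the GUI (Export/Disconnect on the old box, Import Pool on the new one) rather than with raw commands:

# on the old system, after stopping shares and services using the pool
zpool export oldpool
# on the new system, after physically moving the disks over
zpool import oldpool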

Thank you.

Hard to say. You didn’t really describe your setup or network, nor how much data you are replicating.

Just making sure you have at least seen an article on virtualizing TrueNAS, so you don’t lose your data.
https://www.truenas.com/blog/yes-you-can-virtualize-freenas/

The source system’s pool is a stripe of 2x16 TB drives and the destination pool is 4x22 TB in RAIDZ1. The data to replicate is approximately 30 TB. Both systems are connected to the same network, but they are not on the same device.
Thank you for the article, I will give it a read.
Losing my data would be quite the hassle, since I would have to rebuild it, but honestly not that big of a deal.
I am simply looking for the most efficient way to send all my data to the new system with minimal to no downtime.
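
My understanding is that the replication task is essentially doing the usual snapshot send/receive under the hood, and that a low-downtime move would look roughly like this (hostnames and dataset names are placeholders; the GUI task handles the transport for you):

# 1) take a snapshot and send everything while the old system stays in service
zfs snapshot -r oldpool/data@migrate1
zfs send -R oldpool/data@migrate1 | ssh new-host zfs receive -s -F newpool/data

# 2) once done, briefly stop writes, snapshot again and send only the delta
zfs snapshot -r oldpool/data@migrate2
zfs send -R -I oldpool/data@migrate1 oldpool/data@migrate2 | ssh new-host zfs receive -s -F newpool/data

# 3) repoint clients at the new system; downtime is only the short step 2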

I would just set it up and let it run at this point. You can keep an eye on both GUI interfaces, Core and Scale, and watch the dashboards. I think it will just take a while to transfer all that data. I’m not sure about seeing the data; it may be hidden until the replication completes, like you said.
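
As a very rough estimate, assuming a single 1 GbE link between the two nodes (you didn’t say, so adjust for your actual network):

30 TB ≈ 30,000,000 MB
1 GbE sustained ≈ 110 MB/s
30,000,000 MB / 110 MB/s ≈ 273,000 s ≈ 76 hours, so roughly 3 days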

You are right, I’ll wait for the replication to end and see what happens then.

Another issue I am encountering during replication is the “[EFAULT] Failed retreiving GROUP quotas for tank” error I get whenever I open the Datasets section in TrueNAS. This does not happen when no replication is running. Here are the logs:

concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 76, in get_quota
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 78, in get_quota
    quotas = resource.userspace(quota_props)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "libzfs.pyx", line 3800, in libzfs.ZFSResource.userspace
libzfs.ZFSException: cannot get used/quota for tank: dataset is busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 80, in get_quota
    raise CallError(f'Failed retreiving {quota_type} quotas for {ds}')
middlewared.service_exception.CallError: [EFAULT] Failed retreiving GROUP quotas for tank
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 208, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1526, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1457, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/dataset_quota.py", line 48, in get_quota
    quota_list = await self.middleware.call(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1626, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1465, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1471, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1377, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1361, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] Failed retreiving GROUP quotas for tank
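
From the bottom of the first trace, the underlying failure is libzfs reporting “cannot get used/quota for tank: dataset is busy”, so the GUI’s group-quota lookup seems to collide with the running receive. If I am reading it right, the equivalent CLI query (tank being my pool) should fail the same way while the replication is running:

# same lookup the middleware performs for the Datasets page
zfs groupspace tank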