ZFS Snapshot empty on second host

Good day everyone!

I currently have two servers/pools (POOL1 and POOL2 respectively), where:

POOL1 - is where my data is located

POOL2 - is for redundancy.

There are three datasets on POOL1:

“Pool1” - is my main data.
“Lab” - is my ESXi lab data.
“Apps” - is data for my services/containers.

I have set up a replication task where “Pool1” snapshots are replicated to POOL2, and now I’m trying to do the same for the “Apps” dataset. However, when the task runs, the directory that hosts the “Apps” dataset on POOL2 only contains sub-directories and no files.

I understand that whatever changes are captured in the snapshot is what gets saved, but when I did the initial replication for “Pool1,” all the data was transferred over.

Any help?

Are you saving data directly inside your root dataset that is named Pool1?

What version of TrueNAS is this?

What is the dataset hierarchy / layout?

Did you already create datasets on the destination Pool2, or did you let the replication’s “first run” create them for you?
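In the meantime, a quick sanity check is to list the snapshots on both sides and see whether anything actually landed on the destination. The names below are placeholders; swap in your real pool/dataset names:

zfs list -t snapshot -r POOL1
zfs list -t snapshot -r POOL2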

Yeah when I think about it, maybe I should change the name of the dataset.

Pool1 is a dataset. My root dataset is called Core (/mnt/core/pool1).

I’m using Dragonfish-24.04.2.3.

I created a dataset “Apps” on the destination Pool2 and then ran the replication.

So essentially the layout is:

POOL1:
/mnt/core/pool1 - Main data
/mnt/core/lab - Lab data
/mnt/core/app - App/Container data

POOL2:
/mnt/core/pool1 - “Pool1” snapshot data
/mnt/core/pool2 - empty dataset
/mnt/core/app - “Apps” data from POOL1

(attached image: zfs-naming)

Are you The Joker? What is this anarchy? :rofl:

A pool named “Core” that lives on SCALE, with a dataset named “Pool1”.


I think you might be confusing directories for datasets, or the other way around.

This information can help:

zfs list -r -d 1 -t filesystem -o space Pool1
zfs list -r -d 1 -t filesystem -o space Pool2

For future reference, it’s usually better to let the replication task itself create the destination dataset, rather than creating it on the destination ahead of time.
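For context, that “first run” is effectively a full send that creates the destination dataset as part of the receive. A rough manual equivalent, with made-up names and snapshot labels (the GUI task handles all of this for you):

zfs snapshot -r Pool1/Apps@first-run
zfs send -R Pool1/Apps@first-run | ssh pool2-host zfs recv Pool2/Apps

If Pool2/Apps already exists with its own contents, the receive will complain that the destination exists, which is one reason pre-creating it can backfire.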


Look man I’m not the best at naming stuff :joy:

Anyways, I renamed them to something more fitting and have now separated them like this:

UNIT-1:

/mnt/UNIT1/apps
/mnt/UNIT1/home
/mnt/UNIT1/lab

UNIT-2:
/mnt/UNIT2/apps
/mnt/UNIT2/home
/mnt/UNIT2/lab

I deleted all the tasks and snapshots off both servers and made a task for each dataset, except for one sub-dataset under lab (lab/os-images). Now each dataset is doing the same thing EXCEPT the aforementioned one, which wasn’t included in the previous snapshots. I’m guessing, based on this, that the data from UNIT-1 is already present on UNIT-2 and it doesn’t need to transfer anything else unless it’s missing? Do I have that right?

That’s why the above commands can help clue us in.
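And yes, that’s roughly how it works: once the source and destination share a common snapshot, later runs only send the changes between snapshots. As a sketch with made-up snapshot names (the GUI task does something like this under the hood):

# first run: a full send of the initial snapshot
zfs send UNIT-1/home@snap1 | ssh unit2 zfs recv UNIT-2/home
# later runs: only the changes between the two snapshots
zfs send -i @snap1 UNIT-1/home@snap2 | ssh unit2 zfs recv UNIT-2/home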

My bad, here’s the output

root@unit1[~]# zfs list -r -d 1 -t filesystem -o space UNIT-1
NAME            AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
UNIT-1          2.75T   776G        0B    104K             0B       776G
UNIT-1/.system  2.75T   136M        0B    112K             0B       136M
UNIT-1/apps     2.75T  1.82G        0B    104K             0B      1.82G
UNIT-1/home     2.75T   603G        0B    120K             0B       603G
UNIT-1/lab      2.75T   171G        0B    104K             0B       171G
root@unit2[~]# zfs list -r -d 1 -t filesystem -o space UNIT-2
NAME         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
UNIT-2       3.49T  25.4G        0B    139K             0B      25.4G
UNIT-2/home  3.49T   139K        0B    139K             0B         0B

Can you change the “depth level” for UNIT-2? It appears you have an additional nested layer, as compared to UNIT-1.

zfs list -r -d 2 -t filesystem -o space UNIT-2

If -d 2 still doesn’t show the relevant children, try -d 3.

root@unit2[~]# zfs list -r -d 2 -t filesystem -o space UNIT-2
NAME         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
UNIT-2       3.49T  25.4G        0B    139K             0B      25.4G
UNIT-2/home  3.49T   139K        0B    139K             0B         0B
root@unit2[~]# zfs list -r -d 3 -t filesystem -o space UNIT-2
NAME         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
UNIT-2       3.49T  25.4G        0B    139K             0B      25.4G
UNIT-2/home  3.49T   139K        0B    139K             0B         0B

Are you sure that’s the full output? Is any text being cut off?

Yep that’s the full output…

The only thing that’s on UNIT-2 is that home dataset (I deleted the others just to simplify troubleshooting) and a zvol hosting a running virtual machine…

How is there 25.4 GiB being used by children, and yet no child datasets exist?

The 25 GiB is coming from the zvol.

Wish I could provide a photo or link but it’s not letting me.

But basically it’s:

UNIT-2/ (25.42 GiB / 3.49 TiB)

UNIT-2/home/ (138.53 KiB / 3.49 TiB)
UNIT-2/Pihole-wupnpu/ (25.39 GiB / 3.52 TiB)

That’s what shows on the datasets page.

So this is a “before replication” overview of UNIT-2?

I guess it’s hard to say one way or another until you try to replicate.
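One thing worth noting: the listings above used -t filesystem, which filters out zvols, so the Pihole zvol won’t show up there. Adding volumes to the type list should account for that 25 GiB (just a side check, it doesn’t affect the replication question):

zfs list -r -d 2 -t filesystem,volume -o space UNIT-2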

Just replicated. Still getting the same results… hm

root@unit2[~]# zfs list -r -d 2 -t filesystem -o space UNIT-2
NAME         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
UNIT-2       3.49T  25.4G        0B    139K             0B      25.4G
UNIT-2/home  3.49T   139K        0B    139K             0B         0B

All the replication task does on UNIT-1 is go to a “Running” state for about 5 seconds, and then it goes to a “Finished” state.

Maybe there’s a log or something I can check to see what’s going on in the background…

Then it sounds like you’re not doing a recursive replication.
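A non-recursive task only snapshots and sends the selected dataset itself; if the actual data lives in child datasets, there is almost nothing to transfer, which would explain the five-second “Running” state. Roughly the difference, as a sketch with a made-up snapshot name:

# non-recursive: only the top-level dataset is sent, children are skipped
zfs send UNIT-1/lab@auto-snap | ssh unit2 zfs recv UNIT-2/lab
# recursive: -R carries every child dataset (and their snapshots) along
zfs send -R UNIT-1/lab@auto-snap | ssh unit2 zfs recv UNIT-2/lab

(-R expects the snapshot to exist on the children too, i.e. created with a recursive snapshot task.)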


Bingo.

I think that may have been the issue. After deleting all the snapshots again, creating a new task, and running it: bam, it’s copying over the files!

So recursive includes ALL the files beneath the dataset but not under child datasets?

“Recursive” means to include all datasets underneath the selected source dataset, no matter how deep the “nest” goes.
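For example (with a hypothetical snapshot name), a recursive snapshot of lab also covers lab/os-images and anything nested deeper, so a recursive replication of lab carries all of them along:

zfs snapshot -r UNIT-1/lab@example
zfs list -t snapshot -r UNIT-1/lab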
