Created replication job of NFS shared extent. How do I mount the far side?

Hello-

Running TrueNAS 25.04.0 on 2 filers. I have an NFS share set up on filer 1 with an hourly replication job sending it via snapshot to filer 2. This is set up and working, and I can see the dataset on the far side. The problem is that when I click on “Datasets” in the GUI and then click on the replicated dataset, I get an error “[ENOENT] Path /mnt/Pool1/truenas01-nfs01-replica not found”. If I log in via shell, go to /mnt/Pool1, and do “ls -lta”, I don’t see the folder for this extent. So my question is: if I had a failure of filer 1 and needed to bring up the extent on filer 2, how would I go about this? The sizing of the extent on filer 2 matches that of filer 1, so I know the data is there somewhere. Thanks!

OK, I believe I solved this myself and am just posting the fix for anyone who stumbles upon this. The issue was that the underlying “Pool1” was set to readonly. I ran the following commands, and after doing so (even after rebooting) my dataset is mounted on the replica filer:

root@truenas02[/mnt/Pool1]# sudo zfs set readonly=off Pool1
root@truenas02[/mnt/Pool1]# sudo zfs mount -a
root@truenas02[/mnt/Pool1]# sudo zfs set readonly=on Pool1
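
To sanity-check the result, something like the following should show the readonly and mounted state (a rough sketch; the dataset name is taken from the error message above, so adjust it to your own layout):

sudo zfs get readonly,mounted Pool1 Pool1/truenas01-nfs01-replica
sudo zfs list -o name,mountpoint,mounted -r Pool1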


Replication targets are automatically read-only; removing this property breaks replication.

If restoring from backup, replicate (one-off) in the other direction, and then make the new primary writable.
You’ve found how to convert the backup directly into the new primary.
In either case, you can remove “readonly” from the GUI.
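
For reference, a command-line sketch of the two approaches (dataset names and the truenas01 host are placeholders based on the paths in this thread; the GUI replication wizard is the supported way to do the one-off job):

# Option 1: one-off replication back towards the rebuilt primary
zfs snapshot Pool1/truenas01-nfs01-replica@restore
zfs send -R Pool1/truenas01-nfs01-replica@restore | ssh truenas01 zfs recv -F Pool1/truenas01-nfs01
# Option 2: promote the backup copy itself into the new primary
zfs set readonly=off Pool1/truenas01-nfs01-replica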

I think the issue here was most likely that the replication was still running when you got the error, be it the initial stream or a subsequent one.

Even with readonly on, you should still be able to access the share.

Well, my problem was that the underlying pool (group of vdevs) was in readonly mode, not just the replicated zvols and datasets. Those are certainly in RO mode, but the pool holding them was ALSO in RO mode. Is that expected? It’s entirely possible that I didn’t create the pool correctly on the DR filer and this setting was not correct to begin with. Should my pool be in RO or RW when it’s a target for replicated zvols and datasets?

Were you by any chance replicating to the root dataset of your backup pool?

Pools are almost never in RO mode unless imported that way, so let’s be clear that we are talking about datasets, be it the root dataset or sub-datasets.
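
To see which of the two you are actually looking at, you can compare the pool-level and dataset-level properties (a quick sketch using the Pool1 name from this thread):

zpool get readonly Pool1     # pool-level import property, normally "off"
zfs get -r readonly Pool1    # per-dataset property, "on" for replication targets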

It’s perfectly natural for replicated datasets to be RO, as this is the default, expected behaviour.

It’s always a good idea never to store data in your root dataset and instead create at least one, if not more, (what I call “stub”) datasets below the root to act as a control point for other datasets to hang off, including replicated ones.

For example, I always create a ZFS dataset on all my pools off the root, and then an SMB one to hold my SMB datasets and an NFS one to hold my NFS datasets. I find this works well to compartmentalise. I can then use my ZFS dataset to create my snapshot schedules and replication to cover all sub-datasets. I then mirror this setup on my backup systems.
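
As a sketch of that kind of layout (names here are just illustrative, following the description above):

# stub datasets acting as control points, created once per pool
zfs create Pool1/SMB
zfs create Pool1/NFS
# a replication job would then target e.g. Pool1/NFS/truenas01-nfs01-replica
# rather than landing directly under the root dataset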


Yeah, my replication job targets my root dataset and subsequently creates its target upon first replication (allow replication from scratch), so that is likely the issue. I have no problem with my root dataset being RW, as eventually I may want to create shares on the DR side that get replicated back to prod, so I would need RW on the root dataset anyway to pull that off. This makes more sense now that I understand what might have happened.

Just tread very carefully with this, as if you create a sub-dataset within your replicated branch it will be wiped out at the next replication. Again, this is where stub datasets can be helpful to create different branches in your pool.
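
If you are unsure whether anything you created by hand sits inside the replicated branch, a recursive listing will show the full hierarchy (Pool1 as used earlier in the thread):

zfs list -r -o name,used,mountpoint Pool1
# anything created by hand under the replication target is at risk of being
# rolled back or destroyed on the next incremental receive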
