I have a simple replication task doing a full filesystem (recursive) replication of my SSD pool to a remote destination. The pool contains my ix-applications folder.
Everything seems to work fine, but when it finishes I get a warning icon and the following error message: cannot mount '/mnt/Wright/Backup/MirroredSSDs/ix-applications/k3s/kubelet': failed to create mountpoint: Read-only file system
The file path in the error is the replication destination. As is usual for a replication config, I have SET the destination filesystem to read-only. Isn’t this standard? So why am I getting the above error? Why is the system trying to mount the destination path at all, and why k3s/kubelet specifically? Is this just a red herring?
I’m content to ignore the warning, but would prefer everything to look green…
Right… but the task can be configured to require, set, or ignore the readonly flag on the destination dataset. My task is set to “SET”, which I believe is normal for a replicated backup, no? Which leads to my other questions.
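For reference, here’s roughly what’s going on as far as I can tell (a sketch; the dataset names are from my setup and the output is paraphrased, not pasted):

```shell
# Sketch: inspect the replicated tree on the destination.
# Per the "SET" readonly option, the received datasets are read-only:
zfs get -r readonly,mountpoint Wright/Backup/MirroredSSDs

# With readonly=on on the parent, ZFS cannot create the directory it
# needs to use as a mountpoint for a child dataset such as
# .../ix-applications/k3s/kubelet -- which is exactly the
# "failed to create mountpoint: Read-only file system" warning.
```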
Exactly, I find that strange. Since the apps pool is already set to the correct one (MirroredSSDs), I would expect it not to try to mount any other.
I was on Cobia at the time of posting and have since upgraded to Dragonfish to see if the error would go away, but no dice.
I can’t be the only one replicating their apps pool and encountering this issue, can I??
Probably an overlooked bug. Perhaps they didn’t expect the “ix-applications” dataset (and its children) to be replicated locally to the same server. Hence, you unearthed this unexpected quirk.
K3s is being dropped for Docker, anyways. So there might be no incentive to fix this bug, especially since it’s more of a nuisance than anything.
Actually I do have an app pool on the remote machine as well… but is it the local machine trying to mount the remote machine’s dataset, or the remote machine trying to mount it?
I guess I’m content to ignore it until Electric Eel comes out…
I just ran into this doing ZFS replication from my TrueNAS to my Linux workstation. If you check “Include Dataset Properties”, the replication includes the mountpoint of each dataset you’re replicating. Uncheck this option and it won’t replicate those mountpoints, which are what cause your remote ZFS system to try to mount them.
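Roughly the CLI equivalent, as I understand it (a sketch; I’m assuming the GUI checkbox maps to the properties flag on zfs send, which I haven’t verified against the middleware, and `tank/apps` / `backuppool/apps` are made-up names):

```shell
# With properties included, the source's mountpoint travels in the stream,
# so the receiving system tries to (re)mount the source paths:
zfs send -p tank/apps@snap | ssh backup zfs recv -F backuppool/apps

# Without -p, the received datasets simply inherit the destination's
# mountpoint, and nothing on the backup side tries to mount the
# source's paths:
zfs send tank/apps@snap | ssh backup zfs recv -F backuppool/apps
```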
On second look, there’s an ambiguity with this statement:
Because I thought he meant “uncheck this box”, as in the “property” being a toggle for the Replication options:
It’s one thing to specifically exclude a particular ZFS property (e.g., “mountpoint”). But I was advising against unchecking the option (which I thought he was referring to with the word “property”) that reads “Include Dataset Properties”.
Regardless, I think TrueNAS’s middleware is trying to automatically mount a dataset that has the string “k3s” in its name. Because as far as I know, the OP did not mention any warnings about the other datasets failing to automatically mount. Only the “Apps” dataset.
@bketelsen that’s an interesting observation/idea, thank you.
I guess @winnielinnie is right though, generally we’d want to replicate all properties in order to be able to restore the most accurate backup if/when necessary…
I don’t suppose TrueNAS is going to be able to distinguish which dataset/pool is the original and which is the backup. We know, because we set up the task, but otherwise the whole point is a perfect clone, right?
I guess it’s strange that the middleware will want to mount any pool that might have ix-applications/k3s rather than only the one specified as the Apps pool.
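If anyone wants a middle ground that keeps all the other properties but still silences the mount attempts, something like this might work (a sketch; `-u`, `-o`, and `-x` on receive are standard OpenZFS flags, but I haven’t tested this against the TrueNAS middleware, and the pool/dataset names here are made up):

```shell
# Keep every property from the source, but exclude/override the ones
# that cause mounting on the backup box: -u skips mounting on receive,
# -x drops the source's mountpoint, and canmount=noauto stops the
# backup tree from being auto-mounted later.
zfs send -R tank/apps@snap | \
    ssh backup zfs recv -u -F -x mountpoint -o canmount=noauto backuppool/apps

# Or, on an existing backup tree, stop future auto-mount attempts:
zfs set canmount=noauto backuppool/apps
```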
I also have a similar replication task to a local external hard drive and I get the same error.
It really seems like the short-term solution is to ignore the orange warning sign.
I appreciate all the insight thus far on this issue.