I want to mark an archival pool as readonly=on, to prevent accidental writes.
I can't find an option in the SCALE UI, and of course readonly can only be set at import time (which the TN middleware controls) - there's no enduring readonly flag I can set on the pool.
I could import it manually from the shell, but then I'd have to do that on every boot, and TN wouldn't know about it for sharing purposes, so that's a non-starter.
So if TN imports it, how can I ensure it's kept readonly?
Yes, set the top level dataset as Read Only from the command line:
sudo zfs set readonly=on POOL
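A hedged sketch of setting and checking the property; the pool name `tank` is just a placeholder for your own pool:

```shell
# Pool name "tank" is a placeholder - substitute your own pool's name.
sudo zfs set readonly=on tank      # set on the top-level dataset
zfs get -r readonly tank           # children should report "inherited from tank"
sudo zfs inherit readonly tank     # to undo later: revert to the default (off)
```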
There may be issues with sharing protocols. I don’t know if they will work with R/O source file systems. (Sharing protocols may want to update access times and other such things, which can’t happen when using R/O.)
For those that don’t know, every pool has a top level dataset with the same name as the Pool. (When starting to use ZFS back in 2007, I did not know that, and messed something up.)
I cannot emphasize enough how much I hate that, and I will never accept any rationale from the ZFS devs for why it has to be like that.
There is no reason a ZFS pool cannot exist without a root dataset blurring the boundary between the two. The design reduces flexibility, limits what you can do, and has been a source of confusion and frustration for new users. Because of this pointless design, I use what I call “pseudo-root” datasets and dismiss the root dataset as if it doesn’t exist.
In my ideal world, everything would look the same as you see in the post I linked to, with the only difference being the dataset “tank” doesn’t exist. You would have 3 root datasets created on the “tank” pool with no arbitrary “tank” dataset in between.
Setting the datasets individually to RO would work for today’s issue, so it’s kind of a solution.
I’m disappointed there isn’t a quick way to set the entire pool as RO, either in zpool settings or with a single “readonly” import flag. The alternative is to recursively modify every dataset to set or clear an RO flag, and that doesn’t prevent new datasets or changes at the pool level.
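For what it’s worth, plain ZFS at the command line does have a per-import flag, even though it isn’t surfaced in the UI - a hedged sketch, with `tank` as a placeholder pool name and `tank/archive` a hypothetical dataset:

```shell
# Import the pool read-only (applies to this import only, not persistent):
sudo zpool import -o readonly=on tank

# The per-dataset alternative described above:
sudo zfs set readonly=on tank/archive     # flag datasets one by one
sudo zfs inherit -r readonly tank         # clear every local readonly setting
```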
Functionally that’s not today’s problem, but it feels like an omission…
(I’d gladly knock up a PR for it, if it didn’t mean six months of work learning how to program in Python, how the middleware works, config database schemas and SQL, how the web UI is built, etc.! Some day…)
There is a reason I said “Top Level Dataset”. Many changes to parent datasets will be inherited by child datasets. I just tested setting a parent dataset as “readonly=on” and it was inherited by the child. This one does survive a reboot or pool export / import.
Thus, in theory, setting the top level dataset to “readonly=on” should cause the entire collection of datasets in the pool to be Read Only.
However, you won’t be able to mount any newly created dataset inside the other R/O datasets, as ZFS can’t create the mountpoint there. You can still create new datasets, which will inherit R/O by default. So you would have to choose a mountpoint outside the other R/O datasets (or temporarily make them R/W), then also temporarily make the new dataset R/W to populate it.
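One way to follow those steps, as a sketch - the pool `tank`, the dataset name `newdata`, and the `/staging` mountpoint are all hypothetical:

```shell
# Create the new dataset writable, with a mountpoint outside the R/O tree:
sudo zfs create -o mountpoint=/staging -o readonly=off tank/newdata
# ... populate /staging ...
sudo zfs inherit readonly tank/newdata    # fall back to inheriting readonly=on
```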
Yes, this R/O attribute does have to be set from the command line. The TrueNAS GUI just can’t expose every option; that would take years of programming, plus a massive amount of testing, especially after major updates.
@winnielinnie - What you are asking for is silly. I run all my home computers, except TrueNAS, without the top-level dataset mounted. It exists, but it is simply a place to create child datasets. For example, "rpool/root" is the place for my ABEs (alternate boot environments), like "rpool/root/20260215". I don't need "rpool" or "rpool/root" mounted - they are just places to organize my data. Then "rpool/home" is for the obvious.
While it might be nice for TrueNAS to use such a scheme as you propose, there is also the advantage of hierarchical layers for management of the data.
That’s why it’s a TrueNAS feature request, which happened to be accepted and put on their roadmap.
If TrueNAS, through its UI, can integrate (as a first-class / tested feature) importing a read-only pool, it allows for easier recovery operations for those who are unable to import their pools. No command-line needed. “Pool doesn’t import? Try it again with the read-only box checked and see if you can recover and copy your data somewhere else.”
It also works in tandem with the checkpoint feature, which I also made a request for. You could rewind your pool non-destructively if you check the read-only box during the import wizard.
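For context, the underlying ZFS commands already support that combination - a hedged sketch with `tank` as a placeholder pool name; my understanding is that importing read-only lets you examine the checkpointed state without committing the rewind:

```shell
sudo zpool checkpoint tank       # take a checkpoint before risky changes
sudo zpool export tank
# Inspect the checkpointed state; read-only should keep the rewind non-destructive:
sudo zpool import -o readonly=on --rewind-to-checkpoint tank
```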
To use what you said earlier, they would need to automate the following, under the hood, if the option is enabled:
- create an alternative path of empty directories to match the dataset hierarchy of the read-only imported pool
- use this alternative tree of empty directories to mount the datasets as read-only
- possibly pop up a message for the user/admin that reads "To recover files over SMB or NFS, create shares that point to folders under /recovery/<poolname>"
This would happen automatically during the pool import wizard process. Rather than a standard “Import successful” message, it would pop up a message with information about the read-only pool, datasets, and the alternative /recovery paths.
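Under the hood, steps like those could presumably boil down to an altroot import - a sketch, with the pool name `tank` and the `/recovery` path both hypothetical:

```shell
# -R sets an altroot: every dataset mounts under /recovery/<its mountpoint>,
# so nothing needs to be created inside the read-only datasets themselves.
sudo zpool import -o readonly=on -R /recovery tank
zfs list -o name,mountpoint -r tank        # confirm where everything landed
```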