It is up to you to let us know if you remember a substantial difference. Take, for example, the TankPrincipal/Photos dataset. Does it seem right for it to consume 179 GiB? Does that look about right compared to before the rm command?
You can also get more nuanced information with the -o space parameter:
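For instance, something along these lines (substituting your pool name; the recursive `-r` flag is just a suggestion here):

```shell
# -o space is shorthand for the columns NAME, AVAIL, USED,
# USEDSNAP, USEDDS, USEDREFRESERV, and USEDCHILD
zfs list -o space -r TankPrincipal
```

The USEDSNAP and USEDCHILD columns make it easy to see whether space is held by snapshots or by child datasets rather than by the dataset itself.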
Unfortunately, the 128K doesn’t give me a lot of hope that the contents were restored, since you hit Ctrl-C somewhere in the middle of Documents_Pro. Rewinding to a state before there is a valid uberblock (i.e., earlier than this) may not be possible even with disabling a LOT of safeguards.
With that said, it does show 11.1G used in the base TankPrincipal/Documents dataset.
You may be able to do sudo zfs mount -a -o readonly=on to mount them all and take a look.
… You’re right, I thought you meant “it’s not going to actually import it at all”
The first two directories were empty. The data in Documents_Pro was supposed to be organized into those first two directories. I imported it from a USB key a few days ago.
The sizes of all the other datasets make sense.
Maybe the “Telechargements” folder was larger, but I’m not even sure about that, and it’s not an important folder.
USED includes snapshots and child datasets. Think of it as a “recursive calculation” starting from that dataset.
USEDDS is only the dataset itself. It does not factor snapshots or children.
So if you see a dataset with USEDDS at almost nothing but a very high USED value, it means that no data is saved directly in the dataset itself; rather, it is saved in a child dataset somewhere lower down the chain, and/or the space is taken up by snapshots that are still holding on to “deleted” data.
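To see this breakdown side by side, you can request the relevant columns explicitly (using TankPrincipal from earlier in the thread as an example):

```shell
# USED is recursive (dataset + snapshots + children);
# USEDDS is the dataset's own data only
zfs list -o name,used,usedds,usedsnap,usedchild -r TankPrincipal
```

A row with a large USED but tiny USEDDS will show the difference sitting in either USEDSNAP or USEDCHILD.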
Give the sudo zfs mount -a -o readonly=on command a try and see if you can browse via the command line to /mnt/TankPrincipal
If you’re able to do this and you see the files there, then I can give you a command sequence that will commit the rollback. If you can’t see the files, then let us know - you might need to export and then re-import without the -N flag.
Now that the pool is imported, mounting should be instantaneous.
If there’s a defective disk in the mix, I would backup everything as soon as it’s mounted.
I don’t think the mount parameter readonly is required when the pool is imported as read-only. ZFS will (should) autodetect this and automatically mount the datasets as read-only (even without the parameter specified.)
Other than that, I think for mounting, it is readonly=ro
I’m hoping we can get to the point of confirming the contents via CLI here without having to redo that very long mount/replay process - then we only have to do it once more without the -o readonly=on and let it rewind.
Happy to be proven wrong, but I think we’re in agreement that the pool-level readonly should prevent any attempts to write to a dataset regardless.
So neither readonly=X works for a mount parameter. (The dataset and pool properties are different.)
As for the pool being imported as read-only, I just confirmed that this will indeed mount the datasets as read-only, without any option or override.[1] (So it seems ZFS auto-detects that the pool is imported in a read-only state.)
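For anyone wanting to double-check this themselves, the pool-level and dataset-level states can be inspected like so (TankPrincipal substituted as the example pool):

```shell
# Pool-level readonly state (set at import time)
zpool get readonly TankPrincipal

# Effective readonly property on every dataset in the pool
zfs get -r readonly TankPrincipal
```

With the pool imported read-only, the dataset property should report "on" even though it was never set explicitly.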
I tested this on Arch Linux, not TrueNAS. Other than the presence of a System Dataset, I’m not sure what else might differ. ↩︎