How to recover from rm -rf /* ? (lost everything)

It’s up to you to let us know if you remember a substantial difference. Take, for example, the TankPrincipal/Photos dataset: does it seem right for it to consume 179 GiB? Does that match its size from before the rm command?

You can also get more nuanced information with the -o space parameter:

zfs list -r -t filesystem -o space TankPrincipal

Unfortunately the 128K doesn’t give me a lot of hope that the contents were restored, since you hit Ctrl-C somewhere in the middle of Documents_Pro. Rewinding to a state before there is a valid uberblock (i.e. earlier than this) may not be possible even with disabling a LOT of safeguards.

With that said, it does show 11.1G used in the base TankPrincipal/Documents :thinking:

You may be able to do sudo zfs mount -a -o readonly=on to mount them all and take a look.

… You’re right, I thought you meant “it’s not going to actually import it at all”


Actually, all the USED column values make sense:

TankPrincipal/Documents/Documents_Famille   128K  4.09T   128K  /mnt/TankPrincipal/Documents/Documents_Famille
TankPrincipal/Documents/Documents_Perso     128K  4.09T   128K  /mnt/TankPrincipal/Documents/Documents_Perso
TankPrincipal/Documents/Documents_Pro      5.57G  4.09T  5.57G  /mnt/TankPrincipal/Documents/Documents_Pro

The first two directories were empty. The data in Documents_Pro was supposed to be organized into those first two directories; I imported it from a USB key a few days ago.

The sizes of all the other datasets make sense.

Maybe the “Telechargements” folder was larger, but I’m not even sure about that, and it’s not an important folder.

So with that -N option, does all this sound good?


@winnielinnie Yes, seriously, all those sizes look OK.

Here is the output of the command:

root@truenas[~]# 2024 Nov 26 12:01:12 truenas Device: /dev/sdd [SAT], 65 Currently unreadable (pending) sectors

root@truenas[~]# zfs list -r -t filesystem -o space TankPrincipal
NAME                                                              AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
TankPrincipal                                                     4.09T  3.03T        0B   8.82G             0B      3.02T
TankPrincipal/.system                                             4.09T  1.67G        0B   1.15G             0B       537M
TankPrincipal/.system/configs-66311c036e824820af44b2dbf4c55f10    4.09T  85.1M        0B   85.1M             0B         0B
TankPrincipal/.system/cores                                       1024M   117K        0B    117K             0B         0B
TankPrincipal/.system/netdata-66311c036e824820af44b2dbf4c55f10    4.09T   416M        0B    416M             0B         0B
TankPrincipal/.system/nfs                                         4.09T   165K        0B    165K             0B         0B
TankPrincipal/.system/rrd-66311c036e824820af44b2dbf4c55f10        4.09T  23.8M        0B   23.8M             0B         0B
TankPrincipal/.system/samba4                                      4.09T  3.05M     2.40M    666K             0B         0B
TankPrincipal/.system/services                                    4.09T   128K        0B    128K             0B         0B
TankPrincipal/.system/syslog-66311c036e824820af44b2dbf4c55f10     4.09T  8.30M        0B   8.30M             0B         0B
TankPrincipal/.system/webui                                       4.09T   117K        0B    117K             0B         0B
TankPrincipal/MyName                                             4.09T   231G        0B    231G             0B         0B
TankPrincipal/Documents                                           4.09T  11.1G        0B   5.57G             0B      5.57G
TankPrincipal/Documents/Documents_Famille                         4.09T   128K        0B    128K             0B         0B
TankPrincipal/Documents/Documents_Perso                           4.09T   128K        0B    128K             0B         0B
TankPrincipal/Documents/Documents_Pro                             4.09T  5.57G        0B   5.57G             0B         0B
TankPrincipal/Immich                                              4.09T   523M        0B    224K             0B       523M
TankPrincipal/Immich/Backups                                      4.09T   128K        0B    128K             0B         0B
TankPrincipal/Immich/Library                                      4.09T   128K        0B    128K             0B         0B
TankPrincipal/Immich/PostgreSQL                                   4.09T   522M        0B    522M             0B         0B
TankPrincipal/Immich/Profile                                      4.09T   128K        0B    128K             0B         0B
TankPrincipal/Immich/Thumbs                                       4.09T   128K        0B    128K             0B         0B
TankPrincipal/Immich/Uploads                                      4.09T   128K        0B    128K             0B         0B
TankPrincipal/Media                                               4.09T   302G        0B    302G             0B         0B
TankPrincipal/Photos                                              4.09T   179G        0B    179G             0B         0B
TankPrincipal/Sauvegardes                                         4.09T  1.89T     92.5M   1.89T             0B         0B
TankPrincipal/Telechargements                                     4.09T   357G        0B    357G             0B         0B
TankPrincipal/VMs                                                 4.09T  71.1G        0B    128K             0B      71.1G
TankPrincipal/Videos                                              4.09T   128K        0B    128K             0B         0B
TankPrincipal/ix-apps                                             4.09T  4.87G        0B    160K             0B      4.87G
TankPrincipal/ix-apps/app_configs                                 4.09T  2.06M        0B   2.06M             0B         0B
TankPrincipal/ix-apps/app_mounts                                  4.09T  7.27M        0B    128K             0B      7.15M
TankPrincipal/ix-apps/app_mounts/qbittorrent                      4.09T  6.39M        0B    139K             0B      6.26M
TankPrincipal/ix-apps/app_mounts/qbittorrent/config               4.09T  6.13M      448K   5.70M             0B         0B
TankPrincipal/ix-apps/app_mounts/qbittorrent/downloads            4.09T   128K        0B    128K             0B         0B
TankPrincipal/ix-apps/app_mounts/tailscale                        4.09T   304K        0B    128K             0B       176K
TankPrincipal/ix-apps/app_mounts/tailscale/state                  4.09T   176K        0B    176K             0B         0B
TankPrincipal/ix-apps/app_mounts/transmission                     4.09T   469K     85.2K    128K             0B       256K
TankPrincipal/ix-apps/app_mounts/transmission/config              4.09T   128K        0B    128K             0B         0B
TankPrincipal/ix-apps/app_mounts/transmission/downloads_complete  4.09T   128K        0B    128K             0B         0B
TankPrincipal/ix-apps/docker                                      4.09T  4.79G        0B   4.79G             0B         0B
TankPrincipal/ix-apps/truenas_catalog                             4.09T  74.5M        0B   74.5M             0B         0B
root@truenas[~]#


That’s promising to hear!

Not to rain on your parade, but since when was sdd reporting unreadable sectors?

I seriously hope you’re not also dealing with a potentially failing drive at the same time as you’re going through this recovery process. :flushed:

@winnielinnie Yes, I need to deal with that drive; there were only 2 bad sectors before all this, but the pool was healthy.

EDIT: by the way, what’s the difference between USED and USEDDS?


USED includes snapshots and child datasets. Think of it as a “recursive calculation” starting from that dataset.

USEDDS is only the dataset itself. It does not factor snapshots or children.

So if you see a dataset with USEDDS at almost nothing, but a very high USED value, it means that no data is saved directly in the dataset itself; rather, it is saved in a child dataset somewhere lower down the chain and/or has space taken up by snapshots that are still holding on to “deleted” data.
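As a quick sanity check (my own sketch, not part of the thread), the `zfs list -o space` columns add up as USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD. Using the TankPrincipal/Documents row from the listing above:

```shell
# Accounting identity for `zfs list -o space` (values in GiB, taken from
# the TankPrincipal/Documents row: USEDSNAP=0, USEDDS=5.57,
# USEDREFRESERV=0, USEDCHILD=5.57).
awk 'BEGIN { printf "USED = %.2fG\n", 0 + 5.57 + 0 + 5.57 }'
# prints: USED = 11.14G  (the listing rounds this to 11.1G)
```

So the 11.1G on the parent is just its own 5.57G of data plus the 5.57G held by Documents_Pro below it.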


Okkk! Thx a lot!

I’m getting excited because those numbers make more sense!

Give the sudo zfs mount -a -o readonly=on command a try and see if you can browse over the command-line to /mnt/TankPrincipal

If you’re able to do this and you see the files there, then I can give you a command sequence that will commit the rollback. If you can’t see the files, then let us know - you might need to export and then re-import without the -N flag.
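A minimal sketch of that spot-check, assuming the dataset layout from the listing earlier in the thread (the `check_dataset` helper name is mine; everything here is purely read-only):

```shell
# Report whether a mounted dataset directory actually contains files.
check_dataset() {
  if [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]; then
    echo "$1: has contents"
  else
    echo "$1: empty or not mounted"
  fi
}

# After `sudo zfs mount -a`, spot-check the rewound datasets:
for ds in Documents_Famille Documents_Perso Documents_Pro; do
  check_dataset "/mnt/TankPrincipal/Documents/$ds"
done
```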


@HoneyBadger Thanks for the command.

So can I enter the command right away? No need to “unmount” those datasets or something?

sudo zfs mount -a -o readonly=on &

Maybe I can add the ‘&’ to free PuTTY after the command?

Now that the pool is imported, mounting should be instantaneous.
If there’s a defective disk in the mix, I would back up everything as soon as it’s mounted.
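For example, something like this (a sketch only: the /mnt/BackupDisk destination is a placeholder for wherever your backup target lives, and the rsync flags are one reasonable choice, not the only one):

```shell
# Copy everything off the failing disk once the pool is mounted.
# A read-only source is fine for rsync.
SRC=/mnt/TankPrincipal
DEST=/mnt/BackupDisk/TankPrincipal   # placeholder destination
if [ -d "$SRC" ]; then
  rsync -aHX --info=progress2 "$SRC/" "$DEST/"
else
  echo "source not mounted yet: $SRC"
fi
```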


Ok, so here is the output:

root@truenas[~]# sudo zfs mount -a -o readonly=on
cannot mount 'TankPrincipal': Invalid argument
root@truenas[~]#

Check with sudo zfs mount to see if it picked anything else up, and just errored on the top-level TankPrincipal because the pool itself is already mounted.

root@truenas[~]# sudo zfs mount
boot-pool/ROOT/24.10.0.2        /
boot-pool/ROOT/24.10.0.2/audit  /audit
boot-pool/ROOT/24.10.0.2/conf   /conf
boot-pool/ROOT/24.10.0.2/data   /data
boot-pool/ROOT/24.10.0.2/etc    /etc
boot-pool/ROOT/24.10.0.2/home   /home
boot-pool/ROOT/24.10.0.2/mnt    /mnt
boot-pool/ROOT/24.10.0.2/opt    /opt
boot-pool/ROOT/24.10.0.2/root   /root
boot-pool/ROOT/24.10.0.2/usr    /usr
boot-pool/ROOT/24.10.0.2/var    /var
boot-pool/ROOT/24.10.0.2/var/ca-certificates  /var/local/ca-certificates
boot-pool/ROOT/24.10.0.2/var/log  /var/log
boot-pool/ROOT/24.10.0.2/var/log/journal  /var/log/journal
boot-pool/grub                  /boot/grub
boot-pool/.system               /var/db/system
boot-pool/.system/cores         /var/db/system/cores
boot-pool/.system/nfs           /var/db/system/nfs
boot-pool/.system/samba4        /var/db/system/samba4
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941  /var/db/system/configs-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941  /var/db/system/netdata
boot-pool/.system/cores         /var/lib/systemd/coredump
root@truenas[~]#

I don’t think the readonly mount parameter is required when the pool is imported as read-only. ZFS will (should) detect this and automatically mount the datasets as read-only, even without the parameter specified.

Other than that, I think for mounting, it is readonly=ro


Yeah, I was just thinking that the pool level readonly should make it safe to just sudo zfs mount -a here.

And readonly=on is valid at least according to the docs:

https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops.7.html#readonly

That’s the dataset property.

For the mount command, I believe it is readonly=ro


I’m hoping we can get to the point of confirming the contents via CLI here without having to redo that very long mount/replay process - then we only have to do it once more without the -o readonly=on and let it rewind.

Happy to be proven wrong, but I think we’re in agreement that the pool-level readonly should prevent any attempts to write to a dataset regardless.


Hah! We were both wrong!

It accepts ro or rw (passed via -o, e.g. sudo zfs mount -o ro) as temporary mount option overrides, in the same syntax as the legacy non-ZFS mount command.

I just tested and confirmed this.

So neither readonly=X works for a mount parameter. (The dataset and pool properties are different.)

As for the pool being imported as read-only, I just confirmed that this will indeed mount the datasets as read-only, without any option or override.[1] (So it seems ZFS auto-detects that the pool is imported in a read-only state.)


  1. I tested this on Arch Linux, not TrueNAS. Other than the presence of a System Dataset, I’m not sure what else might differ. ↩︎


>I run Arch BTW
:wink:

@Berboo you should be able to issue sudo zfs mount -a and then check to see if it will populate the contents of /mnt/TankPrincipal for you
