Data recovery from ZVOL snapshot

My setup:
I have two ReServers thanks to Wendell’s video "The Ideal Home Server! Is it Possible?". One is Alpha-NAS (local), and the other is Omega-NAS (remote). Both have 2x10 TB HDDs.

On Alpha-NAS, I have a dataset called ALPHA that contains one ZVOL also named ALPHA. It’s shared to Windows over iSCSI. The volume is protected by daily snapshots with a 100-day retention policy. All snapshots are replicated to Omega-NAS over a VPN. That part works fine.
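
For context, those daily snapshots can be listed from the shell with something like this (the zvol name below follows the description above; the zfs list output later in the thread shows it is actually called ALPHA/Magazyn):

# Daily snapshots of the zvol, newest last:
sudo zfs list -t snapshot -r ALPHA/ALPHA
# Running the same command on Omega-NAS should show the replicated copies.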

The problem:
I deleted a single file and need to recover it. Here’s what I did:

  1. Logged into Alpha-NAS → Data Protection → Snapshots
  2. Selected the appropriate snapshot → Clone to New Dataset (see the CLI sketch after these steps)
  3. Navigated to Datasets → the cloned dataset appears in the list
  4. Went to Shares → created a new iSCSI target → went through the wizard
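
For reference, step 2 ("Clone to New Dataset") corresponds roughly to a zfs clone on the CLI; the snapshot and clone names below are only illustrative, taken from the zfs list output later in the thread:

# Create a writable clone of the chosen snapshot alongside the original zvol:
sudo zfs clone ALPHA/Magazyn@auto-2025-04-11_00-00 ALPHA/Magazyn-auto-2025-04-11_00-00-clone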

Then on Windows:

  1. Opened iSCSI Initiator
  2. Went to Discovery → clicked Refresh
  3. The new target appeared in the list → clicked Connect
  4. It connected successfully

Windows Disk Management does not recognize any partitions, and the data in that volume is not even close to what is in the original volume.

After some digging I came to a conclusion; correct me if I’m wrong.

The cloned snapshot doesn’t contain all blocks (it has no copies of the unchanged blocks), only the changed ones. I would need to roll back to that snapshot to be able to get the data, but I don’t have enough free space in the dataset (ALPHA) that contains this zvol to fit a recovered copy into it.
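
For what it’s worth, how much data a clone can actually see, versus how much space it uses on its own, can be checked with something like this (the clone name is the one that appears later in the thread):

# 'referenced' is all the data visible through the clone;
# 'used' is only the space unique to the clone itself:
sudo zfs get origin,used,referenced ALPHA/Magazyn-auto-2025-04-11_00-00-clone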

Considered option:

Can I use zfs promote, but recover the zvol to another dataset (BETA)? How should I format the command?

You can zfs send that snapshot, e.g. over SSH to a different system, and the stream will contain the entire zvol data as of the time the snapshot was taken. Or do it locally: if you have a second pool with enough space, you can zfs receive it to that one. Either way you will get a full copy of the zvol.
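
A minimal sketch of both variants (all names below are placeholders):

# Remotely, over SSH to a different system:
zfs send pool/zvol@snapshot | ssh user@other-nas zfs receive otherpool/zvol-restored
# Locally, to a second pool with enough free space:
zfs send pool/zvol@snapshot | zfs receive secondpool/zvol-restored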

Did you mean “pool” when you wrote “dataset”? I find your description a bit confusing, to be honest.

I can’t post a screenshot or a link to a screenshot :angry:

I prefer to do it locally (internally on the NAS) due to the sheer size of ~5 TB.

I wish to recover "Magazyn-auto-2025-04-11_00-00-clone" to the BETA dataset.

How should I format the entire command?

level1techs . us-east-1 . linodeobjects . com/original/4X/b/8/1/b811e4fdda084587d86d633a96e7ea0a11698261.png

What do you mean by “BETA Dataset”? Please post the output of zfs list - not as a screenshot but as text formatted as code, like so:

this
is
code

Thanks. Also “Magazyn-auto-2025-04-11_00-00-clone” is not a snapshot name. Snapshots always contain one “@”.

I can help you if I get proper information :wink:


Try to open my link; just remove the spaces around the dots.

I am not opening a link to an external image. Please post text. Copy & paste from your shell session is easy.

admin@truenas-alpha[~]$ sudo zfs list
[sudo] password for admin: 
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
ALPHA                                                   4.74T  4.23T    96K  /mnt/ALPHA
ALPHA/.system                                           1.95G  4.23T  1.47G  legacy
ALPHA/.system/configs-ae32c386e13840b2bf9c0083275e7941  23.8M  4.23T  23.8M  legacy
ALPHA/.system/cores                                       96K  1024M    96K  legacy
ALPHA/.system/netdata-ae32c386e13840b2bf9c0083275e7941   465M  4.23T   465M  legacy
ALPHA/.system/nfs                                        120K  4.23T   120K  legacy
ALPHA/.system/samba4                                     456K  4.23T   456K  legacy
ALPHA/Magazyn                                           4.74T  4.23T  4.41T  -
ALPHA/Magazyn-auto-2025-04-11_00-00-clone                132K  4.23T  4.53T  -
ALPHA/Magazyn-auto-2025-04-18_00-00-clone                132K  4.23T  4.54T  -
BETA                                                    3.46T  5.51T   104K  /mnt/BETA
BETA/Wideo                                              3.46T  5.51T    96K  /mnt/BETA/Wideo
BETA/Wideo-auto-2025-04-19_00-00-clone                     0B  5.51T    96K  /mnt/BETA/Wideo-auto-2025-04-19_00-00-clone
BETA/Wideo-auto-2025-04-26_00-00-clone                     0B  5.51T    96K  /mnt/BETA/Wideo-auto-2025-04-26_00-00-clone
BETA/Wideo/smb                                          3.46T  5.51T  3.46T  /mnt/BETA/Wideo/smb
boot-pool                                               7.44G   192G    96K  none
boot-pool/ROOT                                          7.38G   192G    96K  none
boot-pool/ROOT/24.04.2                                  2.40G   192G   164M  legacy
boot-pool/ROOT/24.04.2/audit                              80K   192G   672K  /audit
boot-pool/ROOT/24.04.2/conf                              140K   192G   140K  /conf
boot-pool/ROOT/24.04.2/data                              108K   192G   284K  /data
boot-pool/ROOT/24.04.2/etc                              8.05M   192G  6.89M  /etc
boot-pool/ROOT/24.04.2/home                                0B   192G   128K  /home
boot-pool/ROOT/24.04.2/mnt                                96K   192G    96K  /mnt
boot-pool/ROOT/24.04.2/opt                              74.1M   192G  74.1M  /opt
boot-pool/ROOT/24.04.2/root                                8K   192G   476K  /root
boot-pool/ROOT/24.04.2/usr                              2.12G   192G  2.12G  /usr
boot-pool/ROOT/24.04.2/var                              35.2M   192G  33.3M  /var
boot-pool/ROOT/24.04.2/var/ca-certificates                96K   192G    96K  /var/local/ca-certificates
boot-pool/ROOT/24.04.2/var/log                           980K   192G  59.1M  /var/log
boot-pool/ROOT/24.10.0.2                                2.28G   192G   165M  legacy
boot-pool/ROOT/24.10.0.2/audit                            96K   192G   736K  /audit
boot-pool/ROOT/24.10.0.2/conf                           6.83M   192G  6.83M  /conf
boot-pool/ROOT/24.10.0.2/data                            100K   192G   300K  /data
boot-pool/ROOT/24.10.0.2/etc                            7.32M   192G  6.38M  /etc
boot-pool/ROOT/24.10.0.2/home                              0B   192G   148K  /home
boot-pool/ROOT/24.10.0.2/mnt                             104K   192G   104K  /mnt
boot-pool/ROOT/24.10.0.2/opt                              96K   192G    96K  /opt
boot-pool/ROOT/24.10.0.2/root                              8K   192G   488K  /root
boot-pool/ROOT/24.10.0.2/usr                            2.05G   192G  2.05G  /usr
boot-pool/ROOT/24.10.0.2/var                            56.2M   192G  32.9M  /var
boot-pool/ROOT/24.10.0.2/var/ca-certificates              96K   192G    96K  /var/local/ca-certificates
boot-pool/ROOT/24.10.0.2/var/log                          22M   192G  63.6M  /var/log
boot-pool/ROOT/24.10.0.2/var/log/journal                21.8M   192G  21.8M  /var/log/journal
boot-pool/ROOT/24.10.2                                  2.70G   192G   165M  legacy
boot-pool/ROOT/24.10.2/audit                            1.50M   192G   776K  /audit
boot-pool/ROOT/24.10.2/conf                             6.82M   192G  6.82M  /conf
boot-pool/ROOT/24.10.2/data                              804K   192G   292K  /data
boot-pool/ROOT/24.10.2/etc                              7.31M   192G  6.39M  /etc
boot-pool/ROOT/24.10.2/home                              324K   192G   148K  /home
boot-pool/ROOT/24.10.2/mnt                               104K   192G   104K  /mnt
boot-pool/ROOT/24.10.2/opt                                96K   192G    96K  /opt
boot-pool/ROOT/24.10.2/root                              752K   192G   488K  /root
boot-pool/ROOT/24.10.2/usr                              2.39G   192G  2.39G  /usr
boot-pool/ROOT/24.10.2/var                               126M   192G  32.0M  /var
boot-pool/ROOT/24.10.2/var/ca-certificates                96K   192G    96K  /var/local/ca-certificates
boot-pool/ROOT/24.10.2/var/log                          93.3M   192G  64.2M  /var/log
boot-pool/ROOT/24.10.2/var/log/journal                  17.0M   192G  17.0M  /var/log/journal
boot-pool/grub                                          8.20M   192G  8.20M  legacy
admin@truenas-alpha[~]$ 

Which snapshot of Magazyn-auto-2025-04-11_00-00-clone do you want to restore? You can get a list with

zfs list -t snapshot -r ALPHA/Magazyn-auto-2025-04-11_00-00-clone

Aaaaa wait.

Magazyn-auto-2025-04-11_00-00-clone

It’s one of the “recovered” clones of the Magazyn zvol. How do I list the snapshots of “Magazyn”?

zfs list -t snapshot -r ALPHA/Magazyn
admin@truenas-alpha[~]$ sudo zfs list -t snapshot -r ALPHA/Magazyn
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
ALPHA/Magazyn@auto-2025-03-28_00-00     0B      -  4.42T  -
ALPHA/Magazyn@auto-2025-03-29_00-00     0B      -  4.42T  -
ALPHA/Magazyn@auto-2025-03-30_00-00     0B      -  4.42T  -
ALPHA/Magazyn@auto-2025-03-31_00-00     0B      -  4.42T  -
ALPHA/Magazyn@auto-2025-04-01_00-00     0B      -  4.42T  -
ALPHA/Magazyn@auto-2025-04-02_00-00     0B      -  4.42T  -
ALPHA/Magazyn@auto-2025-04-03_00-00  83.9M      -  4.42T  -
ALPHA/Magazyn@auto-2025-04-04_00-00  92.6M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-05_00-00   156M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-06_00-00  65.0M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-07_00-00  3.37M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-08_00-00  2.42M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-09_00-00  2.98M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-10_00-00   157M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-11_00-00   133M      -  4.53T  -
ALPHA/Magazyn@auto-2025-04-12_00-00  44.0M      -  4.53T  -

I wish to recover that snap from 11 April.

When you joined this forum, you got a PM from @TrueNAS-Bot. Find and reply to that PM for a tutorial on using this forum, which will also increase your trust level and allow you to do these things.


Hi! To find out what I can do, say @truenas-bot display help.

zfs send ALPHA/Magazyn@auto-2025-04-11_00-00 | zfs receive BETA/Magazyn
zfs set readonly=off BETA/Magazyn
zfs destroy BETA/Magazyn@auto-2025-04-11_00-00

This will give you a full read/write copy of the zvol, without any attached snapshot, on the pool BETA.
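
A quick way to verify the result afterwards (same names as above):

# Should show the zvol with no snapshots left on it:
zfs list -t all -r BETA/Magazyn
# Should report readonly=off:
zfs get readonly BETA/Magazyn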

After this operation, I connect this recovered zvol to Windows via iSCSI and the partitions should be readable? I didn’t mention it earlier, but this volume is encrypted on the Windows side with BitLocker.

admin@truenas-alpha[~]$ su root
Password: 
root@truenas-alpha[/home/admin]# zfs send ALPHA/Magazyn@auto-2025-04-11_00-00 | zfs receive BETA/Magazyn
zsh: command not found: zfs
zsh: command not found: zfs
root@truenas-alpha[/home/admin]#

I don’t understand why the zfs command is sometimes recognized and sometimes not.

You can use /usr/sbin/zfs, which will always be found.

The reason it sometimes isn’t found is that /usr/sbin is not in the current search path. It is for root, but only if you perform a full login, e.g. su - root, not just su root.
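
In other words, either of these should work (illustrative session):

# A full login shell picks up root's PATH, which includes /usr/sbin:
su - root
zfs list
# Or stay in the current shell and call the binary by its absolute path:
sudo /usr/sbin/zfs list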