Automatic ZVOL Snapshots :( !? and other weird stuff

Trying to create backups from a (VM) snapshot, I am running into some strange things. To mention a few:

  • I need automatic snapshots as the base for rsync or replication tasks and … that is not possible! You can only create automatic snapshots for a dataset, which is very strange, since you can create a manual snapshot from a zvol! Of course you can work around this by creating the zvol in a dedicated dataset, but nevertheless very, very weird!!
  • If you create an Rsync task, in the left corner there is "Source", which is definitely wrong!!! It is Local, not Source.
  • Recursive is of course related to the source, so it should be near Source in the GUI.
  • Replication: SSH is related to the transport options.
  • Weird that the name field in the Rsync and Replication task overviews is so short that you absolutely cannot see which task is which.
  • Strange that you cannot clean up the jobs list other than via a reboot …

Just some thoughts …

Zvols are not visible in the filesystem, so other than manually scripting a dump into a file with zfs send and then rsyncing that file, you simply cannot copy them with rsync.
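
A minimal sketch of that manual workaround - the pool, zvol, user and host names here are only examples:

# snapshot the zvol, dump the snapshot into a file, then rsync that file
zfs snapshot tank/vm-disk1@backup1
zfs send tank/vm-disk1@backup1 > /mnt/tank/dumps/vm-disk1.zfs
rsync -av /mnt/tank/dumps/vm-disk1.zfs backupuser@backupnas:/mnt/backup/dumps/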

That’s what ZFS replication tasks are for - among other good things they keep the entire snapshot history on the destination if desired, allow you to set different retention times for source and destination, and let you copy only certain snapshots, e.g. snapshot hourly but replicate only once per day, etc.

  • Place all zvols you want to back up in a common dataset
  • Create a recursive ZFS replication task for that dataset

Done. :slightly_smiling_face:
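
For reference, this is roughly what that setup boils down to at the CLI level - the dataset, zvol and snapshot names below are only examples, and the snapshot and replication tasks do this for you:

# one parent dataset holds every zvol you want backed up
zfs create tank/vms
zfs create -V 20G tank/vms/vm-disk1

# a recursive snapshot covers the dataset and all zvols below it
zfs snapshot -r tank/vms@auto-2024-01-01_00-00

# a recursive replication sends the whole tree in a single stream
zfs send -R tank/vms@auto-2024-01-01_00-00 | zfs receive -u backup/vms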

Patrick,

I already discovered that Rsync does not work for zvols, so I was and am trying Replication as an alternative.

However, where things work perfectly within one pool, it becomes harder between pools, and … I have not yet managed it between TrueNAS systems. And … I do not like the fact that the only option is to use the root account.

What does work are simple CLI commands like:
zfs send oldpool/path | zfs receive newpool/path/to/zvol

But where the CLI is simple, using the GUI is a drama. I have spent hours on it, but I have not yet managed it, not for zvols.
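
For what it is worth, between two systems the CLI equivalent is the same pipe run over SSH - the snapshot, dataset and host names below are only placeholders:

# full send of a snapshot to the remote system
zfs send oldpool/path@snap1 | ssh root@newnas zfs receive newpool/path/to/zvol

# later, send only the changes between two snapshots incrementally
zfs send -i oldpool/path@snap1 oldpool/path@snap2 | ssh root@newnas zfs receive newpool/path/to/zvol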

I have three TrueNAS systems - two CORE, one SCALE - and I run replication on the same NAS, from CORE to CORE, and from SCALE to CORE.

All working flawlessly for years.

root to root is IMHO not a problem when the NAS systems are not open to the Internet - all in private environments connected via VPN here - and password authentication is disabled, only public key allowed.

I can post a summary of my tasks if you are interested.

I do not have much time today; I will make further attempts tomorrow. Since I am now concentrating completely on replication, I hope I will be successful.

Rsync works (for files). Note that:

  • I do not use the root account there
  • pull from SCALE (to a CORE system)
  • generate keys using Bitvise
  • import the keys on SCALE and CORE
  • place the private key in the user's home directory under .ssh/id_rsa (hm, what a name)
  • change the home dir to 751 and the key to 600
  • and then remove the password option
  • I did create a user group with access to the source (on CORE) and the destination (on SCALE), and added that group to the account used

Relatively complex, but it works (a rough sketch of the key setup follows below).
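
A minimal sketch of those key and permission steps, assuming a dedicated account called backupuser - the user name, paths and host name are only examples:

# key material generated with Bitvise ends up under the user's home
mkdir -p /home/backupuser/.ssh

# permissions as described above: 751 on the home dir, 600 on the key
# (the .ssh directory itself is usually kept at 700)
chmod 751 /home/backupuser
chmod 700 /home/backupuser/.ssh
chmod 600 /home/backupuser/.ssh/id_rsa

# verify that key-based login works without a password prompt
ssh -i /home/backupuser/.ssh/id_rsa backupuser@remotenas echo ok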

Related to replication

  • I still think it is more than bizarre that snapshot tasks cannot create ZVOL snapshots!! To overcome this I place the ZVOL in a (dedicated) dataset.

I did create keys using Bitvise (fantastic file server and client)
and verified that I could log into both systems using the Bitvise client.

  • created SSH Connections (under credentials/backup), one of them using the source's root as the user (I think there is no way around that)
  • created snapshot tasks on the source system for the dataset containing the zvol
  • took care that the source and destination paths are accessible
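
One quick check from the shell, assuming the connection uses root and key authentication - the host and dataset names are only placeholders:

# key-based login as root on the source should work without a password
ssh root@source-nas echo ok

# and the dataset holding the zvol, plus its snapshots, should be visible
ssh root@source-nas zfs list -t snapshot -r tank/vms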

I do not know what is still wrong; it is more complicated than it IMHO should be, and the GUI is IMHO … not optimal.

One of the problems is perhaps that … I do not like at all having to define the root account of system A on system B. So I did try to work around that …

I did find the problem …

Only Replicate Snapshots Matching Schedule … I assumed / read that as "only replicate snapshots matching the naming scheme".

So I had "Only Replicate Snapshots Matching Schedule" checked, but that somehow has to do with the local schedule …

Glad I found the issue, but I am not at all surprised that it took me many hours to get the sync up and running.

Yeah … not debating the UX.

But then zettarepl is a really powerful tool. It’s really difficult to find a good balance between a simple setup with reasonable defaults and, OTOH, not limiting users so much that they won’t use it for that reason.

For me TrueNAS snapshots and replication work perfectly. I bet my company on it :wink: We run TrueNAS CORE as our main hypervisor platform including both our domain controllers.

Caveat: I only ever used push. I like to be in control of source and destination retention from a single point.

Note that I would normally prefer push, since generally speaking the source is the data owner.

However, since push would imply that I would have to give the source the root private key / root access to the destination … which IMHO is even worse, I decided to go for pull, given the fact that the destination (the SCALE system) is my main system at the moment.

Perhaps I could work around using admin/root access on SCALE by giving a user sudo without a password, but that is also a terrible idea.