For several years I have run snapshots and some replication jobs of all datasets between two TrueNAS servers with complete success.
Unfortunately, for organizational reasons, I had to give up one of the servers based on TrueNAS and rebuild the software and hardware.
The background is as follows… the source is a TN Scale server. There is still an SFP+ direct connection between this and the destination server, which now runs a bare-metal installation of Windows Server 2025. The Windows server has various RAIDs where these datasets are to be stored, and it serves as a central Veeam instance with a tape library attached. After long tests over several months, I was not able to get this environment to perform adequately under TrueNAS. So I had to sacrifice that machine and repurpose the hardware.
SSH is set up on both sides, on the TN Scale side the remote host key can be retrieved via the backup credentials, and an SSH test connection via Scale shell is also successful.
My problem (perhaps of understanding) is that with the destination IP specified, I can't get anything listed on the destination side of the replication job.
Is it possible that I’m doing something wrong out of habit and can’t set up a replication job in Windows?
Or do I have to switch to Rsync tasks altogether? I was advised against this between the TN servers at the time, and I also noticed that, at least under the circumstances back then, rsync was galaxies slower than a replication task.
No, I didn’t think of that, and I think I would prefer not to experiment with such things and stay as native as possible.
Just one example: if I saw advantages in file systems other than NTFS, there would be the possibility of creating the destination data repositories as ReFS using the onboard tools.
But even that has been discussed sufficiently in Veeam spheres. Even if some people have different opinions on the matter, the consensus under Windows is usually in favor of NTFS.
I have another idea, but it is not fully thought through yet and I am not yet very familiar with Veeam, but it is as follows:
In the event that the replication as described above from TN Scale → Windows Server does not work, the repo could be exposed via SMB shares on TN with the VSS flag enabled.
The share is then mounted under Veeam via the SFP+ direct connection, and a job for a full and then incremental backups is configured against one of the respective RAID destinations on the Windows server.
The whole thing relates to files that are stored on the TN server in its datasets, and would be closest to the original replication idea, even if Veeam itself takes control of it.
From this point on, I can then continue with offline backups, etc.
I am not yet sure how I can back up both VMs that run under TN Scale under Veeam, whether in the same way or differently.
Would that be a better alternative than an Rsync?
Because there's no need to force TN Scale to do a push if that is only possible with rsync instead of a replication job, and would then be even slower.
@scyto has a great suggestion, but looking at the ZFS for Windows repo I am really uncertain how ready ZFS for Windows is for production use. The number of maintainers seems small, development seems lethargic, and it does not appear to be kept frequently up to date with the OpenZFS source.
Have you thought about running a Linux or TrueNAS image as a Virtual Machine on the Windows Server box in order to have standard OpenZFS to replicate to?
If you do need to use rsync…
ZFS replication is faster because it is based on snapshots:
1. Snapshots are block-based, so only changed blocks are sent rather than entire changed files.
2. ZFS knows what has changed simply by querying the remote snapshot name, so it can stream the changes at full speed. rsync, in contrast, has to query the details of each file stored at the other end to decide whether to send it, and then sends files individually, which is a very interactive and thus slower network protocol.
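The difference is easy to see on the command line. A sketch with placeholder pool, dataset, snapshot, and host names (tank/data, backup/data, 192.168.10.2 are all assumptions, adjust to your layout):

```shell
# ZFS incremental replication: the sender streams only the blocks that
# changed between two snapshots; no per-file comparison is needed.
zfs send -i tank/data@auto-2025-01-01 tank/data@auto-2025-01-02 \
  | ssh backup@192.168.10.2 "zfs receive -F backup/data"

# rsync, by contrast, walks the whole tree and negotiates every file
# with the far end before deciding whether to transfer it:
rsync -a --delete /mnt/tank/data/ backup@192.168.10.2:/backup/data/
```

Both transfers can saturate the link once data flows; the per-file negotiation is where rsync loses time on large trees with few changes.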
In theory rsync requires a server at one end and a client at the other, but I have also made it work over SMB (though you may lose ACL permissions that way).
TrueNAS Scale has an Rsync client built in, but in current releases does NOT include Rsync server as standard - you need to install it as an app.
So you will need to decide how to run rsync between TrueNAS and Windows, and where to initiate the transfer by a cron / Task Scheduler job.
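As a sketch of what a push from the TrueNAS side could look like, assuming the Windows box runs an rsync daemon (e.g. via cwRsync or WSL, both untested assumptions on my part) with a module named `backup` in its rsyncd.conf:

```shell
# Push from TrueNAS SCALE to a hypothetical rsync daemon on the Windows box.
# "backup" is an assumed module name defined in the daemon's rsyncd.conf.
rsync -a --delete /mnt/tank/data/ rsync://192.168.10.2/backup/data/

# Scheduled from the TrueNAS side via cron (or an Rsync Task in the UI):
# 0 2 * * * rsync -a --delete /mnt/tank/data/ rsync://192.168.10.2/backup/data/
```

The same command can equally be initiated from the Windows side via Task Scheduler, pulling instead of pushing.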
Thanks for your feedback @Protopia
No, I didn’t think of that because I wanted to avoid it from the start.
The reason is that the server’s specs are not sufficient to run a virtualized instance at full performance.
I tried the same thing the other way around for months under TN Scale & Core, with SAS controller passthrough and other things, running a Windows Server VM for Veeam, and tested myself to death.
Even the data stream to the tape library could not be kept at a stable level, and my budgeting of resources between the bare-metal OS and the guest OS in the hypervisor turned out to be a naive calculation that realistically always ends up wrong under those circumstances.
On the other hand, neither TrueNAS nor Veeam are recommended to be virtualized.
Yes, that's how I remember rsync too, from many years ago, which is why I was advised to run a replication job between the TN servers instead.
So I’m getting closer and closer to the assumption that a replication job from TN Scale → Windows Server is not possible at all?
@Protopia what do you think about the SMB + VSS pull task idea controlled via Veeam as described in my previous answer to @scyto?
Sorry - but I don’t have any experience with Veeam (and my tape backup skills are a couple of decades old).
However, I am unclear why you want to both replicate online to a remote location (a branch location where you cannot add more servers??) and also back up to tape there.
Can you implement tape backup locally as an alternative, and maybe courier the tapes to the remote site?
No, my equipment is here at home, all in the same room.
This is not about targeting a remote location.
You have to imagine it as a physical server like this:
1st instance → (TN Scale) main unit with slow and fast storage.
2nd instance → (Windows 2025 Server) VBR server with local TN Scale dataset copy
3rd instance → (tape library) is connected to the VBR server via SAS and is controlled via it. Offline backups are fed into the VBR server via the local TN Scale repo.
Among other things, there is a direct 10GBit interconnect between instances 1 and 2.
This setup enables me, for example, to run tape jobs that take significantly longer without instance No. 1 having to be switched on. This is because the copy is available as a local repository.
The question here is how best to do this if we assume that replication from TN Scale → Windows Server is not possible?
As far as I know, the only options left are either a push controlled via Rsync from TN Scale → Windows Server, or a pull controlled by Veeam via SMB (possibly with VSS/FSRVP enabled) as an unstructured data source job.
However, it would be helpful to have a definitive statement beforehand that a TN Scale replication job cannot have a Windows server as its destination, but only a TrueNAS* or ZFS system.
One option would be to use Syncthing as pseudo replacement for ZFS Send/Receive https://syncthing.net/
You can run it as an App on SCALE
Where you'd effectively have Windows' Syncthing instance push to TrueNAS (or vice versa, or bi-directionally), where you could then continue to have the snapshots you know and love on the same dataset. This topology would work around a lot of the shortcomings of rsync.
I checked the Syncthing links, but I still don't get it right now.
I already gave up the idea of Rsync.
Still thinking about a simple SMB mount and a Veeam pull job to fill up the new local disk repos, first with a full, then incrementals.
As far as I understand, OpenZFS on OS X is a one-man operation, and OpenZFS on Windows is a side activity of that. And the developer has a day job. In Tokyo. Which probably means that said day job extends into the night.
So “production ready” is a few light-years away.
Syncthing includes the ability to maintain ownership and extended attributes during transfers between nodes (systems). This ensures ACLs and permissions remain consistent across TrueNAS SCALE systems during uni- and bi-directional Syncthing moves. You can configure this setting on a per-folder basis.
But…
In the case of a disaster, say the TN Scale server burning down, I think keeping the ACLs is nice to have, but not really a pro argument IMHO.
At that point, having multiple backups of the core data, even without this metadata, would be more important.
This is also why I'm still thinking about doing it via SMB and dealing with the missing metadata.
And who knows: when talking about tape archives and so on, I have no guarantee that in 10, 15 or more years I'll still be running a TrueNAS system, or that Syncthing will still be operational.
My fallback is Veeam, and i try to keep it as simple as possible.
I have built up two physical Veeam servers and one virtualized one.
Only one machine is operational at any time; the others are inactive, on hold.
The 2nd physical machine is not even here; it is packed away in the cellar so that, if everything else is already dead, I can still access the offline backups on tape as the last line of data defense.
The Veeam database is (and will be) saved multiple times and can be passed around to every instance to recall the whole environment.
I’ve had to learn the hard way all too often not to make things too complicated, not to use any additional tools or similar, no matter how great, to use established systems and to rely on file systems or protocols that are so standard that you’re fed up with them.
And when it comes to hardware, diversify and have a broad range of options.
So if TN Scale intends its on-board replication to work only between TrueNAS systems, then so be it. I can only be grateful for the clarification.
Forcing this with Syncthing somehow, well, I’m a bit worried.
For me, SMB is the only option left.
This is not a SCALE thing. Replication is a ZFS feature: it can only work with ZFS on both ends.
If one end is not ZFS, you're stuck with rsync or similar programs, including Syncthing: something which will painstakingly compare files on both ends and copy whatever has changed.
Yes, I’m sorry, I expressed it incorrectly.
But you got to the point.
This is what I'm trying to do with a simple SMB mount and a Veeam pull job: fill up the new local disk repos with a full backup first, then incrementals, in line with the findings here.
ZFS replication is by far the best solution, it needs ZFS at both ends, ZFS for Windows is not production ready, so the only solution is to run ZFS in a Linux virtual machine under Windows.
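A sketch of the receive side under that approach, assuming a small Linux VM (e.g. under Hyper-V) with a virtual disk passed in; the pool, dataset, device, and user names are all placeholders:

```shell
# Inside the Linux VM: create a pool on the passed-in virtual disk
# and a target dataset for the replica.
zpool create backup /dev/sdb
zfs create backup/tn-replica

# Delegate just enough rights for an unprivileged SSH user to
# receive replication streams into that dataset.
zfs allow backupuser receive,create,mount backup/tn-replica

# TrueNAS SCALE can then point a normal replication task at this VM
# over SSH, exactly as it would at another TrueNAS box.
```

Whether the VM's storage and CPU budget makes this worthwhile on that particular server is of course the open question raised earlier in the thread.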
Does anyone have an idea why, every time the local user “test” logs on to the SMB share, another directory is created below the dataset “backup”?
This is new to me. So far I have only managed permissions with AD-integrated groups via POSIX ACLs on the datasets, and this did not happen for those group members.
But this seems to be the case with the local TrueNAS user.
I have looked at the user, the SMB service, the share and the dataset configuration - but I cannot find any indication of how to suppress this.