Replication Help

Hi All,

So I have 3 TrueNAS Scale setups running. Two are on site; one will be offsite once this is all sorted out.

My two onsite TrueNAS systems can replicate without any issue at all, but I cannot push or pull to the TrueNAS that will be offsite.

There is a site-to-site OpenVPN server on each end of the setup. I can ping, communicate, and do an SMB transfer from the onsite TrueNAS to the offsite TrueNAS with zero issues at all. The problem seems to lie within TrueNAS replication, and I cannot for the life of me sort it out after four days. I can ssh root@ the remote TrueNAS and log in with the key without issue, and vice versa. I have tried both the SSH and SSH+NETCAT transports, all with the same result.

Sometimes, depending on what I have set up, it will run and then fail after 5 or 6 minutes. -.- I do not think

I have had timeout errors and certificate errors, but now it just seems to be connection timeouts. It makes no sense to me at this point; I've been working on it for too long, I think. Any assistance and feedback would be greatly appreciated.

While writing this… I was able to set up a working Pull (SSH) from the remote TrueNAS. I still cannot do a Push from the local one.

The firewall is disabled on the remote side (it's still on my network) to rule out those issues.

I have seen errors like this with OpenVPN & MTU issues.
I'm not saying this is your issue, but I have had some weird issues like it, though only with e.g. the Citrix client.

Turned out to be an MTU issue…
The lines below reduce the OpenVPN packet (MTU) size to 1400 bytes.

fragment 1400
mssfix 1400

They MUST be put in the server config, and in EVERY client config used to connect to that specific OpenVPN server.
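As a sketch of where those two lines sit, assuming a routed tun/UDP setup (everything here except the last two lines is illustrative context, not part of the fix):

```
# server.conf  (the same two lines go in every client .ovpn/.conf)
dev tun
proto udp

# Split large encapsulated packets and clamp TCP MSS so that
# tunneled packets fit a 1400-byte path MTU:
fragment 1400
mssfix 1400
```

Note that fragment only works with UDP transport; with proto tcp, only mssfix applies.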

And watch out…
You should apply it to the remote clients first (you have one shot), as a client will not connect to the server until the server has the same custom settings too.

If it solves your issue, you could try adjusting the 1400 upwards to find the largest working VPN tunnel MTU.

Hint: For chasing OpenVPN MTU issues

Thank you for your reply! I have seen that article and played with those settings, but they do not seem to be a contributing factor as far as I can tell; even with those options on both clients, it doesn't work.

What's interesting is that I can see data moving on the local interface, and I see it incoming on the remote interface, BUT the VPN tunnel on each side doesn't show data moving, and now I get the error shown here. Sigh. It's never ending. lol
Passive Side

I’m still a TrueNAS beginner.
I haven’t even tried to replicate yet …

But can you ping your remote site from your "local TrueNAS" with a 1500-byte packet, like this?

ping -f <remote ip> -l 1500

Hmm …
Actually it seems that the above page is wrong about the "don't fragment" option and the packet length (on Linux, at least)…

Try

ping -Mdo -s1472 <remote ip>    -  Works here
ping -Mdo -s1473 <remote ip>    -  Fails here

1500 - 20 (IP header) - 8 (ICMP header) = 1472 bytes of payload
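The header arithmetic generalizes to any tunnel MTU; a quick sketch in Python (the function name is mine, not from any tool):

```python
# Largest ICMP echo payload that fits in a given link or tunnel MTU:
# the MTU minus the IPv4 header (20 bytes) minus the ICMP header (8 bytes).
IPV4_HEADER = 20
ICMP_HEADER = 8

def max_icmp_payload(mtu: int) -> int:
    return mtu - IPV4_HEADER - ICMP_HEADER

print(max_icmp_payload(1500))  # 1472, matching the working -s1472 above
print(max_icmp_payload(1400))  # 1372 for a 1400-byte tunnel MTU
```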

Edit:
Are you using jumbo (9K) frames at either site??

Thanks for the reply again! See the more recent screenshots; different errors. lol

I have also attached a screenshot of my custom options on both sides. Jumbo frames are not enabled anywhere, to keep life uncomplicated.

In those two screenshots it says it's running, but it's not actually; no data is moving. It's odd behavior.



[Screenshot: task shows as running, but no data is moving]

Did you try to find the max size you can ping (from both server ends)??

ping -Mdo -s1472 <remote ip>    -  Works here (local lan)

Re: MTU
I still think this could be an MTU issue…
Even though you're right that it's strange that it "seems to run".

I have read through the options:
https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage

fragment & mssfix are the best way to deal with MTU issues.

For a test I'd try setting both to 1200 and, if successful,
work my way up towards 1500.

If 1200 doesn't fix it, I'd agree that it might not be MTU related,
and I would bring the box back to the local LAN to verify that it still works locally.

Well… I sorted it out, and it's on me. I feel like a bit of an idiot, but hopefully this will help someone else in the future.

Because I am testing this within my own network, I had turned off "Block private networks and loopback addresses" and "Block bogon networks" in pfSense on the "remote" WAN interface for testing, but seemingly forgot to do the same on the local side. Ugh.

So now it works without any extra options for OpenVPN. I will take it to its offsite home later this week, and then we'll see what happens. At that point I may need to revisit the MTU settings.

1:
pfSense logs must have been “screaming” at you.

2:
I doubt you needed to disable bogons on the pfSense WAN, just private networks (RFC 1918).

But I'm glad it worked out for you,
and thanks for reporting the solution.

The OpenVPN fixes above were actually taken from my pfSense production environment.