HOWTO: Leverage mbuffer and Netcat for low-powered systems (command-line)

Let’s face it: Some NAS systems are under-powered, yet serve a vital “storage-only” role, which does not require much in the way of CPU and RAM. :frowning_face:

Let’s also face it: Encrypting and decrypting a stream over your own local and secure network is redundant, and needlessly cooks your dual-core Celeron. :angry:

Let’s also also face it: OpenSSH has removed the “none” cipher from its default build options. :face_with_symbols_over_mouth:

Let’s also also also also face it: Sometimes doing something in the command-line wins you “nerd points” among your peers. :nerd_face:

With a few simple configurations and built-in commands, whether you use Core or SCALE, you can leverage SSH + mbuffer + Netcat to replicate large datasets over your local network, even if one or both systems are “under-powered”. This can be useful for preparing a “dedicated backup server” with the initial (and largest) replication locally, before moving it offsite or placing it into monthly cold storage.

The problem

Pure SSH

flowchart LR

pc1[["Main Server\n(weak)"]]
pc2[["Backup Server\n(weak)"]]

proc1(["encrypt"])
proc2(["decrypt"])

pc1 --> proc1 -->|"over the\nnetwork"| proc2 --> pc2
pc1 -...-|"authentication"| pc2

The solution

SSH authentication + mbuffer and Netcat

flowchart LR

pc1[["Main Server\n(weak)"]]
pc2[["Backup Server\n(weak)"]]

pc1 -->|"over the\nnetwork"| pc2
pc1 -...-|"authentication"| pc2

This concludes the guide! :wave: You should be able to extrapolate everything from the above charts.

You can keep reading if you like.

For the following examples, substitute your own addresses for the Main server and the Backup server wherever a host is referenced.

The Main server has a pool named “mainpool” and a pseudo-root dataset named “myrootdata”.

The Backup server has a pool named “backpool”.

Preparing the Backup server

Make sure the following prerequisites are met for the Backup server:

  • The SSH service is enabled
  • You allow “password login” for the root user over SSH. (This can be disabled later.)

Preparing the Main server

Make sure the following prerequisites are met for the Main server:

  • The SSH service is enabled
  • You are able to log in via SSH with a user of your choice
  • Your user has an .ssh directory with 700 permissions.

With your user of choice (preferably root or an “admin” account, since I will omit sudo from any and all commands):

  • Login to the Main server via SSH
  • Generate a keypair
ssh-keygen -f $HOME/.ssh/id_weaksauce
  • Send the public key to the Backup server
ssh-copy-id -i $HOME/.ssh/id_weaksauce root@
  • Test your ability for passwordless SSH login
ssh -i $HOME/.ssh/id_weaksauce root@

The replication

From within an SSH session on your Main server, first start a tmux session, so that you do not need to keep the terminal window open at all times.

tmux new -s weakness

Now, with the power of passwordless authentication, tell the Backup server to listen on the local network (Netcat) for an incoming connection, using performant values of mbuffer (which you can adjust based on your own research.)

ssh -n -i $HOME/.ssh/id_weaksauce root@ "nc -l 2525 | mbuffer -s 1M -m 2G | zfs recv -F -s backpool/myrootdata" &

:information_source: It’s easy to miss the single trailing & at the end of the command. This is important. Do not forget it.

:information_source: The Netcat listener on the Backup server, which you started remotely from the Main server, will automatically terminate upon completion of the replication stream.
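If you want to confirm the listener actually started before you begin sending, a quick check might look like the following (a sketch: `<backup-ip>` is a placeholder for your Backup server’s address, and `ss` assumes a Linux/SCALE target; on Core/FreeBSD, use `sockstat -4 -l | grep 2525` instead):

```shell
# On the Main server: confirm the backgrounded ssh job is still running
jobs

# Ask the Backup server whether anything is listening on port 2525
ssh -i $HOME/.ssh/id_weaksauce root@<backup-ip> "ss -ltn | grep 2525"
```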

:information_source: The port to listen on, 2525, sounds cool. “In the year 2525!” :musical_note: There’s no technical reason for this.

:information_source: Change the values, accordingly. I find -s 1M and -m 2G work well for my usage.

:information_source: A “resume token” will be generated on the Backup server, which allows you to resume an interrupted replication. How to leverage this is beyond the scope of this guide.
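For the curious: that token is stored in the `receive_resume_token` property of the partially received dataset, and can be handed to `zfs send -t` to resume. A quick way to peek at it (commands are a sketch; run the first on the Backup server, and note `<token>` is a placeholder):

```shell
# On the Backup server: print the resume token of an interrupted receive
zfs get -H -o value receive_resume_token backpool/myrootdata

# On the Main server: resume sending from that token
# zfs send -t <token> | mbuffer -s 1M -m 2G | nc -w 30 <backup-ip> 2525
```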

Still in your tmux session, create a recursive snapshot to use for this one-time migration.

zfs snap -r mainpool/myrootdata@migrate

Still in your tmux session, send the entire replication stream, with all properties, datasets, volumes, snapshots, and clones. This will replicate everything.

zfs send -R -w mainpool/myrootdata@migrate | mbuffer -s 1M -m 2G | nc -w 30 2525

You should soon see a progress readout from mbuffer, as the stream is sent over the network without encryption/decryption, using mbuffer to (hopefully) maintain a stable performance.

:information_source: You can gauge the progress based on the “used” size of the entire pseudo-root dataset, and how much has transferred in the progress monitor.
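To put numbers to that, you can compare the size of the source against what has landed on the target so far. A minimal sketch (run from the Main server; `<backup-ip>` is a placeholder):

```shell
# Total size to be sent (Main server)
zfs list -o name,used,refer mainpool/myrootdata

# Amount received so far (Backup server)
ssh -i $HOME/.ssh/id_weaksauce root@<backup-ip> "zfs list -o name,used backpool/myrootdata"
```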

To safely exit the tmux session, issue CTRL + B, then press D.

To list active tmux sessions:

tmux ls

To re-attach to the tmux session:

tmux a -t weakness

The aftermath

Once the replication is complete, the Backup server’s Netcat listener will have automatically terminated itself.

Be sure to check the Backup server’s pool, datasets, and snapshots. Try to access the data.
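A minimal sanity check on the Backup server might look like this (a sketch; datasets received with -F may arrive unmounted, hence the explicit mount):

```shell
# List the replicated datasets and their snapshots
zfs list -r backpool/myrootdata
zfs list -t snapshot -r backpool/myrootdata

# Spot-check the data by mounting the pseudo-root and reading a directory
zfs mount backpool/myrootdata
ls /mnt/backpool/myrootdata
```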

:skull_and_crossbones: :biohazard: :stop_sign:
This guide is 99.9999% likely filled with errors and typos.

Rather than ridicule me and inform me how incompetent I am, I would prefer if you ridicule me and inform me how incompetent I am, and point out the mistakes so that I can correct them.


Apologies, according to my spouse, I am quite dense, so please bear with me.

Isn’t this very similar to SSH+Netcat built into the GUI? I.e., the delta is that you’re eliminating the SSH tunnel encryption but still using SSH authentication?

I wonder how much SSH adds in terms of TCP / IP overhead vs. a non-encrypted stream. Especially for an encrypted pool, running a SSH tunnel inside a wire guard tunnel to deliver encrypted content to a remote server seems like a lot of overhead for relatively little gain… so I can see the potential benefit but would love to know how much it helps re: throughput.

Yes, but using the command-line offers some additional benefits:

  • It empowers users to learn how to work with the command-line and demystifies what happens under the GUI.
  • The same steps can be used for any ZFS server, even if the sender is not TrueNAS.

Quite a good deal for low-end systems, such as my poor baby dual-core Celeron that serves as a dedicated backup server. Even more of a difference if both sides are under-powered.

As you noted, if the datasets are already encrypted (or need further encryption for non-raw streams), it adds yet another layer of redundant overhead.

For a local network? It’s overkill.

The difference will probably be even more noticeable if you’re exclusively using SSDs / NVMe drives and network interfaces beyond 1 GbE.


I’m a bit confused though… the replication task setup GUI help window suggests that the SSH+NETCAT option results in an unencrypted datastream just like your approach, or am I missing something?


You asked if it was similar to using the GUI’s built-in SSH+Netcat, and I said “yes”.

I never said they were fundamentally different. (I just added context for some benefits of command-line over GUI.)


Does the GUI use mbuffer? Is mbuffer just used to provide a larger buffer between a lumpy producer and its consumer? Is it necessary?

Missed that part and got even more confused as I perused the help menu. Apologies! :innocent:

Agree that it’s good to know how the stuff under the hood works. After all, when issues arise, we have to know a lot more about ZFS for troubleshooting than the GUI cares to cover.


Not sure. :thinking:

I can only speak for myself (and reading the experiences of others), that it has produced a more consistent speed across the wire[1], and I would assume this is especially useful if sending from an NVMe/SSD pool to an HDD pool.

But TrueNAS may in fact use mbuffer (or some other buffering mechanism) under the hood.

  1. In my own usage, it’s not even a “local” network, but more specifically a direct connection of a single Cat6A ethernet cable from one server’s port to the other server’s port. I also enable “MTU 9000”, since it only involves the two servers directly via dedicated NICs. ↩︎

When I did a local SSH+Netcat transfer between local servers, the transfer speed was 340–400 MB/s, even as standard SMB transfers between an NVMe disk on 10 GbE and the NAS lagged behind at 250 MB/s, max. Much of that content was compressed already, so I doubt that compression had any impact. Goes to show how efficient the SSH+Netcat replication task is compared to standard SMB, etc.


Netcat wouldn’t be needed (theoretically), if OpenSSH was still built with the “none” cipher included by default.

In such a world, you could specify “none” as the cipher, and if both servers support it? It’s essentially the same as authenticating with SSH + transferring via Netcat.
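In such a hypothetical world, the whole pipeline would collapse into a single command. A sketch, assuming both ends run a custom OpenSSH build with the "none" cipher enabled (stock builds will reject `-c none`; `<backup-ip>` is a placeholder):

```shell
# Hypothetical: authenticated but unencrypted transport, no Netcat needed
zfs send -R -w mainpool/myrootdata@migrate | \
  ssh -c none -i $HOME/.ssh/id_weaksauce root@<backup-ip> \
  "zfs recv -F -s backpool/myrootdata"
```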

Yes, when replicating to my potato-based backup server, AES would limit me to 30 MB/s :wink: