Last year I bought a MikroTik RDS2216. It marries cheap, fast storage with cheap, fast networking, particularly for the SMB and (fancy) homelab space. But unless you’re into BTRFS or MDRAID, the storage side of RouterOS’s house leaves a little to be desired. As MikroTik fleshes out containers (a.k.a. /app), the RDS’s purpose (and their vision for it) is beginning to make more sense, particularly as a small-office appliance that can do pretty much everything (routing, switching, storage server, and onsite cloud-like services). A VAR managing a business’s network could plop one of these boxes in and set up All the Things™ on a single machine (NFS/SMB shares, NextCloud/SyncThing, HomeAssistant/NodeRed, Jellyfin/Plex, RoundCube, WordPress, etc.)
But in the spirit of BlendTec, I ask “Will it TrueNAS?”
The answer is: Yes. Twice. Mostly.
<background>
I have been running FreeNAS/TrueNAS in VMs on ESXi on two Mac Pro 2009/2012s for years, meeting both home lab and small business needs. Those machines have chugged along as NFS stores for their physical hosts, and they run in two different locations, replicating their stores to each other.
My low-power M1 Macs, along with the announcements of ROSE and the RDS2216, drove me to build an ARM64 Ampere server to test the storage features of RouterOS. Broadcom drove me to explore Proxmox. And my discovery that someone had compiled TrueNAS for ARM64 led to this experiment.
</background>
The test setup:
- The RDS is connected to everything else by way of a MikroTik CRS520 (100Gb switch)
- My Proxmox hosts are outfitted with 25 and 100G NICs
- My Mac Studio connects through a 40G NIC in a Thunderbolt enclosure
Expectations for throughput:
- Each NVMe bay on the RDS is limited to 2x8 GT/s of PCIe bandwidth, or ~16Gbps
- I’m testing with a dozen Samsung 960GB NVMe drives (MZQLW960HMJP)
- On-device tests for the Samsung drives max out around 10Gbps
- iperf3 tests from my Mac to each host and TrueNAS VM max out around 20Gbps
- Tests from the Mac to the ROSE NFS server peak around 9Gbps reads, 6Gbps writes
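That per-bay ceiling is easy to sanity-check: two PCIe 3.0 lanes at 8 GT/s each give 16 GT/s raw, and 128b/130b line encoding leaves nearly all of it usable:

```shell
# Two PCIe 3.0 lanes at 8 GT/s each; 128b/130b encoding means
# 128 of every 130 bits on the wire carry data.
awk 'BEGIN { printf "%.2f Gbps usable\n", 2 * 8 * 128 / 130 }'
# → 15.75 Gbps usable
```

So the drives themselves (~10Gbps each on-device), not the slots, are the first bottleneck.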
TrueNAS as VM on RouterOS 7
TL;DR: I used the qemu-arm Docker container and RouterOS 7.22’s custom “/app” feature. It works. You can pass the raw NVMe disks straight through to the VM (with some command line ninja skills), create your ZFS pool, and on-machine results looked pretty good.
I first gave the VM 8 cores, 16GB of RAM, and the URL to the ARM64 ISO of TrueNAS SCALE 24.04. That part was relatively painless: I got through the whole install, and the VM booted up.
Next I tried passing through drives. You have to do some command-line tomfoolery to get the drives recognized and supported by TrueNAS, including manually setting serial numbers, but it saw them and allowed me to build pools.
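For the curious, the passthrough flags ended up looking roughly like this (the device paths and serial numbers below are placeholders, not my exact invocation). TrueNAS ignores disks that don’t report a serial number, so each passed-through drive gets one assigned explicitly:

```shell
#!/bin/sh
# Build qemu disk-passthrough flags for a few raw NVMe devices.
# Device paths and serials are placeholders; adjust for your drives.
ARGS=""
i=1
for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    ARGS="$ARGS -drive file=$dev,format=raw,if=none,cache=none,id=disk$i"
    ARGS="$ARGS -device virtio-blk-pci,drive=disk$i,serial=RDSNVME00$i"
    i=$((i + 1))
done
# Echo instead of launching qemu here; append $ARGS to your usual command line.
echo "qemu-system-aarch64 ... $ARGS"
```

The `serial=` property on `virtio-blk-pci` is what makes the disks show up properly in TrueNAS’s disk list.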
I tried a couple of different ways to get the network into the VM (bridging, NAT, routing). Whichever way you go, you end up with 2-3Gbps writes and 6-7Gbps reads from TrueNAS to external clients. (I’m not sure on-machine containers in RouterOS would fare any better, since you’d still have to go through the RouterOS networking stack.)
Nonetheless, if you have a 10G network at home, or have less of a need for high-throughput data and more so for everything else ZFS and TrueNAS bring to the table, it’s possible to do it all-in-one.
TrueNAS as Front End to RDS Using NVMe over TCP
TL;DR: It works very well, especially with a fast SLOG. Single-host transfers hit around 10Gbps in both directions. NVMe-oF connection/reconnection has to be configured in the shell, and ZFS doesn’t import the pools automatically at boot-up; that might just be a boot-order thing.
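A minimal sketch of that shell-side setup, assuming nvme-cli on the initiator (the target address, NQN, and pool name below are placeholders); the commands are echoed as a dry run:

```shell
#!/bin/sh
# NVMe/TCP initiator setup, sketched as a dry run. Placeholder values.
TARGET=10.99.0.1                      # RDS2216 NVMe/TCP target address
NQN=nqn.2024-01.example:rds-pool      # subsystem NQN exported by the RDS

CONNECT_CMD="nvme connect -t tcp -a $TARGET -s 4420 -n $NQN"
# The pool doesn't come back on its own after a reboot, so re-import it
# once the NVMe namespaces have appeared:
IMPORT_CMD="zpool import -d /dev/disk/by-id tank"

echo "$CONNECT_CMD"
echo "$IMPORT_CMD"
```

On a real system you’d run the two commands directly, probably from a post-init script so the import happens only after the fabric connects.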
For this test I set up connections to TrueNAS VM’s on three Proxmox hosts:
- ARM64 Ampere box with 100G Mellanox CX4 NIC, 128GB RAM, 80 cores @ 3GHz
- MinisForum MS-01 with 100G Mellanox CX4 NIC, 32GB RAM, 16 hybrid cores
- Mac Pro 2009 with Chelsio T6225 25G NIC, 128GB RAM, 12 cores @ 3.47GHz
The TrueNAS VMs are as follows:
- All have NIC virtual functions passed through; bridged networking takes a 10-15% hit
- Ampere VM has 8 cores and 48GB of RAM
- Mac Pro VM has 6 cores and 48GB of RAM
- MS-01 VM has 6 cores and 16GB of RAM
- Ampere VM also has an onboard 7TB Micron NVMe drive passed through for use as a SLOG
For simplicity’s sake I tested by copying a bunch of ISOs I had sitting around. I have also run a bunch of fio tests, which led to the idea of using a SLOG, particularly for small reads and writes.
The end result: SMB transfers to/from my M1 Mac Studio peak at about 10Gbps in either direction. Blackmagic Design’s disk speed test showed similar results. Multiple simultaneous copies from the Mac to an SMB share on each VM brought peak interface throughput to about 24Gbps, which is a little better than the iperf3 tests. I haven’t yet run tests with Proxmox as an NFS/iSCSI/NVMe client of its own TrueNAS VM.
As mentioned, I did run some fio tests, both from within the TrueNAS VM and from macOS over NFS. In one iteration, I had each TrueNAS VM “mount” all 10 of the NVMe drives, carved up into three pools: a stripe, a striped mirror, and a Z1 or Z2 array. Then I’d import/export the pools and test inside each VM to see which combination of pool(s) and host(s) did best.
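For reference, the small-block sync-write runs used job files along the lines of this one (the filename path is a placeholder for a dataset on the pool under test); `sync=1` keeps the ZIL, and therefore the SLOG, on the hot path:

```shell
# Write out a 4K random sync-write fio job; vary bs= up to 32k for the
# other runs. The filename path is a placeholder.
cat > slog-test.fio <<'EOF'
[global]
ioengine=libaio
direct=1
sync=1
time_based
runtime=30

[randwrite-4k]
rw=randwrite
bs=4k
size=4g
filename=/mnt/tank/fio-test
EOF
cat slog-test.fio
```

Run it with `fio slog-test.fio` and compare IOPS with and without the log vdev attached (from macOS you’d swap `ioengine` for `posixaio`).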
- As expected, most of the reads came right out of TrueNAS VM’s RAM
- Writes, especially at 4K-32K block sizes, benefit enormously from an on-host NVMe SLOG
- The SLOG was only attached to one pool, but IOPS for other pools improved too
- TrueNAS load-balanced the IOPS equally across clients, even though one test ran on the SLOG-backed pool and the other on a different pool
- Onboard SATA drives weren’t helpful as a SLOG (some made IOPS worse).
- Performance was comparable to, if not better than, a pool of onboard SATA SSDs (the ARM64 host has 12 of them)
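Attaching (or detaching) the SLOG between runs is a one-liner; the pool and device names here are placeholders, echoed as a dry run:

```shell
#!/bin/sh
# Add/remove a dedicated log vdev. Placeholder names; dry run via echo.
ADD_CMD="zpool add tank log /dev/nvme0n1"
REMOVE_CMD="zpool remove tank /dev/nvme0n1"
echo "$ADD_CMD"       # device then appears under a "logs" section in zpool status
echo "$REMOVE_CMD"
```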
Other Observations, Use Cases, and Summary
At present, running TrueNAS on the RDS2216 is possible, but not very performant. If you want a target for backup/replication, or home-lab-level performance for file sharing and whatnot, the QEMU/container approach will work fine, albeit with some complexities. Passing the drives through when the container starts can be a bit wonky, and RouterOS’s ROSE and container/app features are changing rapidly.
Running TrueNAS as a physical or virtual machine in front of an RDS2216 is where I think the rubber meets the road. It’s a great way to add storage to an existing TrueNAS installation without cracking open the server case, and it makes migration to larger pools, or replication and backup, much easier. One TrueNAS instance could front two or more RDS2216s if you wanted.
I hope TrueNAS adds NVMe-oF (and iSCSI) initiator support in the GUI.
TrueNAS as a VM allows you to add RAM or CPU cores as needed, as well as migrate to a newer, more capable host. With NVMe-oF targets as your VDEVs and no other hardware passthrough, you should theoretically even be able to live-migrate a TrueNAS instance. That’s especially great for certain HA purposes and for labs.
Another use case in the home/lab space is front-ending any number of lower-capacity, repurposed Linux boxes, with any or all of their drives exported as NVMe-oF or iSCSI targets. Essentially, create a JBOD out of an old desktop or server, export its disks over the network, and go. As long as the CPU can handle the max throughput of its drives over the network, that’s all it has to do. (I picture testing this with a bunch of Raspberry Pi 5s I have outfitted with NVMe drives and 2.5G USB adapters.)
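On the JBOD side, the Linux kernel’s nvmet configfs interface is enough to export a disk over NVMe/TCP with no daemon at all. This sketch uses placeholder device, NQN, and address values, and writes into a local demo directory by default so you can inspect the layout; point CFG at /sys/kernel/config/nvmet (as root, with the nvmet and nvmet-tcp modules loaded) to do it for real:

```shell
#!/bin/sh
# Export one block device as an NVMe/TCP target via nvmet configfs.
# All names, paths, and addresses are placeholders.
CFG=${CFG:-./nvmet-demo}    # real path: /sys/kernel/config/nvmet
NQN=nqn.2024-01.example:jbod-disk1

# Create the subsystem and one namespace backed by the disk.
mkdir -p "$CFG/subsystems/$NQN/namespaces/1"
echo 1        > "$CFG/subsystems/$NQN/attr_allow_any_host"
echo /dev/sda > "$CFG/subsystems/$NQN/namespaces/1/device_path"
echo 1        > "$CFG/subsystems/$NQN/namespaces/1/enable"

# Create a TCP port listening on 4420.
mkdir -p "$CFG/ports/1/subsystems"
echo tcp     > "$CFG/ports/1/addr_trtype"
echo ipv4    > "$CFG/ports/1/addr_adrfam"
echo 0.0.0.0 > "$CFG/ports/1/addr_traddr"
echo 4420    > "$CFG/ports/1/addr_trsvcid"

# Publishing the subsystem = symlinking it into the port.
ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/$NQN"
```

From there, any initiator (a TrueNAS front end included) can `nvme connect` to the box and treat the disk as local.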
I hope somebody finds this writeup useful. I’m still testing, still trying to push limits. Thank you to the TrueNAS team for an amazing product, and thanks to @Joel0 for building the ARM64 version.