TrueNAS on ARM - Now Available

One could stick a Mac mini inside a Supermicro JBOD chassis and hot glue the cables to the enclosure :slight_smile:

Definitely. Plenty.

The driving force here, however, is pure power savings for users in high cost-of-living areas. I pay $0.52/kWh here, and if I could shave 40 watts (~$200/year) off my Supermicro server by swapping an Intel C262-based motherboard+CPU+RAM for a Mac mini, I would jump head first into the risk of any possible shenanigans.
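For the curious, here is the arithmetic behind that estimate, as a quick sketch assuming 24/7 uptime at the quoted rate:

```python
# Back-of-the-envelope check of the savings estimate above, assuming
# the server runs 24/7 at the quoted $0.52/kWh rate.
watts_saved = 40
hours_per_year = 24 * 365            # 8760
price_per_kwh = 0.52                 # USD, quoted rate

kwh_per_year = watts_saved * hours_per_year / 1000   # ~350 kWh
print(f"~${kwh_per_year * price_per_kwh:.0f}/year")  # ~$182, roughly the ~$200 cited
```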

This is correct.

Sleep/wake is a universally hard problem to solve; I'd just disable sleep, ASPM, and anything else that adds glitch risk merely to save a couple of cents in power.

There would be additional latency: Thunderbolt adds 2-5 microseconds each way versus a direct PCIe connection, so double that for a round trip.

The internal crossbars behave just like regular PCIe switches, so there is no extra risk here; PCIe switches are a solved problem.

Any retimers in the cable itself will add under 1 microsecond, and I'd argue a short cable without retimers should be used.

On the software side, Thunderbolt controllers often limit the PCIe maximum payload size to under 512 B, so some performance loss is expected, but that is irrelevant with rotational drives. Thunderbolt also does not allow peer-to-peer DMA, but that is not used unless you are running NVMe disks.
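If you want to check what payload size a given link actually negotiated, here is a minimal sketch that parses `lspci -vv` output; the `01:00.0` address is a placeholder for your HBA:

```python
import re
import subprocess

# Sketch: report the PCIe Max Payload Size lspci sees for one device.
# "01:00.0" is a placeholder address; find your HBA with `lspci` first,
# and run as root so lspci can read the full config space.
out = subprocess.run(
    ["lspci", "-vv", "-s", "01:00.0"],
    capture_output=True, text=True, check=True,
).stdout

# DevCap shows the supported payload size, DevCtl the negotiated one,
# e.g. "DevCtl: ... MaxPayload 256 bytes, MaxReadReq 512 bytes".
for line in out.splitlines():
    if re.search(r"MaxPayload \d+ bytes", line):
        print(line.strip())
```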

The only issue I can think of that may bite with old HBAs is hot plug: Thunderbolt initialization emulates PCIe hot plug, and old HBAs have no idea how to deal with this. So use new ones, which have a modern PCIe front end.

Having written this, I very much feel this is a horrible idea. Let's scrap it and wait for actual low-power ARM hardware.

This appears to be a dead end: Parallels does not seem to allow passthrough of an arbitrary PCIe device: KB Parallels: Firewire/Thunderbolt device is not working in Parallels Desktop


Careful, that article refers to an edition of Parallels from some four years ago.

There is this series of posts from 2023 suggesting that external Thunderbolt-connected storage is supported (but likely only as DAS, not via an HBA). It links back to the 2021 article you referenced as well.

Sadly, I fear you're right though. An HBA running over Thunderbolt is unlikely to be "seen" or usable by Parallels.

I just purchased an ITX+. Can you tell me if I can use TrueNAS as the main way to manage my disks? Or may there be some serious problems, and I should use the same omw?

I’m so excited this is a thing. I don’t know how I missed it six months ago.

I’ve got an Ampere box I built (80 Cores, 3GHz, 128GB of RAM, bunch-o-drives, 100G NIC, etc.) and I’ve been testing instances of MikroTik’s CHR on it for compute-intensive routing (functions that L3HW offload can’t handle).

I’ve also got an RDS2216 that I’ve been experimenting with. RouterOS’s ROSE package is nice, but their management interface leaves a bit to be desired. And they don’t have ZFS.

Today's experiment (after getting a TrueNAS VM up and running under Proxmox, itself also "experimental" on my Ampere board) was to have the RDS2216 export its drives to TrueNAS via NVMe-over-TCP, then use TrueNAS to turn those drives into VDEVs along with the onboard SATA and NVMe drives. All in all, the TrueNAS VM has 24 disks available to carve into VDEVs, and it all works pretty well.
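In case anyone wants to reproduce the NVMe-over-TCP half of this, here is a minimal sketch using nvme-cli from inside the TrueNAS VM; the target address and NQN below are placeholders for whatever your exporter actually advertises:

```python
import subprocess

# Sketch of attaching an NVMe-over-TCP export with nvme-cli.
# Address and NQN are placeholders; substitute the values your
# exporter (here, the RDS2216) actually advertises.
target_addr = "192.0.2.10"                      # placeholder IP
target_nqn = "nqn.2023-01.example:rds2216-ns"   # placeholder NQN

# Ask the target what subsystems it exports, then connect to one.
subprocess.run(["nvme", "discover", "-t", "tcp",
                "-a", target_addr, "-s", "4420"], check=True)
subprocess.run(["nvme", "connect", "-t", "tcp",
                "-a", target_addr, "-s", "4420",
                "-n", target_nqn], check=True)

# The namespaces appear as /dev/nvmeXnY, ready to become VDEV members.
subprocess.run(["nvme", "list"], check=True)
```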

My Mac Studio has a 40G NIC (via Thunderbolt) connected to the CRS520 switch. The best results I’ve gotten are ~18Gbps via NFS running on the RDS2216 to the Mac and other hosts. The next best results are SMB & NFS to/from the TrueNAS VM, where I get 8-10Gbps read and write (hitting an unknown bottleneck, possibly in virtualization).

The testing is a combination of Blackmagic Design's Disk Speed Test and fio from the command line, testing 4K, 32K, 64K, and 1M blocks against a 4GB file. With 32GB of RAM, the reads are almost always served from memory, so they all look the same regardless of the drives I'm hitting (internal or NVMe-oF).
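For reference, here is a sketch of the kind of fio run described; the file path is a placeholder, and --direct=1 is added so reads are not just served at RAM speed:

```python
import subprocess

# Illustrative fio runs matching the tests described above: sequential
# reads at several block sizes against a 4 GB test file. The path is a
# placeholder; point it at the dataset under test. --direct=1 bypasses
# the page cache so reads hit the actual drives.
test_file = "/mnt/tank/fio-test.bin"  # placeholder path

for bs in ("4k", "32k", "64k", "1m"):
    subprocess.run([
        "fio",
        f"--name=seqread-{bs}",
        f"--filename={test_file}",
        "--rw=read",          # swap to --rw=write for the write pass
        f"--bs={bs}",
        "--size=4g",
        "--ioengine=libaio",
        "--direct=1",
    ], check=True)
```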

Next steps are testing throughput to my Intel Proxmox hosts, since this setup will eventually become the NFS share for my VM infrastructure.


A Rockchip RK3588 NAS board would be interesting if you had a bunch of drives on hand, or sold a kidney to buy them.

I installed TrueNAS 26.04 on an Orange Pi 6 with 32GB of RAM. Everything works fine, except it always downloads apps for AMD64 instead of ARM64. When I install via YAML and force the appropriate image, the apps launch and work. Is there any way to configure this system (26.04.0-MASTER-20251221) to download the appropriate images for apps with good ARM support?

Custom apps would be the only way. TrueNAS on ARM is unsupported and experimental…

…as is TrueNAS 26.04 Nightly.


Regarding the dreaded app incompatibility issue: I imagine a kernel driver or a filesystem override could be used to match "platform: linux/amd64" and patch it on the fly in RAM to "platform: linux/arm64". I know there are ways to achieve this on Linux, or even via a UEFI driver, though I haven't done this yet; I am still awaiting the arrival of my NAS hardware. Could this be something you would want to look into @Joel0?
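As a far simpler userspace take on the same idea, here is a sketch that rewrites the platform pin in an app's YAML before install rather than patching it in RAM; the file path is hypothetical:

```python
import pathlib

# Userspace sketch of the same idea: rewrite the platform pin in an
# app's YAML before install, instead of patching it in memory with a
# kernel driver. The path below is hypothetical; point it at the
# exported app config.
app_yaml = pathlib.Path("app-config.yaml")  # hypothetical file

text = app_yaml.read_text()
patched = text.replace("platform: linux/amd64", "platform: linux/arm64")
if patched != text:
    app_yaml.write_text(patched)
    print("patched platform pin to linux/arm64")
else:
    print("no amd64 platform pin found; nothing to do")
```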


I've been a massive fan of TrueNAS for a long time, and recently I had to move countries, from NZ to Canada.

In the move I lost most of my hardware, which hurt, but I kept three of my 4TB drives and my Mac mini M4. After a bit of work I discovered that this software runs smoothly under UTM (virtualization software), which meant that with a couple of cores dedicated to the TrueNAS VM, I could have my full old setup while maintaining fair performance.

I then set up some jails, had a couple of other servers running in the background, and gave it full network access with its own dedicated IP address. Everything worked a treat!

I definitely found better results on the ARM version than the x86 version, given the extra work the Mac would have had to do to emulate x86.

Highly recommend this! And I’m really glad work has been going into this. Cheers!


What drive bay are you using to handle the drives’ connectivity to the Mac? Personally I’ve been wary of USB-based JBOD assemblies due to random disconnects I’ve seen over the years, but modern USB4/Thunderbolt options paired with a small host like a Mac Mini (or similar) could prove useful as a low-cost backup/offsite option to larger boxes.

USB and Thunderbolt are in the unsupported category. If it works, that's great, but if you have problems, it doesn't help. Trouble tickets will most likely get closed just for having storage attached those ways.

And, at present, so is running an ARM64 build as a VM on a Mac mini. We're way past "officially" supported, bordering on insane at this point. And yet, it's fun to see what can be done. (Jurassic Park vibes…)

I presently have two beefy hosts in two different data centers replicating to each other. A third replication target built from lower-cost or on-hand hardware (probably also doing something else, like running a locally hosted website, or even an LLM in the case of M-series Macs) would make a fine storage device number 3 in a 3-2-1 backup config, and that's right up most DIYers' alley.

Unfortunately they were USB, but I bought a separate USB-C to USB-A cable for each drive, which, tbh, was much more consistent than a hub. I didn't really run into any issues: once it was running it worked without a hitch, though whether that was happy luck or something you can count on, I'm not really sure.

This would actually be really fantastic as a container on MikroTik's RDS server: a 16-core ARM64 box (AL73400) with 32GB of RAM, gobs of connectivity, and 20 SSD bays.

I got it loaded as a VM inside a QEMU container on the RDS, and performance was pretty poor. I was able to map the drives through to it, but the networking was goofy (2-3 bridges), so throughput was a measly 2-3Gbps IIRC. If I knew how to get QEMU's networking to pass through more directly, I might be able to improve that.

As a 'container' under MikroTik, or as an app? Building an app gives you better device mapping options until at least RouterOS 7.22, maybe 7.23. It seems they add features to apps first, then maybe backport them to 'bare' containers.

I did both. The bridging under /app was a little better/cleaner, and based on environment variables, you can get the qemu-arm64 container to use qemu/passt to pass through what the container was getting from the bridge, but it wasn’t very reliable. If/when it did get connected, TrueNAS worked fine.

Is qemu-arm64 emulating on the RDS? I don't have an RDS or CCR2116 on my bench; ARM64 binaries should run natively on there.

It runs natively.