TerraMaster F8 SSD Plus - TrueNAS Install Log

I attempted and succeeded in installing TrueNAS on my TerraMaster F8 SSD Plus. This post details my journey and will hopefully help others who endeavour to do the same! I had never used TrueNAS before, so my experience is that of a complete beginner to this software.

Many reviews of the TerraMaster F8 SSD (and its Plus variant) mention TrueNAS compatibility, yet none show it actually running the software. I am writing this to rectify that: to provide evidence of it running TrueNAS, and to give confidence (or fair warning) to those researching compatibility with this appliance.

Summary

TrueNAS SCALE: Flaky and broken
TrueNAS CORE: Works with caveats
Caveats with CORE: The built-in Aquantia AQC113 ethernet device is unsupported

Hardware

The NAS is a “TerraMaster F8 SSD Plus” with all 8 bays populated: seven 4TB Crucial P3 and P3 Plus drives for the storage pool, and one 250GB Kingston NV2 for the boot pool.

TerraMaster OS

Before attempting any TrueNAS installs, I let the first-party “TOS” installer do its thing. I did not know whether it performed any hardware initialisation, such as firmware updates or BIOS setup, so it seemed wisest to let it run through a complete setup, both to ensure any such initialisation was done and to validate that all the hardware worked properly in a “known-good” environment. I verified everything was in good working order, then removed the internally-mounted TOS USB, wiped the drives with ShredOS, and proceeded with the TrueNAS setup.

Game Start: TrueNAS SCALE
SCALE appears to be the up-and-coming preferred variant of TrueNAS: it is placed more prominently on the TrueNAS website and is labeled as “best for new deployments,” so it was the obvious choice. I downloaded TrueNAS-SCALE-24.04.2.2.iso as the most recent stable release and got to installing.

To get the USB to attempt booting, I needed to change these BIOS settings (press ESC at the TerraMaster / American Megatrends logo to enter setup):

  • Security → Secure Boot → Secure Boot → Disabled
  • Boot → TOS Boot First → Disabled

While I was there I also tweaked a couple other non-required options:

  • Boot → timeout 5s
  • Boot → quiet boot off
  • Advanced → Power Management Features → restore from ac power loss → last state
  • Advanced → CPU Configuration → BIST → Enabled

I saved this as User Defaults, a reasonable checkpoint of settings that I will always want without too many tweaks. I chose Save and Reset, and upon reboot the plugged-in TrueNAS SCALE installer USB was selected automatically and proceeded to boot.

SCALE-UP your problems too!

Once the installer was past GRUB, I was unfortunately greeted with total failure! rcu_preempt was detecting CPU stalls, and most of the time a kernel panic was logged: Kernel panic - not syncing: Timeout: Not all CPUs entered broadcast exception handler. In all cases, the NAS would reboot and try again.

With some amateur research and guesswork, I managed to find a set of BIOS options and kernel command-line options that got the installer to boot most of the time. Each tweak I found would get the boot process a bit further before it crashed with some other issue. I have links in my notes to sources for all of these, but I’m unable to share them due to my forum trust level. Here are some other errors I encountered:

  • nvme pcie bus error severity=corrected type=inaccessible
  • nvme unable to change power state from d3cold to d0, device inaccessible
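For anyone hitting the same wall, a quick way to triage a saved dmesg dump for these signatures is a case-insensitive grep. The sample log below is a stand-in so the snippet is self-contained; point the grep at your real dump instead:

```shell
# Write a small fake dmesg excerpt to make this sketch self-contained;
# in practice you would grep the dump you saved from the installer.
cat > /tmp/dmesg_sample.txt <<'EOF'
[   12.345678] nvme 0000:05:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer
[   13.456789] nvme nvme2: Unable to change power state from D3cold to D0, device inaccessible
[   14.000000] usb 1-2: new high-speed USB device number 3
EOF

# Pull out only the PCIe/NVMe error lines (case-insensitive, extended regex).
grep -Ei 'pcie bus error|unable to change power state' /tmp/dmesg_sample.txt
```

This prints the two error lines and skips the unrelated USB message.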

Eventually I got it to the install UI, and this is what I needed to tweak:

BIOS tweaks:

  • Advanced → CPU Configuration → AP threads Idle Manner → HALT Loop
  • Advanced → CPU Configuration → MachineCheck → Disabled
  • Advanced → CPU Configuration → MonitorMWait → Disabled

Command-line options:

  • noapic - Necessary to bypass rcu timeout
  • pcie_aspm=off - Lets the system continue even with bus errors; seems necessary
  • nvme_core.default_ps_max_latency_us=0 - Recommended by boot log warnings
  • pci=nocrs - Recommended by boot log warnings

I also found a few other command-line options that were offered by the community, but they seemed to have no effect:

  • pcie_port_pm=off
  • processor.max_cstate=0 intel_idle.max_cstate=0 idle=poll
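For one-off boots you can press `e` at the GRUB menu and append these flags to the `linux` line. To persist them on a generic Debian-style system you would add them to GRUB_CMDLINE_LINUX in /etc/default/grub and run update-grub (note: SCALE manages its own boot environments, so hand-editing there may not survive updates). A sketch that works on a throwaway copy of the file so it is safe to run anywhere:

```shell
# Sketch: inject the workaround flags into GRUB_CMDLINE_LINUX.
# Uses a temp copy of the config; on a real system you would edit
# /etc/default/grub as root and then run update-grub.
conf=$(mktemp)
printf 'GRUB_CMDLINE_LINUX=""\n' > "$conf"

flags='noapic pcie_aspm=off nvme_core.default_ps_max_latency_us=0 pci=nocrs'

# Prepend the flags just inside the opening quote of GRUB_CMDLINE_LINUX.
sed -i "s|^GRUB_CMDLINE_LINUX=\"|GRUB_CMDLINE_LINUX=\"$flags |" "$conf"

cat "$conf"   # the flags now appear inside the quotes
```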

The overall theme of these tweaks is disabling various power-management features and subsystems. I hypothesize that there is an incompatibility between the PCIe device tree on this NAS’s special-purpose motherboard and the Linux drivers responsible for managing PCIe connections. I think the CPU stalls were a result of driver bugs in the PCIe subsystem rather than an issue with CPU power management, given that the C-state command-line options had no effect, and neither did any of the CPU power-management options in the BIOS apart from the HALT Loop choice.

Who needs all those drives anyways?

Once I was finally at the setup UI, I found that only four of my eight NVMe drives were appearing. At this point, I gave up tweaking. I collected a dmesg log and tried a final hail-mary of other SCALE versions, TrueNAS-SCALE-23.10.2.iso and TrueNAS-SCALE-24.10-BETA.1.iso, neither of which proved any better even with all the tweaked options.

I decided to be adventurous and go with the other TrueNAS offering, TrueNAS CORE.

Attempt Two: CORE
Pokemon Red vs Blue?

With the latest recommended TrueNAS-13.0-U6.2.iso downloaded and written to my USB, I was hopeful for a different outcome. I was very relieved to see the TrueNAS CORE installer boot without issue, even after restoring my BIOS settings to the User Defaults saved earlier (no HALT Loop needed!). There were no PCIe bus errors in the logs, no stalled CPUs, and no missing drives in the install UI. I installed it to my one-drive boot pool, and upon restart and removal of the installation medium, it actually booted the installed OS!!! I was shocked and grateful that I would not have to settle for that sus TerraMaster OS.

My next task was to start playing with it and get everything set up to hold actual data. However, I hit my next problem: there was no network connection, and the console menu showed zero interfaces available. Huh?? After some research, I found that the Aquantia 10GbE AQC113 in this NAS is not supported by the bundled Aquantia driver. I would very much like to use it, and I did find some community patches to the driver that might support the AQC113, but trying them requires compiling a custom kernel module, and I just wanted things to work well enough. I had a spare 1GbE USB adapter, so I plugged that in and got connected.
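On CORE (FreeBSD), one way to confirm a device has no driver attached is pciconf -lv: unclaimed devices are listed with a noneN label instead of a driver name. A minimal sketch using illustrative sample output (the device/vendor IDs here are hypothetical, not captured from the F8):

```shell
# Hypothetical pciconf -lv style output: "none0" means no driver claimed
# the device, while "em0" is a NIC bound to the em(4) driver.
cat > /tmp/pciconf_sample.txt <<'EOF'
none0@pci0:1:0:0:  class=0x020000 chip=0x04c01d6a rev=0x03 hdr=0x00
em0@pci0:2:0:0:    class=0x020000 chip=0x10d38086 rev=0x00 hdr=0x00
EOF

# List only the unclaimed devices; on a real box: pciconf -lv | grep '^none'
grep '^none' /tmp/pciconf_sample.txt
```

If the AQC113 shows up like this as noneN, the hardware is detected but the driver simply doesn't bind it.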

Once I was finally in the web interface, setup proceeded as I expected from my preparatory reading of the documentation. I set up a pool, configured Samba access, and even gave it its own mailbox so it can yell at me when things break. I could easily saturate the 1GbE adapter with inbound rsync transfers from my previous storage pool, with barely a sweat from the CPU (~8% utilisation).

I do still have one lingering issue: any time I start a Cloud Sync task to back up my storage pool, the network adapter hangs and cuts off connectivity until I unplug and replug it. I’m chalking this up to the realities of a $15 USB ethernet adapter rather than a fault of TrueNAS. For now, I’m without a 3rd backup location, though I hope to try Duplicati via its plugin soon; if all on-device solutions fail, I can easily use a 2nd node to handle cloud backup.

Final Thoughts

I would have preferred TrueNAS SCALE due to my familiarity with Linux; however, I have found that the BSD-based TrueNAS CORE is competent and well-suited to this task. I will likely come back to SCALE in a few years to see whether those boot issues have been resolved, but if not, I will be happy to stay on CORE through to this server’s decommissioning.

4 Likes

My forum trust level has been upgraded, so I can now post the links from my notes and the dmesg logs I took.

I am not seeking a review of any of this; I intend it to be informational and to potentially help anyone else searching for the issues I encountered. But if anyone does have relevant knowledge, I’d love to discuss!

Kernel command-line tweak references:

dmesg logs:

  • 2024-09-20_tos_dmesg.txt (92.7 KB)
    • TerraMaster OS logs. It is Linux-based and had no issues booting and communicating with all drives.
  • 2024-09-20_truenas_setup_dmesg.txt (77.7 KB)
    • TrueNAS SCALE installation USB logs. This is with noapic, pcie_aspm=off, and all BIOS tweaks. I wish I had taken more log dumps; this is the only one I have, despite my note-to-self that I had captured a log with all tweaks active.
2 Likes

Thank you for all of the detailed information. I actually just read this review of the TerraMaster F8 this morning with great interest thinking that maybe it is possible to run TrueNAS SCALE on it.

The reviewer mentions running TrueNAS without detail. I’ve asked for it in the comments and linked back to this post.

Disappointing about SCALE. I wouldn’t run CORE.

2 Likes

I’m glad my post was able to advise!

I too am interested in knowing the details from that review. I would be glad to learn of a solution that allows SCALE to run on this hardware, although I am equally expecting that review to have no source or solution. Will you update us if you get a reply from the reviewer?

Lastly, I agree with your decision not to run CORE. It is working for me, but only just. As an example, I was disappointed to learn that Plugins are now defunct yet still feature prominently in the documentation and the web interface. That is only one of several annoyances that have shown me CORE is not the ideal choice.

I’m a bit concerned about the slow-performance issues people have been posting about for a while with TrueNAS SCALE, whereas the review of the F8’s performance that I watched showed no problem saturating a 10 GbE link.

I would much rather run ZFS and TrueNAS SCALE, but TOS 6 seems decent. I don’t like it as much as TrueNAS, and I really prefer ZFS (having used it when I worked at Sun), but having to fall back to TOS 6 might be acceptable.

I’m not ready to jump yet; I’m going to wait and see what else people discover with it. I was considering the TrueNAS Mini X+ until I saw the 45HomeLab HL8 (not out yet). But in the back of my mind I kept thinking that I should be buying flash instead of HDDs. I worked at Pure Storage for several years; Pure makes flash-only arrays for the enterprise. Too expensive and power-hungry for home, but it’s really hard to consider going back to HDDs. :-/ This is the first consumer all-flash system that I really like with low power consumption.

1 Like

I was able to get Cloud Sync tasks operational by swapping the substitute network adapter. I was previously using one with a Realtek RTL8153 (TP-Link UE300); I’ve now swapped to one with an ASIX AX88179 chipset (Plugable USB3-E1000). The ASIX-based adapter has been working, although unfortunately below link speed. For now, this is fine with me.

I tested TOS and it did seem capable! I would have accepted using it had I not been able to get TrueNAS working. However, I did not like how it split up the drives (part of one of the storage-array disks is also taken up by the OS), and I doubt it would be easy to import the array into another OS should the need arise.

If the ASUSTOR FLASHTOR Gen 2 had released before the TerraMaster F8 SSD, I think I would have chosen the Flashtor. It seems like another great choice for all-flash storage.

Thank you very much for posting your experience with the F8 SSD Plus and the TrueNAS versions. I was about to pull the trigger on the F8 and go the TrueNAS route, but this does not look good.

I wonder if all is lost, or if TrueNAS has a history of expanding hardware support. I had become so dialed in on TrueNAS CORE after some testing over the last few weeks that I don’t want to let it go. And as far as I can see, there is currently nothing on the market like the TerraMaster F8 SSD Plus. The ASUSTOR one is not out yet, and until it is released and someone tries it, it is unclear whether TrueNAS SCALE will work with it.

Just when I thought I had found my combo, I realize I did not. Thank you very much for trying and sharing your experience with the community; I hope you were able to send it back or have good use for it. That must have been quite an investment.

2 Likes

I’m happy to make a sacrifice for the community :slight_smile: . It was a large investment but I knew the risks. Someone has to be the first! I will be keeping my unit with TrueNAS CORE running on it, as it meets my needs even with the caveats.

1 Like

Appreciate the post. I just ordered the F8 SSD Plus after watching two YouTube reviews claiming that a custom OS like TrueNAS can be installed.
I left comments asking for the basis of those claims. I may need to return the unit if there is no supporting evidence.

1 Like

I’d be happy to read it if someone responds. I also had a brief look over at the OpenMediaVault forums but could not yet find anyone who has tried an installation on the F8. Maybe the people claiming it was done spoke too soon. I also thought it should be no problem, but as we see now, you never know until someone actually tries.

Hey folks, the author of this video replied to my comment asking me to link this thread. Somehow I cannot respond: YouTube seemingly accepts my comment, but it does not show up. Can others respond there, please? You may have better luck than me.

The YouTuber can probably see your comment and might need to approve it if it has a link in it.

1 Like

An interesting review by STH:

I would not expect hardware support to expand on CORE, which is working. As for SCALE, which is not working, I’m not confident that the AQC113 will get better support, and the suspected power-management issues in the PCIe subsystem could only be fixed by chance improvements from upstream.
It would be interesting to test OMV to see whether the same issues arise.

1 Like

Looks like they selected the same heatsink I did.

I can understand why they don’t include the metal clips.

Alder Lake graphics. Neat. Should support SR-IOV for VM sharing eventually.

Wish it supported ECC. I thought i3s did.

The next-gen ASUSTOR Flashstor 12-bay supports ECC, but not an iGPU. Sigh.

Coffee Lake i3 CPUs (and earlier) support ECC.
For Alder Lake and later, ECC is supported on i5 and above, not i3.

Intel is especially qualified to confuse its customers.

1 Like


FWIW, I tried the latest RC and observed the same issues.

So the strong evidence is that TrueNAS SCALE and the F8 SSD Plus hardware are incompatible.

Update: for reference, TOS reports “uname -a” as
Linux TNAS 6.1.58+ #208 SMP PREEMPT_DYNAMIC Sun Sep 29 09:49:15 CST 2024 x86_64 x86_64 x86_64 GNU/Linux

I wish I found this thread before ordering my F8 Plus.
But I didn’t, so here we are: after replicating the same issue everyone else is having with TrueNAS, I decided to try some more debugging.

TL;DR - This works as a 5-bay NVMe NAS but not as an 8-bay NVMe NAS

More detailed explanation:

The four ports on the “RAM” side of the board (NVMe_1, NVMe_2, NVMe_3, NVMe_4) are all connected to an ASM2806 PCIe switch.

The four ports on the “CPU” side of the board (NVMe_5, NVMe_6, NVMe_7, NVMe_8) are connected directly to the N305’s PCIe controller.

For some reason the combination of Linux and ZFS has an issue with multiple drives on the ASM2806. One drive connected to the switch works just fine, but two or more and you get PCIe errors and all sorts of other trouble.

I had no issues installing TrueNAS and creating pools as long as I only had one drive in slots NVMe_1 through NVMe_4 and left the other three empty. It actually didn’t matter which slot I used; as long as the other three were empty, things worked fine.
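One way to confirm from software which controllers sit behind the switch is sysfs path depth: drives behind the ASM2806 have extra bridge hops in their PCIe path. The paths below are hypothetical stand-ins, not captured from the F8; on a real box you would generate them with the readlink loop shown in the comment:

```shell
# Classify NVMe controllers as switch-attached vs direct-attached by the
# number of components in their sysfs PCIe path. On real hardware, feed
# the awk with:
#   for c in /sys/class/nvme/nvme*; do
#     echo "$(basename "$c") $(readlink -f "$c/device")"
#   done
cat <<'EOF' | awk '{n=split($2,p,"/"); print $1, (n>6 ? "behind-switch" : "direct")}'
nvme0 /sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/0000:02:04.0/0000:05:00.0
nvme1 /sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/0000:02:05.0/0000:06:00.0
nvme4 /sys/devices/pci0000:00/0000:00:1d.0/0000:07:00.0
EOF
```

With these sample paths, the first two controllers are flagged behind-switch (they traverse the extra upstream/downstream switch ports) and the third is flagged direct.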

I am using the default BIOS settings and have not added any kernel boot options/flags. I do get the occasional nvme pcie bus error severity=corrected message popping up in the log, but it’s not continuous spam; more like once every 2-3 minutes.

I’m going to play around a bit more with the hardware and research the status of ASM2806 support in Linux. The fact that it works under TrueNAS CORE but not under SCALE means the hardware should be fine, and it’s more of a kernel/OS/configuration issue.

FWIW, the AQC113C ethernet is recognized just fine with TrueNAS SCALE 24.04 / Linux 6.6, though I’ve only tested it working at 1GbE; I don’t have any faster switches to test with.

4 Likes

Great job narrowing it down to the PCIe switch!
As if drip-feeding 8 M.2 drives and a NIC on a 9-lane SoC wasn’t bad enough, the asymmetrical design, with four drives on direct x1 links and four others sharing an x2 (?) uplink, is bound to introduce oddities.

Fair. For a system like this, my motivation is small, quiet, and low-power. These days, with QLC NVMe drives having sustained write speeds comparable to spinning disks, even four drives sharing two PCIe 3.0 lanes isn’t really a bottleneck.

I tinkered around a bit more and the mystery deepens.
In my latest test, I put two blank drives onto the switch (NVMe_1 and NVMe_2) and left the other three direct-connected.

Immediately during boot I started getting the PCIe bus error spam; adding the “pcie_aspm=off” kernel parameter made that go away.

Then I just created a vanilla ext4 filesystem on each drive, mounted them both, and tried some basic operations (e.g., copying a couple of GB of files to each). It just worked; I could even copy between the two without issue.

Pushing my luck a bit further, I wiped the individual filesystems and instead set up an mdadm RAID1 mirror across the drives. That also worked just fine and got through the resync (only 200GB; my NVMe drives are rather small).

So now I’m suspecting there is something very specific with how ZFS interacts with drives behind the ASM2806 PCIe switch. Which is odd because at some level block devices are block devices.

Will keep experimenting with various combinations of block devices and ZFS - perhaps putting some loop devices into the mix and seeing if ZFS works on those.

Very much out of my depth here, but at the very least maybe we can push towards a high quality bug report

2 Likes

Oh, one more thing: I tried OpenMediaVault as well as vanilla Ubuntu Server LTS 24.04, and all of them exhibit the exact same behavior. ZFS barfs when more than one NVMe device is behind the PCIe switch but works just fine in the 5-drive configuration.

Also, multiple drives behind the PCIe switch work just fine when not using ZFS.

This is very much a Linux kernel issue, and I don’t think there’s anything special in TOS either; TOS just isn’t impacted because it uses LVM/mdraid and friends, not ZFS.

One thing I can’t quite reconcile my findings with is this blog post: How I installed TrueNAS on my new ASUSTOR NAS | Jeff Geerling

The Flashstor 12 Pro also has an ASM2806, and apparently Jeff was able to get TrueNAS working just fine. Of course, it’s unclear how many NVMe drives he installed and in which ports, so it’s possible he simply avoided this issue by chance.