Using Beelink ME Mini with 6 NVMe drives, only 4 are usable in TrueNAS Scale

Hello,
I wanted to build a small travel-friendly system using the new Beelink ME Mini. For drives, I got five Seagate FireCuda 530R (meant for a RAIDZ1 storage pool) and one WD Red SN700 (OS installation disk).
Even though TrueNAS Scale version 25.04.1 recognizes all drives in the Disk overview, I can only create a storage pool with three of the Seagate drives instead of five. Running “nvme list” shows only four drives (including the OS drive). I have already searched and found people with similar problems on other hardware/devices; one recommendation was to disable VT-d, but that didn’t fix anything.
One additional step I already took: I booted the Windows installation included on the 64 GB eMMC storage, and it detects all six drives, with all of them accessible from within Windows.

Do you have any idea what I can do for troubleshooting?

What model? Do you have a link?

beelink-me-mini-n150 shows preorder and has a note in an image

You found the correct model!
Even though it shows preorder, mine arrived 20 days after ordering it.

My next troubleshooting step was updating the BIOS from version M1V303 to M1V305, but there was no change.

// Update:
// BE AWARE:
After some more testing, this method failed after ~4 hours and the pool failed with it; a reboot fixed it. Better to wait for a TrueNAS release with a newer kernel.

Found the solution (thanks to AI). Beelink uses the ASMedia ASM2824 controller. The current Linux kernel in TrueNAS has an upstream bug in the PCIe driver that prevents the first (or occasionally the fifth or sixth) downstream port on that switch from completing link training, so any NVMe devices on those lanes never appear to the OS.
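If you want to confirm it is the switch rather than the drives, a quick check from the shell is to look for the ASMedia device and count how many NVMe controllers actually enumerated. This is a hedged sketch: 1b21 is ASMedia's PCI vendor ID, but verify the device against your own "lspci -nn" output, and the output will differ per machine.

```shell
# Look for the ASMedia switch and count enumerated NVMe controllers.
lspci -nn | grep -i 'asmedia'              # should show the ASM2824 ports
lspci -nn | grep -ci 'non-volatile memory' # number of NVMe drives the OS sees
# Negotiated link state of the ASMedia ports (needs root for full detail):
lspci -vv -d 1b21: 2>/dev/null | grep -E 'LnkSta:'
```

If the NVMe count is lower than the number of installed drives while Windows sees all of them, the missing devices most likely never completed link training behind the switch.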
The AI suggested editing GRUB and appending this to the end of the “linux” line: nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off. Trying this fixed my problems.
I then applied it permanently from the shell with this command:
midclt call system.advanced.update '{"kernel_extra_options": "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off"}'
After that, I checked that it was successfully added with: midclt call system.advanced.config | jq .kernel_extra_options

PS: Rolling back this fix should be possible using midclt call system.advanced.update '{"kernel_extra_options": ""}'
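For reference, here is the apply / verify / rollback sequence from this post collected in one place. It only runs on a TrueNAS SCALE host where the midclt middleware client exists, so treat it as a config fragment rather than something to run on an ordinary Linux box.

```shell
# Apply the PCIe power-management workaround persistently (TrueNAS SCALE only).
midclt call system.advanced.update \
    '{"kernel_extra_options": "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off"}'

# Verify the options were stored:
midclt call system.advanced.config | jq .kernel_extra_options

# Roll back later by clearing the options again:
midclt call system.advanced.update '{"kernel_extra_options": ""}'
```

A reboot is needed after each update call for the kernel command line to actually change.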

1 Like

I don’t know if your method will survive system updates. I am hoping others will comment.

The method sadly wasn’t a permanent fix. The pool still failed a while later.

Hello, new poster here. Signed up to respond in this thread.

Another discussion thread on Reddit, also using TrueNAS, ran into issues until a poster pointed out that the built-in power supply has a rated output of 45 W. Putting six SSDs plus the motherboard on a 45 W PSU without thinking through the power budget can cause plenty of instability that has nothing to do with TrueNAS.

The OP on Reddit replaced his SSDs of choice with low-power Lexar SSDs and got everything working OK.

What SSDs are you using and are you near or at the PSU 45W limit?

1 Like

OP is using the Seagate FireCuda 530R, which according to its datasheet consumes between 8 and 9 W (8.3 to 8.9 W, to be precise) on average during activity. Five 530Rs at ~8 W each is already 40 W against the rated 45 W output, and then there’s the WD Red SN700 and everything else in the system. And those are “average” values; IIRC the M.2 socket is capable of delivering more.

Can you link to the bug/kernel discussion in question here? I see the discussion on the mailing list from 2023/2024: LKML: "Maciej W. Rozycki": [PATCH v9 00/14] pci: Work around ASMedia ASM2824 PCIe link training failures

The product datasheet for the WD Red SN700 lists peak power draw at 2.8 A; Watts = Amps x Volts, and on the M.2 slot’s 3.3 V rail that is roughly 9.2 W. While that’s “peak”, add in the mobo / Wi-Fi / fan / reserve for spikes and inefficiency, and I wonder if OP is at or above the PSU’s watt limit.
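The back-of-envelope budget being argued over here can be written out as a small script. The drive figures are the ones quoted in this thread; OTHER_W (board, N150 CPU, fan, Wi-Fi) is a rough placeholder assumption, not a measured value.

```shell
# Rough power budget for the ME Mini's 45 W internal PSU.
SEAGATE_530R_W=8.5   # middle of the 8.3-8.9 W average-active range (datasheet)
SN700_W=9.2          # 2.8 A peak on the M.2 slot's 3.3 V rail
OTHER_W=12           # assumption: board + CPU + fan + Wi-Fi
PSU_W=45

TOTAL=$(awk -v a="$SEAGATE_530R_W" -v b="$SN700_W" -v o="$OTHER_W" \
    'BEGIN { printf "%.1f", 5 * a + b + o }')
echo "estimated draw: ${TOTAL} W of ${PSU_W} W rated"
if awk -v t="$TOTAL" -v p="$PSU_W" 'BEGIN { exit !(t > p) }'; then
    echo "over budget"
else
    echo "within budget"
fi
```

With these inputs it prints "estimated draw: 63.7 W of 45 W rated" and "over budget", which is the point the posters above are making: averages alone put OP's configuration past the rating before peaks are even considered.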

45W seems undersized for something that can house 6 SSDs. Looks like a design flaw to me, especially since there is no QVL for SSDs.

3 Likes

I have 5 x Samsung 990 Pro and 1 Samsung 960 Pro.
Drives keep disconnecting at random or fail to show up on reboot.

I removed the 45 W internal PSU and hooked up a 90 W external power brick. Still the same issue on reboot.

Scrubbing the pool drew at most 1.5 A @ 20 V, so 30 watts.

The root issue seems to be an anemic 3.3 V power rail.

2 Likes

@neilsf @HoneyBadger @nereith @foxl the PSU is definitely not the problem. I bought a TerraMaster F8 SSD Plus NAS and got the same issue. However, with VT-d disabled it currently seems to crash less often than the Beelink device. I did have an issue with VT-d on first boot of TrueNAS Scale on the TerraMaster; after disabling it, I haven’t had any crashes (yet).

Because of this I went back to testing more OSes on both machines with VT-d turned on, intending to turn it off after a crash. A JetKVM was connected via USB/HDMI so I could check on the machines while away from home; note that it is a tiny additional power-drawing component.

  • Windows 11 24H2 worked perfectly fine for a total of 6 hours on the Beelink and 3 hours on the TerraMaster. I kept the CPU at 100% the whole time and the drives busy for a few hours (roughly 1 hour per device). VT-d on the whole time.

The other systems saw only idle time (note: Scale did crash on the Beelink even when idle).

  • TrueNAS Core 24.10.2.3 worked perfectly fine for 4 hours (both devices). Side note: the TerraMaster didn’t have an Ethernet connection.
  • OpenMediaVault 7.4.17 (6 hours, Beelink only): no problems, VT-d on the whole time.
1 Like

Either CORE or 24.10.2.3, but definitely not both. So which one?

1 Like

Sorry. I meant TrueNAS CORE 13.0-U6.8. I mixed up the version numbers.

1 Like

I ordered one unit and also 6 Crucial P3 (PCIe 3, 3.5 W peak) 1 TB drives. I intend to try CORE and Zvault on it, possibly also the emerging FreeBSD-based bhyve manager.

1 Like

Thanks for the additional information. Were any additional kernel parameters used in the testing to adjust PCIe power management, or was everything stock? And was VT-d enabled or disabled with TrueNAS CORE on these machines?

Re: the drive visibility, can you provide a link to the bug w.r.t. the ASM2824, or is it the same one I linked to in post #7 above (Using Beelink ME Mini with 6 NVME drives, only 4 are useable in TrueNAS Scale - #7 by HoneyBadger)?

Tried:
TrueNAS SCALE 25.04.1
OpenMediaVault 7.4.17 (amd64)
Unraid OS 7.1.4
ASPM Enabled/Disabled
VT-d Enabled/Disabled

Same issue across all setups: after about an hour, the RAID degrades due to one drive disconnecting.

Also attempted: running TrueNAS inside Proxmox using PCIe passthrough. The VM fails to start when an NVMe drive is assigned. dmesg output:

nvme 0000:XY:00.0: Unable to change power state from D3cold to D0, device inaccessible

The DSTECH DS8570 chip on the mainboard can only handle 3 A. Give me some time to inject more power; I will let you know if that’s the fix.

1 Like

Update: I’ve added an LM1084IT-3.3 regulator and a POSC 470/4D-10 to the 3.3 V power rail, reinstalled everything, and mounted my pool.

I was able to recover my files, but the RAID degraded shortly after (one drive went missing).

Since I can’t return my DRAM-equipped NVMe drives, I have to look for another hardware platform.

I’ve installed TrueNAS 25.04 (since upgraded to 25.04.2) directly on the 64 GB eMMC drive, with no Proxmox or PCIe passthrough shenanigans (tried and failed), and am using all 6x NVMe slots for a RAIDZ2 pool. All seemed well at first.

But now drives nvme1n1 and nvme2n1 show as REMOVED. I swapped these drives in slots 2 & 3 with the ones in slots 4 & 5 and the problem followed the slots and not the drives.

I realise this is not exactly the same issue as the OP stated, but thought I would just add my experience with the Beelink ME Mini.

What kind of M.2 drives are you using and what is their power consumption?

I also ordered an ME Mini and 6 Crucial P3, which are supposedly among the least power-hungry drives.

But then I definitely will be installing a FreeBSD based system, not Linux.

I have 4x Samsung MZVLB256HBHQ-000H1 (aka PM981a) 256 GB drives. They also have an HP part number, as they were pulled from HP all-in-ones. I also got/had a WD SN720 and 1x SK hynix BC501 HFM256GDJTNG-8310A.

They are just what I happened to have lying around. When funds allow, I will get 6x 2 TB or 4 TB drives.