Using Beelink ME Mini with 6 NVMe drives, only 4 are usable in TrueNAS Scale

Here are my 2 cents on the issue…

Running TrueNAS Scale 25.04.2.4 with 6× Samsung 990 EVO Plus 4TB in RAIDZ1 in a Beelink ME Mini.

I am seeing drives go offline during scrubs, and later the whole device goes offline mid-scrub. The scrub (8 TiB of data) did not finish.

What I have tried and didn’t fix the issue:

  • Adding `midclt call system.advanced.update '{"kernel_extra_options": "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off"}'`
  • Upgrading to BIOS 307 and disabling the Wi-Fi card and Bluetooth (trying to reduce power consumption)
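For anyone else applying the kernel-options workaround: it only takes effect after a reboot, so it's worth verifying it actually landed on the running kernel's command line. A minimal sketch (the option string matches the one from the midclt call; the echoed messages are just illustrative):

```shell
# confirm the NVMe power-saving workaround reached the running kernel
if grep -q 'nvme_core.default_ps_max_latency_us=0' /proc/cmdline; then
  echo "kernel option applied"
else
  echo "kernel option NOT applied (reboot after the midclt call, or re-check system.advanced)"
fi
```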

What I tried that did fix the issue (i.e. finally finish scrub):

  • Opening the case and leaving the internals (PSU, NVMe drives, fan, heatsink) exposed

Power draw is around 30 W during scrub and the PSU is too hot to touch. Scrubs used to finish, but ambient temperature has risen recently where I live. I suspect the PSU is degrading, as I can hear a buzzing noise during peak power draw.

In my case, I believe the design, with a hot PSU inside an enclosed plastic casing, raises the temperature to the point that it triggers the PSU's high-temperature protection (shutting down) and causes noise/ripple on the voltage rails (drives going offline).

I am planning to rig the device to use an external 12 V PSU and keep that heat source out of the case.

Have you tried setting the NVMe power mode for all SSDs to 2?

For my SSDs, states 0-2 have the same max power, so that wouldn't have an effect.
Given my power meter reads only ~30 W for the whole system at peak, the power limit isn't the main issue; temperature is.
It's reasonable to believe that since the 990 EVO Plus is capable of PCIe Gen4 x4 / Gen5 x2, at Gen3 x1 it should consume less power.

ps 0 : mp:6.00W operational enlat:0 exlat:0 rrt:0 rrl:0
rwt:0 rwl:0 idle_power:- active_power:-
active_power_workload:-
ps 1 : mp:6.00W operational enlat:1700 exlat:2100 rrt:1 rrl:1
rwt:1 rwl:1 idle_power:- active_power:-
active_power_workload:-
ps 2 : mp:6.00W operational enlat:3000 exlat:6500 rrt:2 rrl:2
rwt:2 rwl:2 idle_power:- active_power:-
active_power_workload:-
ps 3 : mp:0.0800W non-operational enlat:1600 exlat:4600 rrt:3 rrl:3
rwt:3 rwl:3 idle_power:- active_power:-
active_power_workload:-
ps 4 : mp:0.0070W non-operational enlat:1600 exlat:43000 rrt:4 rrl:4
rwt:4 rwl:4 idle_power:- active_power:-
active_power_workload:-

OK. Mine use less than a third of their state 0 power when in state 2.


I re-ran the scrub task with the case closed and the system shut itself down again. CPU temp went to 87 °C, and when I quickly opened the case the heatsink still measured ~60 °C.
With the case removed, the PSU measures 49 °C while drawing only ~32 W.

Let me know if any of you are seeing these temperatures, or if your system runs much cooler with 6× NVMe (i.e. whether my PSU is degrading/degraded).
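For comparing numbers, `nvme smart-log` reports a composite temperature per drive. A sketch that extracts that line (the here-doc approximates smart-log output; on the box itself, loop over /dev/nvme0 through /dev/nvme5 as root and pipe each `nvme smart-log` into the same awk):

```shell
# pull the composite temperature out of `nvme smart-log` output
awk -F: '/^temperature/{gsub(/^[ \t]+/, "", $2); print $2; exit}' <<'EOF'
critical_warning                    : 0
temperature                         : 49 C (322 Kelvin)
available_spare                     : 100%
EOF
```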

Try setting the fan mode in the BIOS to “Full On” so it runs at 100% all the time. Then see if it remains stable with the lid on. You might just need to tune the fan settings later if that’s the case.

I tried adjusting the fan speed trigger point per @adaciuk’s comment before putting the case back on. I’ve also adjusted PL1 and PL2 down to 10 W.

The high temp on the PSU doesn’t sit right with me…

Received my Beelink ME Mini less than a week ago.

I know this forum is mainly for TrueNAS users, but since this place has the biggest discussion thread about this device, I figured it’s the best place to post.


Hardware setup

  • Beelink ME Mini
  • 6 × Samsung 990 Pro 2TB (configured as RAIDZ)
  • Proxmox installed on internal eMMC (I know this isn’t recommended, but I don’t think it’s the root of the problem)

Issue

The system consistently loses the ZFS pool under semi-heavy load (e.g., ~300 MB/s copy from an external SSD to the pool).
I can reproduce this every time.

When the issue occurs, several NVMe drives drop out one by one, and the pool is suspended.


What I’ve tried so far, but none of the following helped:

  • Kernel parameters:
    nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off
  • Disabled Suspend and Hibernate options for PCI devices in BIOS (can’t remember the exact parameter names right now)
  • Disabled Fast Boot (this can probably help the system boot correctly with all 6 SSDs, but it still eventually loses drives or even reboots)
  • Forced fans to 100% speed

Observations:

  • Highest recorded SSD temperature during copy: 65 °C
  • CPU average load per core: ~70%

Log when the pool dies:

nvme nvme2: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
nvme nvme2: Does your device have a faulty power saving mode enabled?
nvme nvme2: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
nvme 0000:05:00.0: Unable to change power state from D3cold to D0, device inaccessible
nvme nvme2: Disabling device after reset failure: -19
.....
same for nvme5, then nvme1, and so on...
.....
WARNING: Pool 'masterpool' has encountered an uncorrectable I/O failure and has been suspended.
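For what it's worth, the failure signature is easy to watch for while reproducing the issue. A sketch that filters for it (feed it `dmesg --follow` on a live system; here it just replays the log lines above):

```shell
# filter kernel messages for the NVMe drop-out signature seen in this thread
grep -E 'controller is down|Unable to change power state|Disabling device' <<'EOF'
nvme nvme2: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
nvme nvme2: Does your device have a faulty power saving mode enabled?
nvme 0000:05:00.0: Unable to change power state from D3cold to D0, device inaccessible
nvme nvme2: Disabling device after reset failure: -19
EOF
```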

Based on [foxl]’s comment (can’t include links, sorry), the issue may not be the PSU itself but the 3.3 V rail.

I can’t confirm this yet, but I did notice odd 3.3 V readings in the BIOS. I can’t include images either, but the 3.3 V line reads +4.171 V.


What options do we realistically have here?
If this is software-based, maybe a BIOS update or firmware patch could eventually fix it.
But if it’s hardware-based (and not something simple like an underpowered internal PSU), then I honestly don’t know what the path forward is.

But I really don’t want to give up after investing over €900 in SSDs :confused:


Your only options are buying Lexar SSDs to replace your Samsungs or choosing another computer besides the ME Mini. That’s the way I see it, anyway. Supposedly the ME Pro is coming out soon: Reddit - The heart of the internet


Hi everyone,

I also bought the Beelink ME Mini about a week ago.
When I tried installing TrueNAS Scale 25.04.2.4, I ran into an issue (BusyBox, device-mapper, failure to communicate with the kernel device-mapper driver…). While looking for a solution, I found this forum thread.
After reading it, I decided to update my BIOS to the latest version (it was version M1V301).

Right now, my setup is:

  • TrueNAS 25.04.2.4 (installed on KINGSTON SA400M8240G)
  • Mirror pool: Samsung SSD 990 PRO 2TB + Samsung SSD 980 PRO 2TB

During a pool scrub, the temperature goes up to around 50 °C, which I think is normal. This week everything works fine; the pool runs normally without any issues.
But it’s definitely a problem when using all 6 drives: it looks like the cooling is not sufficient.
And I think one of the best options is to 3D print and install a mod kit with an 80 mm fan on the bottom (like the “Beelink ME mini NAS Modkit” on the Printables web portal).

I have 6 NVMe drives in my Beelink ME Mini: the M1V305 BIOS, TrueNAS Scale CE 25.04.2.4, 2× Patriot M.2 P300 128GB (boot_pool, mirror) and 4× CT4000P310SSD8 (data_pool, RAIDZ1). Doing a scrub on data_pool, none of the disks gets over 45 °C. The CPU stays fairly cool as well, reaching 60 °C but not higher. It seems stable to me.


Update: Beelink ME minis manufactured after September 8, 2025 have a hardware fix.

Source: Reddit - The heart of the internet


Yes, if you contact support they will exchange your old unit for a new one (you have to send the old one in first).

Sadly, I really can’t send it in, as I don’t have any other machine with 6 NVMe slots to access my data.

Support said that they changed some inductors to fix the 3.3V rail

Even after the 3.3 V rail is fixed, can the 45 W PSU really support six DRAM-equipped SSDs, covering not just steady-state power but transient spikes too?

For comparison, the Asustor FS6706T has a 65W PSU for six SSDs, albeit with a CPU that uses 4W more.
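A rough back-of-envelope supports that worry. This assumes the 6.00 W ps0 ceiling from the `id-ctrl` output posted earlier in the thread and ~10 W for the CPU package; both are assumptions, not measurements:

```shell
# worst-case steady-state budget: six drives at their ps0 max plus CPU package power
drives=6
per_drive_w=6     # ps0 max power of a 990 EVO Plus, per the id-ctrl output above
cpu_w=10          # assumed CPU package power under load
total=$((drives * per_drive_w + cpu_w))
echo "worst-case: ${total} W against a 45 W PSU"
```

And that is before transient spikes, the fan, the eMMC, or conversion losses.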


I reckon it will be more than “a few inductors” to meaningfully increase the power capacity of the PSU. Based on their reddit post, they allegedly only tested with older, slower NVME sticks, which also happen to consume less power.

If Beelink wants this product to work with sticks that use more than 4W, they’ll likely have to up the (external?) PSU capacity and perhaps that of the blower also.

Like others, I stumbled upon this thread while conceptually selecting components. Other than a thank you, I wanted to highlight that Ian Morrison was able to get an SD card reader working in the Wi-Fi M.2 slot, per his article on Liliputing (sorry, I’m not allowed to post a link).

I am not smart enough to know whether this means other protocols may be supported in addition to CNVio2, since PluckMy confirmed a regular x1 PCIe lane is not present.

Finally, I am still debating whether to pick up a 12 GB RAM + 64 GB eMMC N150 model. I believe I can construct a functional TrueNAS Scale build with the following hardware while respecting the 45 W limit:

| Component | Watts | Running total (W) |
| --- | --- | --- |
| SSD: 6× 4TB WD SN7100 @ 4.5 W ea. | 27 | 27.0 |
| NIC: 2× I226-V @ 1.3 W ea. | 2.6 | 29.6 |
| eMMC | 1 | 30.6 |
| CPU* | 10* | 40.6 |
| RAM | 2 | 42.6 |
| Wi-Fi (unplugged) | 0 | 42.6 |

Thank you, very useful link; I will try the replacement.

I have five Crucial T500 4TB SSDs (model: CT4000T500SSD3) and one WD_BLACK 2TB SN850 NVMe (model: WDS200T1X0E-00AFY0, in slot 4).

The T500s disconnect randomly.

Reports like these would make me stay away from them for use in this particular system.


System Setup / Context

  • Model: Beelink ME Mini (12GB version)
  • BIOS: M1V305
  • OS: TrueNAS 25.04.2.4 (ZFS setup)

Test 1 Configuration

  • Drives:
    • 5 × Samsung 990 EVO Plus 4TB NVMe (ZFS pool, RAIDz1)
    • 1 × Crucial P310 500GB (OS; slot 4)
  • BIOS Settings:
    • Fast Boot → Disabled
    • WiFi → Disabled
    • ACPI Power Management → Disabled
  • Kernel Params:
    midclt call system.advanced.update '{"kernel_extra_options": "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off"}'
  • Result: Uptime ~2 days (idle). As soon as I started a heavy write workload → instant crash.
    Pool degraded, 2 drives disappeared.
    Kernel log shows NVMe controllers dropping with CSTS=0xffffffff and power state errors.

Test 2 Configuration

  • Drives:
    • 4 × Samsung 990 EVO Plus 4TB NVMe (ZFS pool, RAIDz1)
    • 1 × Crucial P310 500GB (OS; slot 4)
  • Same BIOS and kernel params as above.
  • Result: Crashed within ~10 minutes of starting a write-heavy workload or ZFS scrub.
    Again, 2 drives vanished mid-operation with the same kernel error logs.

Kernel Log Excerpts

[   95.272173] nvme nvme2: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
[   95.272266] nvme nvme2: Does your device have a faulty power saving mode enabled?
[   95.272309] nvme nvme2: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[   95.332187] nvme 0000:05:00.0: Unable to change power state from D3cold to D0, device inaccessible
[   95.332800] nvme nvme2: Disabling device after reset failure: -19

Suspected Root Cause

After digging around and testing different configurations, it looks like a hardware power issue — specifically the 3.3V rail sagging under load.
A few others (like reddit user justanother1username) have reported voltage regulator module issues with this exact model — the 3.3V rail can’t handle NVMe load spikes, especially when multiple drives are active.

Even though these NVMe drives (Samsung 990 EVO Plus) are DRAM-less and shouldn’t be heavy power consumers, the board’s voltage regulation is so poor that sustained writes or scrubs cause NVMe controllers to drop off the PCIe bus entirely.


Overall Impression

Absolutely disappointed with Beelink.
They clearly did zero proper validation for multi-NVMe configurations under real-world load.
This system cannot handle sustained writes, making it totally unreliable for NAS or ZFS use, the very thing it’s marketed for.

I wanted a compact all-NVMe NAS build, but instead got a board that can’t even keep the drives powered under stress.
At this point, I honestly want my money back.


Looks like someone also had issues with Lexar 4TB NM790.
/r/BeelinkOfficial/comments/1n81vcq/comment/ncm9v50/ (can’t paste the entire Reddit URL; please prepend)