Beelink ME mini drives disconnecting

I'm trying to use a Beelink ME mini as a NAS.
I installed TrueNAS SCALE 25.04.2.1.
The ME mini is connected to a 10 GbE switch.

1/ NVME_IOCTL_ADMIN_CMD: Input/output error.
First I tried it with 3 WD_BLACK 2TB SN850 NVMe drives (model: WDS200T1X0E-00AFY0).
Then I bought 3 Crucial T500 SSDs (model: CT4000T500SSD3).
Now I have 2 WD_BLACK (slots 2, 3) and 3 Crucial (slots 4, 5, 6) installed.
My pool is composed of 3 Crucial + 1 WD_BLACK.

I get this error with all my WD_BLACK M.2 drives:
Controller failed: NVME_IOCTL_ADMIN_CMD: Input/output error.
Device: /dev/nvme0n1, failed to read NVMe SMART/Health Information.
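
The same failure can be reproduced outside of SMART monitoring by querying the controller directly (this assumes nvme-cli is available, which it should be on TrueNAS SCALE; adjust the device name per slot):

# Read the health log straight from the controller; a failed controller
# returns the same Input/output error here
sudo nvme smart-log /dev/nvme0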

An SSD disconnects and the pool degrades; after a reboot I could repair the pool by replacing the failed drive with another one (resilver, scrub).
But then another drive disconnects the same way in a different slot some time later.

I also see this in sudo cat /var/log/messages:
Aug 13 01:27:06 mini kernel: nvme nvme1: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
Aug 13 01:27:06 mini kernel: nvme nvme1: Does your device have a faulty power saving mode enabled?
Aug 13 01:27:06 mini kernel: nvme nvme1: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
Aug 13 01:27:06 mini kernel: nvme nvme1: Disabling device after reset failure: -19
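
Before applying those options it may be worth checking what the kernel is currently doing with ASPM (this is generic Linux, nothing TrueNAS-specific):

# Active ASPM policy; [default] means the BIOS/firmware choice is kept
cat /sys/module/pcie_aspm/parameters/policy

# Negotiated ASPM state of each PCIe link
sudo lspci -vv | grep ASPM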

2/ Tried a 5 GbE Wavlink USB Ethernet adapter
It disconnected after a few minutes/hours of testing.

3/ Tried a 10 GbE Ethernet card via an M.2 adapter in slot 1
It disconnected after a few tests with iperf3.
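
For reference, a typical iperf3 throughput test looks like this (192.168.1.50 stands in for the ME mini's IP):

# On the ME mini (server side)
iperf3 -s

# On another machine: 30 seconds, 4 parallel streams
iperf3 -c 192.168.1.50 -t 30 -P 4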

To test the disks I disconnected the 5/10 GbE Ethernet adapters, so the problem appears with nothing connected other than the NVMe drives themselves.
The CPU and disks quickly reach 60-80 °C when there is activity.
I opened the ME mini to check whether the fan is working; it spins, but does not seem to do much.
I cannot hear it at all.
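
To watch how fast the drives heat up under load, a small loop over the controllers does the job (assuming nvme-cli; the glob picks up whatever slots are populated):

# Print each NVMe controller's composite temperature every 5 seconds
while true; do
  for d in /dev/nvme[0-9]; do
    echo -n "$d: "; sudo nvme smart-log "$d" | grep -i '^temperature'
  done
  sleep 5
done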

It looks like my WD_BLACK drives get removed every time, a few minutes after a stop / wait 5 min / restart cycle.
Perhaps they draw too much power and do not dissipate heat properly?
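
If power draw is the problem, each drive advertises the maximum power (mp) of its power states, which could be compared between the WD_BLACK and Crucial models (nvme-cli again, device name as appropriate):

# List the drive's power states; 'mp' is the max draw in each state
sudo nvme id-ctrl /dev/nvme0 -H | grep '^ps'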

Perhaps the system is overheating quickly when there is activity?
Is there a BIOS setting or something to boost the fan?

Help please 🙂

Note:
Also posted in the Beelink forum, here:

Leading theory is that the device is literally underpowered.

Hi, I’ve had similar issues with five Samsung 970 Evo Plus 1 TB drives. I’m not sure if it’s due to power consumption, heat, or a combination of both.

The drives you’re using have DRAM cache, like my 970 Evo Plus SSDs, and likely consume more power and generate more heat. The power supply is only 48 W.

Beelink seems to have teamed up with Crucial and is pushing the P3 series, which uses less power, has no DRAM chips, and instead utilises host memory.

I specifically wanted DRAM-based drives because the Beelink only has 12 GB of RAM, and ZFS will use as much of it as possible.
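
(You can see what the ARC is actually using, and its ceiling, from the ZFS kstats; both values are in bytes:)

# Current ARC size and its configured maximum
grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats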

There’s a fan setting in the BIOS, under (I think) Hardware Monitoring. It’s set to Smart Control by default. I changed mine to Full On and it made a big difference, though it's noisier.

In the end, I settled on the following setup:

Bay 4 (System TN Scale 25.04.2.1): 1 × Kingston OM8PGP41024Q-A0 (1 TB)

RAIDZ1: 3 × Samsung 970 EVO Plus (1 TB each)

Stripe: 2 × Timetec MS12 (2 TB each)

I’ve also installed flat copper heatsinks on the outer surface of the drives. They fit without issue.


The Timetecs have their DRAM chips on the underside, so I’ve used these graphene heatsinks in the interim. I will get thicker thermal pads to fill the gap and then use the copper ones like the others.

The following temps are not under load…

[screenshots: CPU temps; Kingston system drive temp; Samsung 970 Evo Plus temps (all 3 about the same); Timetec temps (both the same)]

I will be transferring some large files over and will see what the CPU and drive temps max out at. Will post those later.

The system is running stable so far, with an uptime of 14 hours 25 minutes as of 14:08.


Thank you, I found this command in the other thread and executed it yesterday:

midclt call system.advanced.update '{"kernel_extra_options": "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off"}'

It has not crashed yet; I will put it under load and see how it goes today.
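
One way to confirm the options actually reached the kernel after the reboot:

# All three options should show up here once the setting is applied
cat /proc/cmdline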

Note:
Looks stable from yesterday evening to today at 10:29 - I am marking this as the solution, and will unmark it if I get a crash later.
BUT: this command line is a temporary fix; it will probably crash again after the next TrueNAS update, so I will have to rerun the command after each TrueNAS update until it is fixed in TrueNAS/Linux.
So this should probably be escalated to the TrueNAS team to get fixed; I don't know how to do that.