Using Beelink ME Mini with 6 NVMe drives, only 4 are usable in TrueNAS Scale

Hope that works. It should.

On paper these things sound awesome. Wish they were reliable with Linux… maybe a later revision will be :wink:

Looks like the Wi-Fi card is an AX101NGW in M.2 2230 form factor, rather than mPCIe. The card uses a CNVio2 interface.

Do we know if that M.2 slot has PCIe lanes?

If the PCIe lanes are present, we can use an A+E-key → M-key adapter, or get a native A+E-key SSD like the Cervoz T425 or Advantech SQF-C3A 720.

2 Likes

Easy enough to find out if there is a usable PCIe lane, as the M.2 A+E to M adapter was only $7 … I know most desktops have usable PCIe lanes, but a lot of laptops with CNVi interfaces do not. I also noted that there is absolutely no room behind the existing wifi/bt card. I guess when the adapter shows up I will find out one way or the other.

– Update

Used an A+E to M adapter and a WD_BLACK 2230 SN770M 1TB NVMe, but no luck; the BIOS did not show/recognize it. As it only cost me $7 it was worth the try.

1 Like

Here are some pics and temp readings posted in another Beelink thread

Hi! I notice you have Samsung SSDs installed.

Which models are they, and did you have any issues? Are they all in RAIDZ1?

I used to see that “pounding” of log files in an older version of Core, until I moved the system dataset to the boot SSD and redirected all that thumping to a silent SSD. I think my drives were happier this way.

The Samsung 970 Evo Plus uses a bit more power than the Crucial P3 models. You can easily ask the system itself:

freenas# nvmecontrol power -l nvme1

Power States Supported: 5

 #   Max pwr  Enter Lat  Exit Lat RT RL WT WL Idle Pwr  Act Pwr Workloadd
--  --------  --------- --------- -- -- -- -- -------- -------- --
 0:  7.8000W    0.000ms   0.000ms  0  0  0  0  0.0000W  0.0000W 0
 1:  6.0000W    0.000ms   0.000ms  1  1  1  1  0.0000W  0.0000W 0
 2:  3.4000W    0.000ms   0.000ms  2  2  2  2  0.0000W  0.0000W 0
 3:  0.0700W*   0.210ms   1.200ms  3  3  3  3  0.0000W  0.0000W 0
 4:  0.0100W*   2.000ms   8.000ms  4  4  4  4  0.0000W  0.0000W 0

Can someone who “knows Linux” add the command for TrueNAS CE, please? :wink:

So that’s 7.8 vs. 5.5 W maximum. 7.8 x 6 is 46.8 W - that might pose a problem for a 45 W power supply, even ignoring all other components. The Crucial P3 pull 33 W maximum in total, so there is some margin for the rest of the server.

HTH,
Patrick

3 Likes

Exactly!! I’m very new to both Linux and TrueNAS. I’d like to know what my 3 x 970 Evo Plus and 2 x Timetec drives are sucking out of the PSU.

I believe you’re right. Looks to me like it’s a combination of power consumption and heat. However, my system has been running more stably since increasing fan speed and improving cooling. I will copy over about 500 GB of files and monitor CPU and drive temps.

It is becoming increasingly obvious the Beelink is better suited to SSDs with lower power consumption.

EDIT:

Yeah! So I did :laughing: after a lot of messing around.
I used PuTTY, tweaked the script to leave out states 1-4 for clarity, and here is the output :point_down:

Max Power State for 6 drives
Drive           Max pwr    Enter Lat  Exit Lat   RT  RL  WT  WL  Active
----------      -------    ---------  --------   --  --  --  --  ------
/dev/nvme0n1    7.54W      0ms        0ms        0   0   0   0   *
/dev/nvme1n1    7.54W      0ms        0ms        0   0   0   0   *
/dev/nvme2n1    5.00W      0ms        0ms        0   0   0   0   *
/dev/nvme3n1    7.54W      0ms        0ms        0   0   0   0   *
/dev/nvme4n1    9.00W      0ms        0ms        0   0   0   0   *
/dev/nvme5n1    9.00W      0ms        0ms        0   0   0   0   *
root@truenas[~]#
970 Evo Plus - 7.54W x 3 = 22.62W 
Kingston     - 5.00W x 1 =  5.00W
Timetec      - 9.00W x 2 = 18.00W 

TOTAL - 45.62W

:astonished: :point_up: That’s without taking other components into consideration.
You were pretty much spot on :+1:

1 Like

Your command fixed NVMe disconnects and 10 GbE NIC disconnects for me.

midclt call system.advanced.update '{"kernel_extra_options": "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off"}'

Now I have:
- 10 GbE NIC via M.2 extension cable in slot 1 (maxes at 6.9 Gbps with iperf3) as the default NIC via DHCP, plus 2 x 2.5 GbE with aliases (everything with MTU 9000)
- 2 x WD_BLACK 2TB SN850 NVMe (model: WDS200T1X0E-00AFY0) in slots 2 and 3
- 3 x Crucial T500 SSD (model: CT4000T500SSD3) in slots 4, 5 and 6

After executing the command and rebooting I did not have any M.2/ethernet disconnects for 24 hours.
I just have to remember to execute the command again after a TrueNAS update (TrueNAS SCALE version 25.04.2.1 installed on eMMC).
The WD_BLACKs and the 10 GbE NIC disconnected all the time; now fixed.
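Not from the thread itself, but a quick way to check whether the setting survived an update might be something like this (assuming `midclt` and `jq`, both of which ship with SCALE):

```shell
# Read back the configured extra kernel options from the TrueNAS middleware
midclt call system.advanced.config | jq -r '.kernel_extra_options'

# Confirm the running kernel actually booted with the NVMe latency cap
grep -o 'nvme_core.default_ps_max_latency_us=[0-9]*' /proc/cmdline
```

If the grep comes back empty after an update, the `midclt` command above needs re-running, followed by a reboot.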

Related threads in my case :

This thread is certainly very helpful for anyone who has already bought the ME Mini, or is considering buying it (I wouldn’t, knowing what we now do).
Seeing that power consumption seems to be one of the reasons for the drives to disconnect, owners should probably aim for drives with low power consumption. I found a post that tests quite a few drives and also tracks their average power consumption (the Crucial P3 is certainly in the low range). Here is the link for anyone looking to max out the ME Mini: SSD Benchmarks Hierarchy 2025: We've tested over 100 different SSDs over the past few years, and here's how they stack up. | Tom's Hardware

PS. It looks like we should avoid 4TB drives!

The next thing I am going to do is get mmc-utils running on the device, probably by installing it in a FreeBSD jail and then trying to use the installed binary from the host:

https://git.kernel.org/pub/scm/utils/mmc/mmc-utils.git/
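Purely as a sketch of where that could go once mmc-utils is built (the device name /dev/mmcblk0 is a guess for this box): the extended CSD register includes the vendor’s wear-out estimates.

```shell
# Read the eMMC extended CSD and pull out the life-time estimate fields
mmc extcsd read /dev/mmcblk0 | grep -i "life time"
```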

I already monitor the long-term health of all my SSDs - SATA and NVMe - with smartmontools. Let’s see if I can integrate the eMMC, too.

Kind regards,
Patrick

2 Likes

This sounds great, Patrick.

What did you use on Linux to get that data?

Got my Beelink ME Mini 3 days ago.

Installed TrueNAS Community Edition 25.04.2.1 on a WD Green SN350 1TB NVMe in slot 4.

Filled the rest with Samsung SSD 990 EVO Plus 2TB

Set up a pool with one RAIDZ1 vdev.

Running Jellyfin as an app with all my films (only 700 GB).

Been running fine so far.

According to this test:

the Samsungs draw 4.6 W maximum. So you are well inside the specs of the server.

2 Likes

This gives all the raw data for ONE drive :point_down:

nvme id-ctrl /dev/nvme0n1

This gives all the raw data for ALL drives :point_down:

for dev in /dev/nvme[0-9]n1; do
    echo "=== $dev ==="
    nvme id-ctrl "$dev"
done

Here is the tweaked script, formatting the output to show only the active line 0 :point_down:

echo "Max Power State for 6 drives"
echo "Drive       Max pwr    Enter Lat  Exit Lat   RT  RL  WT  WL  Active"
echo "----------  ---------  ----------  --------  --  --  --  --  ------"

for dev in /dev/nvme[0-9]n1; do
    nvme id-ctrl "$dev" | awk -v d="$dev" '
    /^ps[ \t]+0/ {
        match($0, /mp:([0-9.]+)W.*enlat:([0-9]+).*exlat:([0-9]+)/, a)
        printf "%-10s %-9s %-10s %-9s 0   0   0   0   *\n", d, a[1]"W", a[2]"ms", a[3]"ms"
    }'
done
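One caveat, added as an aside: the three-argument match() in that awk is a gawk extension (it evidently works on SCALE, so gawk must be installed), while POSIX awk would have to split the fields instead. A rough portable sketch against a sample `id-ctrl` line:

```shell
# Portable (POSIX awk) extraction of mp/enlat/exlat from a "ps 0" line
echo "ps    0 : mp:7.54W operational enlat:0 exlat:0" | awk '{
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^mp:/)    mp = substr($i, 4)
        if ($i ~ /^enlat:/) en = substr($i, 7)
        if ($i ~ /^exlat:/) ex = substr($i, 7)
    }
    printf "%s %sms %sms\n", mp, en, ex
}'
```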

EDIT: In case anyone is still interested:

I have got the script to the point of displaying all Power States (PS) for all drives, showing make and model, separated by an echo “============”.
This is the output (the real output is more colourful) :smiley: :point_down:

Power States for NVMe drive nvme0n1
Model                                PS   Max pwr   Enter Lat  Exit Lat   RT  RL  WT  WL Active
------------------------------------ ---- --------- ---------- --------  --- --- --- --- ------
Samsung SSD 970 EVO Plus 1TB         0    7.54W     0ms        0ms         0   0   0   0 *
                                     1    7.54W     0ms        200ms       0   0   0   0
                                     2    7.54W     0ms        1000ms      0   0   0   0
                                     3    0.0500W   2000ms     1200ms      0   0   0   0
                                     4    0.0050W   500ms      9500ms      0   0   0   0
===============================================================================================


Power States for NVMe drive nvme1n1
Model                                PS   Max pwr   Enter Lat  Exit Lat   RT  RL  WT  WL Active
------------------------------------ ---- --------- ---------- --------  --- --- --- --- ------
Samsung SSD 970 EVO Plus 1TB         0    7.54W     0ms        0ms         0   0   0   0 *
                                     1    7.54W     0ms        200ms       0   0   0   0
                                     2    7.54W     0ms        1000ms      0   0   0   0
                                     3    0.0500W   2000ms     1200ms      0   0   0   0
                                     4    0.0050W   500ms      9500ms      0   0   0   0
===============================================================================================


Power States for NVMe drive nvme2n1
Model                                PS   Max pwr   Enter Lat  Exit Lat   RT  RL  WT  WL Active
------------------------------------ ---- --------- ---------- --------  --- --- --- --- ------
KINGSTON OM8PGP41024Q-A0             0    5.00W     0ms        0ms         0   0   0   0 *
                                     1    2.40W     0ms        0ms         0   0   0   0
                                     2    1.90W     0ms        0ms         0   0   0   0
                                     3    0.0500W   3000ms     2000ms      0   0   0   0
                                     4    0.0020W   10000ms    40000ms     0   0   0   0
===============================================================================================


Power States for NVMe drive nvme3n1
Model                                PS   Max pwr   Enter Lat  Exit Lat   RT  RL  WT  WL Active
------------------------------------ ---- --------- ---------- --------  --- --- --- --- ------
Samsung SSD 970 EVO Plus 1TB         0    7.54W     0ms        0ms         0   0   0   0 *
                                     1    7.54W     0ms        200ms       0   0   0   0
                                     2    7.54W     0ms        1000ms      0   0   0   0
                                     3    0.0500W   2000ms     1200ms      0   0   0   0
                                     4    0.0050W   500ms      9500ms      0   0   0   0
===============================================================================================


Power States for NVMe drive nvme4n1
Model                                PS   Max pwr   Enter Lat  Exit Lat   RT  RL  WT  WL Active
------------------------------------ ---- --------- ---------- --------  --- --- --- --- ------
Timetec MS12                         0    9.00W     0ms        0ms         0   0   0   0 *
                                     1    4.60W     0ms        0ms         0   0   0   0
                                     2    3.80W     0ms        0ms         0   0   0   0
                                     3    0.0450W   2000ms     2000ms      0   0   0   0
                                     4    0.0040W   15000ms    15000ms     0   0   0   0
===============================================================================================


Power States for NVMe drive nvme5n1
Model                                PS   Max pwr   Enter Lat  Exit Lat   RT  RL  WT  WL Active
------------------------------------ ---- --------- ---------- --------  --- --- --- --- ------
Timetec MS12                         0    9.00W     0ms        0ms         0   0   0   0 *
                                     1    4.60W     0ms        0ms         0   0   0   0
                                     2    3.80W     0ms        0ms         0   0   0   0
                                     3    0.0450W   2000ms     2000ms      0   0   0   0
                                     4    0.0040W   15000ms    15000ms     0   0   0   0
===============================================================================================

root@truenas[~]#

Here is the reconfigured script :point_down:

for dev in /dev/nvme[0-9]n1; do
    model=$(nvme id-ctrl "$dev" | awk -F: '/mn/ {gsub(/^[ \t]+/, "", $2); print substr($2,1,36); exit}')

    cyan="\033[1;36m"
    red_bold="\033[1;31m"
    reset="\033[0m"

    drive_name=$(basename "$dev")
    echo -e "${cyan}Power States for NVMe drive ${red_bold}$drive_name${reset}"

    printf "%-36s %-4s %-9s %-10s %-9s %3s %3s %3s %3s %s\n" \
        "Model" "PS" "Max pwr" "Enter Lat" "Exit Lat" "RT" "RL" "WT" "WL" "Active"
    printf "%-36s %-4s %-9s %-10s %-9s %3s %3s %3s %3s %s\n" \
        "------------------------------------" "----" "---------" "----------" "--------" "---" "---" "---" "---" "------"

    nvme id-ctrl "$dev" | awk -v m="$model" '
    BEGIN {
        green="\033[1;32m"
        reset="\033[0m"
    }
    /^ps[ \t]+[0-9]+/ {
        ps = $2
        active = (ps == "0") ? "*": ""
        color_start = (ps == "0") ? green : ""
        color_end = (ps == "0") ? reset : ""
        display_model = (ps == "0") ? m : ""
        match($0, /mp:([0-9.]+W).*enlat:([0-9]+).*exlat:([0-9]+)/, a)
        max_pwr=a[1]; enlat=a[2]; exlat=a[3]
        getline
        match($0, /rrt:([0-9]+).*rrl:([0-9]+).*rwt:([0-9]+).*rwl:([0-9]+)/, b)
        rt = (b[1] == "" ? 0 : b[1])
        rl = (b[2] == "" ? 0 : b[2])
        wt = (b[3] == "" ? 0 : b[3])
        wl = (b[4] == "" ? 0 : b[4])
        printf "%s%-36s %-4s %-9s %-10s %-9s %3s %3s %3s %3s %s%s\n", color_start, display_model, ps, max_pwr, enlat"ms", exlat"ms", rt, rl, wt, wl, active, color_end
    }'

    echo "==============================================================================================="
    echo
    echo
done

Now what to do next? :laughing:

Ah! :bulb: maybe work out average power consumption over time by polling each drive every 1-2 secs over 1-5 minutes?

May be difficult as TrueNAS doesn’t allow the “bc” command in the shell or PuTTY.

Maybe an expert can point me in the right direction?
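Not an expert answer, but awk can stand in for bc here, so no extra tools are needed: pipe the sampled per-drive wattages into a one-liner like this (the three values below are just stand-ins for real samples).

```shell
# Average a stream of sampled wattage readings; awk does the floating point,
# so the missing "bc" is not a problem
printf '7.54\n5.00\n9.00\n' | awk '{ sum += $1; n++ } END { printf "avg %.2f W\n", sum / n }'
```

For the sampling side, `nvme get-feature /dev/nvme0 -f 0x02` reports the drive’s current power state (feature 0x02 is Power Management), which a loop with `sleep 1` could translate into the matching PS wattage before feeding it in.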

2 Likes

Nice script! Small adjustment for an arbitrary number of drives:

DEVICES=$(nvme list -o json | jq -r '.Devices[].DevicePath')

for dev in $DEVICES; do

I have been using this with 5 Crucial P3 Plus drives (4 x 2TB, plus a 500GB for the OS) during our latest Dutch heatwave, in a room with an ambient temperature of ~27-29°C, and it has been running fine for at least a week without hiccups. It acts as an hourly Time Machine backup and file server here.