5 x 4TB NVMe SSD RAIDz1 Array Repeatedly Corrupts

I attempted to build a 5 x 4TB NVMe SSD RAIDz1 array on my Beelink ME Mini numerous times. Every attempt failed.

Here are further details about my setup (hardware and software), what I observed, and the many ways I tried to get this setup working:

I purchased a Beelink ME Mini on 11 July 2025. It arrived on 26 July 2025.

I also purchased five Crucial P310 4TB PCIe Gen4 2280 NVMe M.2 SSDs and installed them in the ME Mini. The ME Mini that I purchased came bundled with one Crucial P3 Plus 2TB NVMe M.2 2280 SSD.

I installed the TrueNAS Scale 25.04 OS on that 2TB SSD and booted from it.

Here are the trouble symptoms I observed:

  1. No matter how many times I tried, I could never get 3, 4, or 5 of those brand-new Crucial 4TB NVMe SSDs to live in a RAIDz1 array for very long without the array being reported as corrupt by the TrueNAS OS. It would report over 100 errors on 2-3 of the 4TB Crucial NVMe SSDs. I completely deleted and re-installed the TrueNAS Scale 25.04 OS and rebuilt the RAIDz1 array numerous times, and observed the same rapid deterioration of every 5-disk RAIDz1 array I built and rebuilt. Sometimes the corruption would happen while the array was first being constructed. Other times it wouldn't start until I had copied 300+ GB worth of my files onto it and then started to play my media or open a file. But the array corruption happened every time.

  2. Further, every time, one to three of the five SSDs would get marked as failed, and shown as offline or unreachable, by the TrueNAS Scale OS. At that point I invariably found that, for each SSD that TrueNAS had marked as failed, I could no longer get any data back with smartctl or sensors. (See the zpool/smartctl sketch after this list for the kind of checks I mean.)

  3. Every time, I was able to take the Crucial 4TB NVMe SSDs that the TrueNAS OS had marked as failed - and that had become unreachable via smartctl - out of the ME Mini, put them into a USB-C (USB 3) external NVMe drive enclosure attached to another computer, and partition and lay a filesystem on each reportedly failed SSD. I found that I could write the same 300+ GB of files onto it, and read and play files from that drive, for over 24 hours before I stopped testing, with no problems at all. (See the partition-and-copy sketch after this list.)

  4. I found that different SSDs, in different ME Mini drive bays, were getting marked as failed as I re-tested building new RAIDz1 arrays. But every 3-, 4-, or 5-disk 4TB Crucial NVMe SSD RAIDz1 array went corrupt, and TrueNAS would usually mark at least two SSDs as failed.

  5. So I purchased five Kingston SKC3000D/4096G M.2 2280 SSDs and tried them in the ME Mini. I observed the same results as reported above for the Crucial 4TB SSDs, but this time with the Kingston 4TB NVMe SSDs.

  6. Even though it was not what I needed, and just for test purposes, I built first one and then two 2-SSD SKC3000D RAID1 arrays. Then I tested writing and reading data to and from both arrays, including over a Samba share. I observed no trouble with either 2-SSD RAID1 array in the ME Mini over more than 24 hours of testing.

  7. Then I tried replacing the TrueNAS OS on the ME Mini, first with Fedora Server 42 and then with OpenMediaVault 7.x. Both were the latest stable downloads available for those OSes when I downloaded them. On each, I tried building a RAID5 array with 5 of the Kingston 4TB SSDs and observed corruption of the array before, or very shortly after, I ran the command to build that five-SSD RAID5 array (roughly the mdadm sketch after this list). The very same pattern - reported array corruption, tons of drive errors, and 1-3 Kingston 4TB SSDs being marked as failed - happened again very quickly under both of those OSes on the ME Mini.
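
To make items 1 and 2 concrete, here is a minimal sketch of the zpool/smartctl checks I mean. The pool and device names are just examples, not my exact layout, and the nvme commands assume the nvme-cli package is available:

```
# Pool health and per-device read/write/checksum error counters
zpool status -v tank

# SMART data for one of the NVMe drives (this worked until TrueNAS marked
# the drive failed, after which the drive stopped answering)
smartctl -a /dev/nvme0n1

# The NVMe controller's own smart and error-log entries (nvme-cli)
nvme smart-log /dev/nvme0
nvme error-log /dev/nvme0

# Board temperatures
sensors
```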
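
The out-of-band test in item 3 was nothing exotic - partition, filesystem, mount, then copy the same data back and forth. Roughly this, with the device name, mount point, and source path as placeholders:

```
# One big GPT partition, ext4, mounted for testing
parted -s /dev/sdX mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdX1
mount /dev/sdX1 /mnt/test

# Write the same ~300 GB of files onto the "failed" drive, then read them back
rsync -a --progress /path/to/my/files/ /mnt/test/
```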
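
For item 7, a five-disk RAID5 on Fedora Server or OpenMediaVault is normally built with mdadm, and the build command I am referring to is essentially this (device names are examples):

```
# Create a 5-disk RAID5 array from the Kingston NVMe drives
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Watch the initial sync and the array state
cat /proc/mdstat
mdadm --detail /dev/md0
```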

I really like the ME Mini form factor and design - with the metal frame and thermal tape between that and all six internal NVMe drives. That said, I purchased the ME Mini to use as a NAS, with the drive-failure resiliency of at least RAID5 or RAIDz1. That is why, so far, this first-time ME Mini ownership experience is not working out for me at all.

Have you ever heard of such a set of symptoms?

Are there some TrueNAS Scale 25.04 settings that I should, or could, have tried?

Is there some firmware patch or BIOS setting that I missed that would make this setup, as described above, work?

TIA for your ideas.


Wow, sorry to hear about your issues. I'm providing the following to at least show that what you're trying to do can be done.

My ME Mini is running TrueNAS Scale version 25.04.2.1 with (6) Crucial P3 2TB PCIe Gen3 NVMes, and I'm booting from the eMMC. As my array is a striped mirror, this appears to be the only difference between our systems.
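
For anyone comparing layouts: a striped mirror over six disks is just three two-way mirror vdevs striped into one pool, so the equivalent zpool layout is roughly this (device names are illustrative, not taken from my system):

```
# Three 2-way mirror vdevs striped into a single pool
zpool create tank \
    mirror nvme0n1 nvme1n1 \
    mirror nvme2n1 nvme3n1 \
    mirror nvme4n1 nvme5n1
```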

Thus far, after 2 days of beating on the system, I've seen no issues.

My average NVMe and CPU temps hover around 45C and my max power usage thus far is ~30 watts. As expected, I cannot get the max out of the drives, as I saturate the NICs pretty quickly since they are connected to a 1 GbE switch.

The only thing I cannot get to work reliably is LAGG to aggregate the two NICs. It works, but after about 5 minutes the NICs go belly up. Not sure if this is a Unifi or ME Mini problem. At some point I'll probably do some more testing against a different brand of switch to help identify the culprit.
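
When I get back to the LAGG issue I'll probably start by watching the bond state from the SCALE shell while it drops out, roughly like this (assuming the LAGG shows up as bond0, which I haven't verified on my box):

```
# LACP/bond member state as seen by the Linux bonding driver
cat /proc/net/bonding/bond0

# Per-interface error and drop counters
ip -s link show
```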

The only thing I did with regard to the BIOS was to disable CSM, Secure Boot, and the wifi/bt card.

Hope that helps. Thus far I'm very happy with the ME Mini.