5x / 6x / 8x SATA M.2 PCIe Cards

I know port multipliers are bad, and all that, but dang these things are getting interesting

https://www.amazon.com/Port-Non-RAID-SATA-Adapter-bandwith/dp/B0BGJPDL8N/

“Interesting”, yes. But the million-dollar question: would you trust your precious data to it? Would you trust it to work reliably in the long run for your ZFS pool?

I think, long run, with Scale being Debian-based, iX is going to have to put effort into making sure TrueNAS works well with these things.

The vast majority of users are just going to bang whatever source of SATA ports they can get into a chassis. And that’s if they aren’t just using a USB enclosure.

The days of people getting LSI HBAs as a matter of course are probably over.

1 Like

No, I doubt that iX needs to test these devices and directly state that they are supported.

Part of the problem is that ZFS scrubs might heat those chips up to the point of irregular behavior, meaning errors could pop up that are not on the storage / disks themselves. Potentially solvable with miniature heat sinks…
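The adapter’s chip won’t report its own temperature, but you can at least keep an eye on the drives behind it while a scrub runs. A rough sketch using smartmontools’ `smartctl` (the device names and the 50 °C threshold are my own placeholders):

```python
import subprocess

# Hypothetical device list -- adjust to the members of your pool.
DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

def drive_temp(dev: str) -> int | None:
    """Return the temperature from SMART attribute 194, if present."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])  # RAW_VALUE column in the common layout
    return None

for dev in DEVICES:
    temp = drive_temp(dev)
    if temp is not None and temp > 50:  # arbitrary warning threshold
        print(f"WARNING: {dev} reports {temp} C during scrub")
```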

Another part of the problem is that these things may show up every month or two. Buying them and stating they work, for non-paying customers, is outside normal business practices.

Now, if it were a more normal PCIe SATA expansion card, yes. And if an M.2 version of that PCIe card, using the same chip(s), were to be made, a simple “May work based on chip(s) used; no guarantee of suitability with TrueNAS can be made.”

3 Likes

First review I see is titled “Works out of the box with TrueNAS Scale on a GMKtec G3 N100”; dated 1st March 2024.
Really wish people would read the available resources.

The next review under that is titled “worked for a while then no longer detect SATA drives”

:wink:

I wouldn’t exactly entrust my data on a ZFS pool to that adapter.

3 Likes

Totally agree; moreover, we know for a fact that HBAs can get really hot… that thing does not even have a heatsink.

2 Likes

Some of the other ones do have heat sinks. Seems to be a point of difference.

What’s “interesting” when you can get a proper LSI HBA for less?
It’s not even clear how this thing is built. Two PCIe x2 4*SATA controllers on a switch? I do not see the switch.

It will attract kiddies who do not know about HBAs, who are positively afraid of HBAs, who know nothing outside of consumer/gamer-grade hardware, and/or who build their NAS around a gamer motherboard with gobs of useless M.2 but not enough PCIe slots. None of that is conducive to a good ZFS NAS. And there’s no hope of ever being able to test and validate the endless stream of dubious SATA controllers coming out of Shenzhen.

Allegedly it uses the JMB585. Also, they state a maximum sequential read/write speed of 850 MB/s.[1] (Some quick per-drive math below.)

But yeah, it uses FIS-based switching[2] so… beware, here be dragons!


  1. 8 port Non-RAID SATA III 6Gbp/s to M.2 B+M Key Adapter PCI-e 3.0 x2 bandwith - SI-ADA40170 ↩︎

  2. aka port multipliers ↩︎
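Putting numbers on that 850 MB/s: behind a single PCIe 3.0 x2 uplink, eight drives being scrubbed or resilvered at once don’t get much each. A back-of-the-envelope sketch (the per-HDD sequential figure is my own rough assumption):

```python
# Vendor-quoted aggregate throughput of the card (PCIe 3.0 x2 uplink).
AGGREGATE_MB_S = 850

# Rough sequential throughput of one modern 3.5" HDD -- an assumption.
HDD_SEQ_MB_S = 250

for drives in (5, 6, 8):
    per_drive = AGGREGATE_MB_S / drives
    print(f"{drives} drives active: ~{per_drive:.0f} MB/s each, "
          f"{per_drive / HDD_SEQ_MB_S:.0%} of a single HDD's sequential rate")

# 8 drives active: ~106 MB/s each, 43% of a single HDD's sequential rate
```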

2 Likes

If it’s a 5-port SATA controller with a multiplier to reach 8, it’s not suitable for safe use with ZFS. Period.

1 Like

Which is a pity, because the controller seems good. Give me an M.2 stick with just the JMB585 and the product suddenly becomes doable: 5 SATA ports for an NVMe slot allows me to use my PCIe slots for other things (like NVMe or 10Gig cards).

Actually, give me two JMB585s in order to use a single PCIe 3.0 x4 slot for 10 SATA ports.

Looked again at the controller’s diagram: it has RAM… and now I do wonder about potential corruption of data, corruption that would not be known to ZFS. Can anyone chime in about this?

I doubt this is ECC

[Screenshot: the controller’s block diagram, showing its on-chip RAM]

IF the controller behaves, and port multipliers are kept out, it might be usable.
But do we want people to actually use it, come for help… and then we have to dive into the details of how every single port is wired and keep track of which drive is connected to what? If not dragons, there are at least major headaches ahead.

Two JMB585s and a switch for 10 ports on an M.2 stick? Not without special, dense and tiny connectors; as pictured above, two SFF-8087 get us to 8 on an M.2 2280, with no room for two more ports. On a PCIe card? Yes, there’s space… but then a regular LSI HBA would provide 8 ports, extendable at will with a $30 expander. (These can work with a 4-pin Molex for power; no need to plug them into a slot.)

At worst, if the motherboard has too few PCIe slots available and too many M.2, there are these:

2 Likes

Bookmarked.

What effort can they put into making sure this stuff works? So many of the concerns are fundamentally at the hardware level for a device that’s never going to be solid enough to actively recommend or provide as an endorsed solution.

There is a big difference between using these sorts of adapters on a non-mission-critical, non-performance-critical, non-data-integrity-critical system, and using it on a NAS.

Just because one user has found that it works with TrueNAS when they first tried it, doesn’t mean that it is a good idea for long-term mission-critical data.

And realistically, no software developer can ever test or support the myriad variations of technology that Chinese manufacturers put onto the market.

(As an extreme fun example, see this section of a Linus Tech Tips video, “The WEIRDEST PC Parts we Found on AliExpress”, where Linus cobbles together the weirdest chain of adapters to connect a network card. Imagine the same thing for a SATA disk, and then ask how any organisation could support even a fraction of this.)

2 Likes

It’s already happening. People are coming to these forums with these things.

So, I’m gathering:

  1. Port multipliers: bad. (Yes, a multiplier basically stops all the other drives behind it from responding while one responds, thus dropping drives. This is bad for ZFS.)

  2. M.2 is not a PCIe slot. This is not a valid reason; OCuLink is also not a slot.

  3. No heatsink means overheating. But not all of them lack a heatsink.

  4. RAM corruption. Also not an issue: ZFS protects against data corruption en route from its ECC RAM to the storage. (See the sketch after this list.)

  5. Reliability. Yeah, that’s your problem.

  6. SATA port adapter. Apparently that’s not the problem either.
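On point 4, the reason in-flight corruption is something ZFS can usually catch: every block is checksummed end to end, and the checksum is stored in the parent block, out of the controller’s reach, so a buffer that silently mangles data on the way down is detected on the next read or scrub. A toy sketch of the principle (not ZFS’s actual on-disk format):

```python
import hashlib

def write_block(data: bytes) -> tuple[bytes, bytes]:
    """Store a block; the checksum is kept separately (in ZFS,
    in the parent block), where the controller can't touch it."""
    return data, hashlib.sha256(data).digest()

def read_block(stored: bytes, checksum: bytes) -> bytes:
    """Verify on every read; a mismatch means silent corruption."""
    if hashlib.sha256(stored).digest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return stored

data, csum = write_block(b"precious pool data")

# A misbehaving controller buffer flips one bit on the way to the platter...
corrupted = bytes([data[0] ^ 0x01]) + data[1:]

try:
    read_block(corrupted, csum)
except IOError as exc:
    print(exc)  # checksum mismatch: silent corruption detected
```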

Sure. It had already happened in the old forum :cry:, typically in “HELP! My pool is degraded” threads, solved by a detailed hardware listing showing the culprit.

M.2 is a slot which can provide 4 PCIe lanes… but not necessarily enough power for demanding devices. You couldn’t power a SAS HBA out of an M.2 slot, so these do not come in M.2 form factor.
A well-behaved, low-power SATA controller on M.2 might be acceptable, but it’s hard to sort the possibly acceptable from the disaster-waiting-to-happen.
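The power side is easy to sanity-check with ballpark figures (all three numbers below are commonly cited approximations I’m assuming, not datasheet quotes):

```python
# Ballpark figures -- assumptions, not datasheet quotes.
M2_SLOT_W = 3.3 * 2.5   # ~8.25 W: commonly cited M.2 socket budget (3.3 V rail)
JMB585_W = 2.0          # low-power SATA controller, roughly
SAS_HBA_W = 12.0        # typical LSI 2008/3008-class HBA under load

print(f"M.2 slot budget: ~{M2_SLOT_W:.2f} W")
print(f"JMB585-class controller fits: {JMB585_W <= M2_SLOT_W}")   # True
print(f"SAS HBA fits: {SAS_HBA_W <= M2_SLOT_W}")                  # False
```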

“Get a SAS HBA and do not touch these M.2 thingies” is a simple and SAFE message for ZFS builds.

2 Likes

The real threat is mostly a port multiplier or a hardware RAID controller.

Uhm, not always! That’s one of the issues with using hardware RAID controllers with cache… if ZFS believes it has written X but the controller writes Y, the checksum might not catch it. That’s why it’s crucial to use ECC.

1 Like

Yeah. That’s again a different thing.

Drives and controllers must correctly implement sync writes.

Controllers can have a memory buffer, but only so long as they don’t acknowledge a write until it has actually been committed.
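For reference, “correctly implement sync writes” means the device must not acknowledge a synchronous write until the data is on stable media; it’s the same contract an application gets from `fsync`. A minimal sketch:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write and only return once the data is on stable storage.
    A drive or controller that acks before its cache is flushed
    breaks this same contract further down the stack -- and ZFS
    has no way to tell until something goes wrong."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # blocks until the device reports the data durable
    finally:
        os.close(fd)

durable_write("/tmp/sync_write_demo.bin", b"sync write payload")
```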

Again, this is part of my point. People are going to use these things (and the PCIe equivalents). The question is which ones are broken, chipset-wise, versus which are merely crap.

2 Likes