5x / 6x / 8x SATA M.2 PCIe Cards

IIRC, those were Marvell. And one of them needed a special firmware update that was unobtainium other than through iX. And it still caused trouble occasionally.

Intel AX210

“Unraid” and “stability” do not go together without glue in the form of the negation prefix “in”. As for TrueNAS, Citation Needed.

LOL. Give me a break! People have been using unRaid reliably for a long time. But sure, make uneducated assumptions if that makes you feel better.

Actually, they do, by virtue of their market position as feedstock for Super China Happy Sun products. There is literally no other use for them.

SATA port multipliers go back over two decades and allowed many more hard drives to be connected to a SATA controller at a time when putting that many SATA controllers in a server just wasn’t possible. The technology was created for a use case, and it still exists today because that use case remains for some. It isn’t inherently wrong just because you say it is. If implemented properly, it shouldn’t cause instability on decent controllers that are designed to handle it. The fact that it is poorly implemented in many cheap products from China doesn’t make it inherently bad.

Perhaps one day better port multiplier support will exist in TrueNAS.

Fact check, bro. Intel’s AHCI specification does support port multiplication. That doesn’t mean it’s implemented in all their products. Possibly not in any current ones, but it does remain in the specification. What people spend their time and money on at the end of the day doesn’t depend on your opinion. You are welcome to your opinion, but at least stick to facts if you are going to present things as fact.

I didn’t expect the first entry on my ignore list here would happen so soon. Ah well.


Name a single serious server that uses a SATA port multiplier. I will not hold my breath.

Try “not implemented in any products”. As a representative sample, here’s a quote from the Lynx Point/8-series PCH datasheet:

Port Multiplier Port (PMP)—R/W. This field is not used by AHCI

Why is that relevant? That’s the only mention of Port Multipliers outside of some remnants in the GPIO portions of the datasheet. That’s Intel stating “we don’t support Port Multipliers as defined in the AHCI specification in this present AHCI implementation”.
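For anyone who wants to check their own controller against the spec rather than the datasheet: the AHCI specification advertises Port Multiplier support through the SPM bit (bit 17) of the HBA’s CAP register. A minimal sketch of decoding that bit from a raw CAP value follows; the bit position is per the AHCI 1.3 spec, but the sample values are made up for illustration:

```python
# Decode the "Supports Port Multiplier" (SPM) capability bit from an
# AHCI HBA CAP register value, per the AHCI 1.3 specification (CAP, bit 17).

SPM_BIT = 17  # CAP.SPM: Supports Port Multiplier

def supports_port_multiplier(cap: int) -> bool:
    """Return True if the HBA advertises Port Multiplier support."""
    return bool((cap >> SPM_BIT) & 1)

# Hypothetical register values, not read from a real controller:
print(supports_port_multiplier(1 << 17))  # True
print(supports_port_multiplier(0))        # False
```

In practice you rarely need to decode this by hand: on Linux the ahci driver prints the decoded capability flags in dmesg (the `flags:` line includes `pmp` when SPM is set), which is a quicker way to see what your chipset claims to support.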


On a moderation note, I will treat any further statements of fact on port multipliers which are not accompanied by relevant citations as being made in bad faith.

Those with an ASM1166-based controller may want to delay upgrading to Dragonfish


Not entirely clear to me if it’s a controller issue or if there’s a port multiplier in the mix. I think some folks are starting to see port multipliers where there are only crappy AHCI controllers, which makes things hard to follow…

I am the author of the post @Stux referenced above (although now I’ve seen others reporting the same issue). In my case the product is the “MZHOU PCIe SATA Card 8-port” (link), which I’ve come to understand (mainly from this blog post) is based on the ASM1166 SATA controller (a cheap SATA controller with less bandwidth than some, but no port multiplier itself), combined with a port multiplier to get the extra ports.

So I’m not sure that the issue would occur with an ASM1166 card without the port multiplier. There are now multiple bugs filed by different people (NAS-128478 being the main one, the others are duplicates of it) so hopefully that will help disambiguate the specific hardware affected.
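For anyone trying to work out whether their own card falls into this category: on Linux, libata typically logs a discovered port multiplier in dmesg on link 15 of the host port, with a line like `ataN.15: Port Multiplier …`. A quick sketch that scans a captured dmesg log for those markers; the log excerpt below is illustrative, not taken from the affected card:

```python
import re

# libata addresses devices behind a port multiplier as ataN.00..ataN.14,
# and reports the multiplier itself on link 15 ("ataN.15: Port Multiplier ...").
PMP_RE = re.compile(r"ata\d+\.15: Port Multiplier", re.IGNORECASE)

def find_port_multipliers(dmesg_text: str) -> list[str]:
    """Return the dmesg lines indicating a discovered SATA port multiplier."""
    return [line for line in dmesg_text.splitlines() if PMP_RE.search(line)]

# Illustrative log excerpt (made up for this example):
sample = """\
ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata6.15: Port Multiplier 1.2, 0x197b:0x0575 r0, 5 ports, feat 0x5/0xf
ata6.00: ATA-10: WDC WD80EFAX, 83.H0A83, max UDMA/133
"""
for line in find_port_multipliers(sample):
    print(line)
```

If a scan of your real dmesg output turns up nothing like this, the extra ports are coming from the controller itself rather than a multiplier, which narrows down which bug report applies.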

FWIW, though, in this and the other thread, as a “casual” user of TrueNAS (shared storage and backups for my kids, etc.) I found the snarky comments around port multipliers rather helpful. I’ve always thought that if they work, they work, and they just might be slower and less reliable than enterprise gear. I’m not gonna ever have ECC RAM or enterprise SSDs in my home TrueNAS server, because it is always going to be based on my kids’ last gaming PC that finally got too old to work for games. (Currently, I’m rocking an ASRock Z390 with a 9th-gen Intel CPU.) So I didn’t think I needed an HBA either.

But the “well, duh, you’re using a port multiplier, bro” comments did spur me to dig into the docs/blogs a little more and — TL;DR — after reading various info and experiences, I did, in fact, order a used LSI HBA. (Or what purports to be one anyway — I live in Japan, and the eBay ones plus shipping are quite expensive for us here, especially with the collapse of the JPY, so what I ordered might be a fake — we’ll see.)

EDIT: Also, if the problem does happen to your machine, it is immediately obvious and you can simply downgrade using the web UI to 23.10 and all the data is fine and usable again. So, not a big risk in updating.


@axhxrx thank you for the candid reply

I’m not trying to disparage people’s hardware, but rather work out what works and what doesn’t. Unfortunately there are very good reasons why Port Multipliers are a bad idea; what I’m not so convinced about is that all SATA controllers are inherently flawed.

At least by having this thread, we can try and get some up-to-date experiences from people now that TrueNAS is based on Linux, rather than BSD, which does make a difference to its hardware support.

Hopefully we can get to the bottom of it… Personally, if I needed a couple of extra SATA ports in my build I would probably use a SATA M.2 card… but it seems that Port Multipliers are simply a no-go, especially at the moment with Dragonfish.

Sure. To be clear, I didn’t find your comments snarky at all, I was more referring to the memes like this. (But even those kinds of things are actually pretty helpful to help figure out Hmm should I allocate time to understanding this better?)


The point is that no one is playing around but truly using those adapters: one thing is saying “everyone, here is this new thing, I will test it and come back to you”; another is “Hey, why don’t things work? I’m using this cheap thing, btw”.

We welcome everyone willing to test and give us feedback, even when it has been proven multiple times that something does not work; we do not welcome people making baseless claims.

If you see things that way, you have an issue with how you view things. Speaking from experience and knowledge, and suggesting proper, reliable hardware, is one of the strengths of this community; are you expecting a tech forum not to help and not to educate?

You do not build a NAS with a mini pc here.

Either you have not looked into the old forum or you don’t know how to search.

I would argue that choosing unsuitable hardware is bad planning.

Everything has already been answered by eric, and I will just say that your attitude is totally wrong if you care about this community (which you clearly do not) and want to make a change.

That’s a frequent dabbler’s honest reaction when seeing someone using port multipliers; I would not see that as snarky.


JGreco’s resource about port multipliers does state that some, but not all, Marvell or ASMedia SATA controllers are acceptable. If you’re absolutely desperate to add a couple of SATA ports in the smallest possible footprint, you may want to dig further with that but it’s going to be more validation trouble than a $50 M.2 card is really worth. That energy would have been better spent trying to find a small footprint motherboard with enough native SATA ports.

All occurrences of port multipliers are bad. Period.

Chipset ports are good.
LSI 2008/2308/3008 and their OEM versions from Dell/HP/IBM/Lenovo are the way to go to add more ports.

These are simple and safe rules. HBAs can be found second-hand for less than what a “good” (or “acceptable”) SATA-ports-on-M.2-stick costs new.


This is true. I read the specs. It’s incompatible with ZFS’s idea of a reliable and queueable block device.


Allow me to jump in here.

We can certainly encourage/discourage certain component choices, but ultimately each user is responsible for their own decisions, their own level of acceptable risk tolerance, and their desired system form factor/budget.

It’s important to note that M.2 is only a form factor decision - bite-sized PCIe is still PCIe - but the crux of the problem rests on three major issues:

  1. The chipset used to provide SATA connectivity (including driver support)
  2. Manufacturing quality of the device itself
  3. Presence or absence of a port multiplier

While SCALE has expanded the playing field relative to CORE (where “Just Buy LSI” was the rhyme of reason) - there are still definitely “tiers” of driver support and functionality within the Linux kernel, and the vast majority of M.2 SATA controllers use a “lower tier” chipset. Intel and AMD are preferred over all others, Marvell and ASMedia share the next space, and JMicron brings up the rear (but has made improvements recently.)

A poorly manufactured or improperly cooled device can of course contribute to instability - whether the chip responds by temporarily throttling commands or running to thermal failure though is another question. M.2 slots traditionally do not get the same quantity of airflow as traditional PCIe, and this is exacerbated by the “cable spaghetti” that comes from these cards. Prefer cards with heatsinks, if possible.

Finally, port multipliers - the thorniest piece of this argument. Quite simply, they are to be avoided for multiple reasons.

The first is the bandwidth argument - the most common multipliers are 1:5 - so one SATA port worth of bandwidth being spanned across five downstream devices. That means your 600MB/s of theoretical SATA3 bandwidth is cut into 120MB/s per device - a definite limitation for SATA SSDs, but it can also be a limiter on sequential I/O to spinning disks. With sequential resilver and scrubbing, this is a very real thing to hit.
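The arithmetic above is easy to sanity-check. A small sketch, assuming the usual ~600 MB/s usable figure for SATA3 (6 Gb/s line rate after 8b/10b encoding) and the worst case of every device streaming at once:

```python
# SATA3 signals at 6 Gb/s; 8b/10b line coding leaves roughly 600 MB/s usable.
SATA3_USABLE_MB_S = 600

def per_device_bandwidth(devices: int) -> float:
    """Worst-case sequential bandwidth per device behind a 1:N port
    multiplier, assuming all devices stream simultaneously (as they do
    during a scrub or sequential resilver)."""
    return SATA3_USABLE_MB_S / devices

print(per_device_bandwidth(5))  # 1:5 multiplier -> 120.0 MB/s per disk
```

Given that current 7200 rpm drives can sustain roughly 200+ MB/s on their outer tracks, 120 MB/s is a real ceiling even for spinning disks, not just for SATA SSDs.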

The second is the nature of the port multiplier itself - if it’s using “Command-Based Switching” then you can basically have only a single command queued against any of the devices behind the port multiplier, and all others are blocked. This means that your performance tanks even more than the bandwidth cap, because it’s like having a common media akin to a “hub” vs a “switch” in the old networking sense. If it’s using FIS-based (hardware) switching, then it’s able to queue up against multiple devices, but it’s still sharing the bandwidth.
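A toy model of that queuing difference, purely illustrative: treat each command as pure latency (e.g. a random-read seek), and assume command-based switching (CBS) allows only one outstanding command across the whole multiplier while FIS-based switching (FBS) lets each disk work on its own command concurrently:

```python
# Toy model, not a real simulator: N commands spread evenly over D disks,
# each command costing `latency` seconds of disk time.

def cbs_time(commands: int, latency: float) -> float:
    # Command-based switching: only one command in flight anywhere behind
    # the multiplier, so every command serializes.
    return commands * latency

def fbs_time(commands: int, disks: int, latency: float) -> float:
    # FIS-based switching: each disk can have its own command in flight,
    # so latency overlaps across disks (upstream bandwidth sharing ignored).
    return (commands / disks) * latency

# 100 random reads at ~8 ms each, across 5 disks:
print(cbs_time(100, 0.008))     # 0.8 s
print(fbs_time(100, 5, 0.008))  # 0.16 s
```

Even in this idealized model FBS is only a 5x improvement on latency overlap; it still shares the one upstream link’s bandwidth, which is why it’s an improvement rather than a fix.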

And finally, if a single device behind a SATA port multiplier hangs up or fails to respond to a command, it could (“will” in the command-based switching model) cause all devices behind the PM to be non-responsive. Not exactly a good thing as RAIDZ6 doesn’t yet exist. :wink:


This is absolutely the key.

I have a few machines that are 1 liter or smaller and have some data they share with other machines on the network. I would never consider calling them a “NAS”. Maybe someday when the bugs on the multi-M.2 micro motherboards are worked out, we can have a real NAS in a tiny form factor.


Linking to a similar thread on old forum for posterity


I think something many of the moderators here ignore is that there are plenty of people who use TrueNAS in unsupported, unrecommended configurations at home who also use it in a professional enterprise setting. I was an Isilon integrator before eventually using TrueNAS in many projects. My issue isn’t that people have differing opinions.
My issue is with the often condescending attitude of some. This incessant whining not to do something a particular way in someone’s home lab experiment. Sometimes the responses by moderators are plain rude, if not helpful at all. I mean, I was threatened with action by @ericloewe, for merely stating my opinions supported by facts, for not citing my sources. Despite those facts being pretty easy to find with a simple Google search, I didn’t see anyone else citing anything. I was then promptly suspended from this forum, for which I appealed and got reinstated pretty quickly. Granted, he blacklisted my IP too, so now I have to log in with a VPN. I’m not holding my breath that I won’t be promptly suspended again just for posting this reply, but I feel that it needs to be said.
At this point I am not going to respond to every single point in your reply because honestly it seems it will just be a never-ending back and forth. I will respond to one point though…

You do not build a NAS with a mini pc here.

LOL. “You don’t”. I do and I will. I have every intention of doing so for my own home-built DIY solution for non-critical data. I’ve ordered a Dell Optiplex 3000 thin client (N6005) which I will be using with an ASM1166 M.2-to-6x-SATA adapter in a custom 3D-printed case with 6 SATA drives in a RAIDZ2 pool. Why? Because I want something low power for home lab use and it’s a fun project that I will document on my YouTube channel. Don’t worry, I will document and post on here too so I can listen to a whole bunch of people ridicule me on why I shouldn’t be doing that. Assuming I don’t get suspended or banned again, LOL.

There are ones that work reliably; it just requires some research and thorough testing. Plenty of people have home lab builds that they use for non-critical data. Home lab or not, critical data or not, it is still a NAS. I have done several non-supported, non-recommended builds for personal use that are still working great many years later and have proved to be far more resilient and reliable than your standard off-the-shelf NAS RAID solutions. What it all really boils down to is people like you saying TrueNAS is not for DIY home-build use unless done the same way one would deploy an enterprise solution. The reality is that neither you, nor anyone else, gets to decide that. You are of course welcome to your own opinion and to caution all day long as to why people shouldn’t.

Also, I just want to say a big thank you to @HoneyBadger for responding with something informative, thoughtful and helpful! While I have come across most of this in my own research, I never really got to the point of explaining any of it through the barrage of fairly condescending messages saying TrueNAS is not for that. Not to mention being suspended unjustly for having opinions that disagree with a moderator’s. We need more people like you offering helpful advice and information instead of just condescendingly telling people not to do unsupported builds in their home lab environments. Sure, caution against it politely if you will, but the rudeness needs to stop. Thanks again for a breath of fresh air on here.


I have to say I agree with @William_Steele here.

There is nothing wrong with someone building a TrueNAS system with 4x 18TB disks on the oldest, slowest 64-bit Intel processor you can find, only 8GB of non-ECC memory, an ancient RAID HBA with SATA 1 channels, and without any redundancy whatsoever (i.e. a terrible spec which is asking for trouble), PROVIDING that they understand just how badly it will perform, just how at-risk their data is, and just how little support they will get when it all goes wrong.

Of course if money was no object, and physical space was no object, and electricity usage was no object, we would all have Rolls-Royce rack-mounted bespoke-built enterprise-server-based solutions with RAIDZ3 and hot spares and dual power supplies and dual network infrastructure, and building-wide UPS. But home users, small businesses, hobbyists, generally have constraints on the money, space, time, effort etc. and are willing to make some compromises on security to get what they need within the cash and physical space etc. they have.

These compromises are mostly going to mean less redundancy than you might like, perhaps single points of temporary failure, and requiring double failures to lose data. Since many of us store e.g. media files which are non-critical, and still have much better redundancy than the non-redundant single disk the data was previously on, these can easily be levels of risk we can accept, because they are already significantly lower than the risk we previously accepted.

There is nothing wrong with making compromises to get a usable system with acceptable levels of redundancy vs. risk within the budget and space you can afford - just so long as you understand and accept the risks created by the compromises you choose.


P.S. That said, I think it is probably quite easy for inexperienced users to cobble together a system that has the right number of ports but which is going to perform poorly or put their data at risk of corruption. In other words, they won’t understand the risks they are creating.

Indeed, despite having been in IT since before DOS PCs existed, and despite having done masses of projects including huge RAID subsystems and complex networks, even I almost succumbed to the “throw together whatever hardware has the right number of ports” trap, and it was only the fortunate fact that the motherboard I first planned to use was dead-dead-dead that pushed me to buy a RAID appliance instead and accept a couple of limited compromises rather than something that was architecturally flawed.

It is far easier to give a simple rule of thumb (use LSI HBAs flashed to IT mode for SATA ports) and avoid a whole bunch of unknown potential gotchas resulting in data loss at a later date.
