I don't need a HBA?

I am thinking about upgrading the HBA in my small home server (it’s running ESXi, but realistically I am only using it for TrueNAS and haven’t found another use case yet /I will eventually/; it’s a remnant from the original all-in-one setup when the server also ran pfSense, which turned out not to be the best idea) because of a combination of factors, like having an irrational upgrade itch and wanting to eventually lower the power consumption where I can. There is nothing wrong with the current LSI 9400-8i, but after seeing the 9500-8i drawing only about 5 W, I thought, ok, I might like this. It’s also NVMe-capable. The card is ridiculously cheap too, so I figured why not; I might even get a speed boost I don’t really need, but NVMe is love.
Well, after asking around about how I would best mount M.2 drives in a case, looking for an adapter of some kind, someone pointed out that I supposedly don’t need an HBA at all, and that’s where I got super confused, because I always thought an HBA is a must for a virtualized setup.
Can anyone shed some light on this, please?

I don’t know why you would have thought that. What’s mandatory is that you’re able to pass through the drive controller to TrueNAS. In the case of NVMe drives, the drive is its own controller, so passing it through should be fine. There’s absolutely no reason to use a so-called “tri-mode” HBA.
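To make the distinction concrete, here’s an illustrative sketch. The `lspci`-style lines below are made up for demonstration (not from any real system): SATA disks sit behind a single AHCI controller, while every NVMe drive shows up as its own PCIe function, which is exactly what lets a hypervisor pass each one through individually.

```python
# Illustrative only: sample lspci-style output (made-up addresses and
# device names). SATA disks hide behind one AHCI controller; each NVMe
# drive is its own PCIe function, so it can be passed through on its own.
sample_lspci = """\
00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller
02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller
"""

def passthrough_candidates(lspci_output: str) -> list[str]:
    """Return the PCI addresses of standalone NVMe controllers, i.e.
    devices a hypervisor could pass through one by one."""
    return [
        line.split()[0]
        for line in lspci_output.splitlines()
        if "Non-Volatile memory controller" in line
    ]

print(passthrough_candidates(sample_lspci))  # one address per NVMe drive
```

Passing through the single SATA controller hands TrueNAS *all* the disks behind it at once; with NVMe there is no such shared middleman to worry about.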


Oh ok! I had no idea! I just followed what the guides said. I had no idea NVMe was so different. I am not familiar with the deep technical background of the hardware.
So if I buy NVMe drives, I can just get some sort of PCIe adapter to connect four of them and skip the card entirely?

If you have motherboard sockets, you can use those. If you want to use an adapter card, you’ll need to know whether your motherboard/BIOS supports PCIe bifurcation, and to what degree.

Right. Most motherboards only have two M.2 slots, so I’d need an adapter in any case (currently running 4x1,92TB SSDs in RAIDz1).
I’d definitely need to upgrade the entire system though because this Supermicro X11SCH-F is not up to that job, unfortunately.

Ridiculously cheap? Where are you buying? I’m seeing 300 to 400 bucks…

https://www.ebay.com/itm/135216044254

Can you please shed light on your own question by describing your system and what you’re trying to achieve?

HBAs are for hard drives, but by your third post it seems you do not have any.

Which “job” exactly? At worst, you may need a PLX card rather than an LSI 9500.

Only for hard drives? What? You’re not serious, are you? Did you mean to say “SATA drives”?
I described my system well enough, feel free to reread my post. Virtualized TrueNAS, crappy old Supermicro motherboard. You do not need any other information for the purpose of my question.
I am not trying to achieve anything; the system works perfectly fine. My original mistake was believing I MUST use an HBA, but as you could tell if you read the other replies here, it turned out that only applies to SATA drives when virtualizing. I didn’t know that.

This bifurcafuba-whatever thing. I don’t think my motherboard supports it, and it’s PCIe 3.0 only on top of that. That wouldn’t work for four NVMe SSDs, I believe.

I’m dead serious. And I do NOT mean “SATA only” since SAS drives would definitely require a SAS HBA.

Your first post? No motherboard (it’s in your third post, and it’s not quite crappy), no CPU, RAM or whatever, no drives, no setup of pool(s). Nothing on hardware. And no use case either.
Nothing except that it’s all virtualised.

Even for virtualising TrueNAS with a HDD pool (as most people have), you do not need a HBA if you can isolate and pass through the chipset SATA controller—which your board can do.

So why are you asking for help or advice? :roll_eyes:

Your motherboard is perfectly capable of bifurcation, down to x8x4x4. That’s good for three drives from the PCIe slot, and you can have the fourth on an M.2 slot from the chipset, which, while not an optimal distribution, would work.
Nothing will help with the PCIe 3.0 generation. But you have not mentioned PCIe 4.0 or 5.0 as a requirement anywhere, and for all we know your “4x1,92TB SSDs” (from your third post) could well be PCIe 3.0. And the NAS is never going to serve data faster than the network allows anyway. (Where’s the description of your NIC? Where’s the requirement for any particular speed?)
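To spell out the lane arithmetic, here’s a rough sketch (it assumes one x4 link per NVMe drive, which is typical, and a passive adapter that gives one drive per bifurcated segment):

```python
# Sketch of the lane math behind bifurcation. Assumption: each NVMe
# drive wants a x4 link, and a passive (non-switch) adapter can host
# exactly one drive per bifurcated segment. Under x8x4x4 the x8 segment
# still only carries one drive, so a x16 slot yields three drives; the
# fourth has to come from elsewhere (e.g. a chipset M.2 slot).

def drives_per_slot(bifurcation: list[int], lanes_per_drive: int = 4) -> int:
    """Count drives a passive adapter can host: one per segment that is
    at least lanes_per_drive wide."""
    return sum(1 for segment in bifurcation if segment >= lanes_per_drive)

print(drives_per_slot([8, 4, 4]))     # x8x4x4  -> 3
print(drives_per_slot([4, 4, 4, 4]))  # x4x4x4x4 -> 4
```

A PLX/switch card avoids the bifurcation requirement entirely by multiplexing the drives behind one PCIe endpoint, at the cost of a more expensive card.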

Oh ok, then please tell me why all the guides and posts about virtualizing TrueNAS on the old forums said you must use a HBA in such a case?

You are clearly incapable of reading my first post, so I’m not going to waste more time here. Others managed to understand it, surprisingly.

You’re clearly more interested in arguing than learning. Carry on.


That was a serious question.
The guy makes it seem like the ADMINS of the old TrueNAS forums were lying to me, which I find difficult to believe.

They were most likely not lying, but assumed that TrueNAS is not the only VM you’d want to run on Proxmox or ESXi, and that if you passed through the motherboard disk controller, you wouldn’t have a controller left for other VMs’ disk storage. Therefore it was recommended to use an HBA for TrueNAS so you could keep the onboard controller for VM storage (at least that’s how I always understood it).


Right, because it’s impossible that you’ve misunderstood what those un-cited guides are saying.

What specific guide do you understand to say that a HBA is unequivocally required? Where does it say that?

This is the canonical guide for virtualizing TrueNAS:

You’ll note that it was not posted by a forum admin. I invite you to find where it states that an HBA is mandatory, because I don’t see it there.


You win.

Ah, the classic picking-up-your-marbles-and-going-home approach.

When you double and triple down on a false claim, adding layers of falsehood every time, you’re going to be called on it. You’re allowed to make mistakes (we all do), but refusing to accept correction isn’t going to be received well.

Blablablabla.
Everything was perfectly fine and I got exactly the helpful kind of responses until one smartass dummy showed up and killed the thread. There is nothing else to see here now.

In computing, the robustness principle is a design guideline for software that states: “be conservative in what you do, be liberal in what you accept from others”. It is often reworded as: “be conservative in what you send, be liberal in what you accept”. The principle is also known as Postel’s law, after Jon Postel, who used the wording in an early specification of TCP.

Some here would do well to apply this to their non-TCP communication as well.

People often don’t know what they don’t know. Those who are learning should be open to hearing the input of others and consider they may need to provide additional background or details if they are not receiving the responses they expect; and on the other end, those who are providing said input should take care to phrase their comments and inquiries in a way that is clear and unambiguous.
