Hello TrueNAS members. Since SATA SSDs (at 4TB) are more expensive in Switzerland than NVMe PCIe disks, I have been looking to build a new quiet, energy-saving TrueNAS SCALE system. To fit 4 NVMe disks plus a 10Gb network card, you normally need a very expensive motherboard with enough PCIe lanes. While searching for solutions I came across the HighPoint Rocket 1104 (https://www.highpoint-tech.com/product-page/rocket-1104). This card offers space for 4 NVMe disks, switched through a single chip. It would be very inexpensive, but I don't know whether it can be integrated into TrueNAS SCALE without its hardware RAID function.
On the hardware side, the disks would be passed through individually (no RAID set), so that each NVMe disk appears separately in TrueNAS and can then be configured as RAIDZ1 there.
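For illustration, this is roughly the end state I am hoping for once all four drives show up individually. A minimal sketch; the device paths are just examples, and TrueNAS would normally build the pool through its own UI rather than the raw CLI:

```python
#!/usr/bin/env python3
"""Sketch: confirm all four NVMe drives enumerate individually before
building the RAIDZ1 pool. Device paths are hypothetical examples."""
import glob

# Each SSD behind the switch should show up as its own controller and
# namespace, e.g. /dev/nvme0n1 .. /dev/nvme3n1, with no hardware RAID
# volume in between.
drives = sorted(glob.glob("/dev/nvme?n1"))
print(f"Found {len(drives)} NVMe namespaces: {drives}")

if len(drives) == 4:
    # Equivalent CLI for a RAIDZ1 pool (TrueNAS would normally do this
    # through its own middleware; shown here only for illustration):
    print("zpool create tank raidz1 " + " ".join(drives))
```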
Does anyone have experience with this controller?
Many thanks for any replies!
Greetings, Novell1
Well, it looks like it's going to pass the NVMe devices through as individual drives, which is what ZFS wants. So it's acting like an IT-mode HBA.
You really don't want to use ZFS with hardware RAID; if you're lucky, it will just cause endless headaches. But since the device doesn't actually offer RAID, you should be good?
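Once the card is installed, a quick sanity check is to confirm it exposes plain NVMe controllers rather than a RAID volume. A rough sketch, assuming lspci from pciutils is available:

```python
#!/usr/bin/env python3
"""Sketch: check that the card presents plain NVMe controllers rather
than a hardware RAID volume."""
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True,
                     check=True).stdout

# With a pure PCIe switch you expect bridge entries for the switch ports
# plus one "Non-Volatile memory controller" line per installed SSD.
bridges = [l for l in out.splitlines() if "PCI bridge" in l]
nvme = [l for l in out.splitlines() if "Non-Volatile memory controller" in l]

print(f"{len(bridges)} bridge entries, {len(nvme)} NVMe controllers")
for line in nvme:
    print(" ", line)
```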
You did not post any information about the motherboard/CPU you plan to use; this does factor into the question/answer.
Check out my NVMe build, it may help you a little bit. But before purchasing a riser card like the one I purchased, you MUST know that the x16 slot will support 2x2x2x2 bifurcation. If it does not, then it will not work as desired.
If you have any questions, please ask. I'm not the resident expert, but I will not steer anyone in the wrong direction. I'd err on the side of caution, as I would hope others would do for me.
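If you do go the bifurcation route, here is a rough sketch of how you could verify, after enabling it in the BIOS, that every drive actually trained its PCIe link. The x4 width is an assumption for typical NVMe SSDs, and lspci -vv generally needs root:

```python
#!/usr/bin/env python3
"""Sketch: after enabling bifurcation in the BIOS, confirm every SSD on
the passive riser trained its PCIe link (run as root)."""
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

# Pull the negotiated link status for every NVMe controller. A missing
# drive here usually means the slot is not bifurcating as hoped.
for block in out.split("\n\n"):
    if "Non-Volatile memory controller" not in block:
        continue
    m = re.search(r"LnkSta:\s+Speed ([^,]+), Width (x\d+)", block)
    if m:
        print(f"NVMe link: speed={m.group(1)}, width={m.group(2)}")
```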
According to the spec:
"No bifurcation dependency or need for a specific brand/model of motherboard."
But at that price, how can you say it is inexpensive?
If you don't already have the motherboard, one that supports bifurcation plus a normal riser (~30-40€) would probably cost less.
Vendor’s page says it doesn’t need bifurcation. Can’t speak to whether that’s true or not, but some quick googling:
The next obvious question is how well that NVMe switch will play with ZFS.
True, for the card in question, which is pretty expensive. However, if the MB supports bifurcation, a $17 USD board will do the same thing. If the MB does not, the user is forced to use a card with the bifurcation built in.
This is why I asked which MB the OP plans to use, as it does factor into the best possible answer. They may be stuck using this card, or they may have a significantly less expensive option.
Yep, lots of midrange boards with 4 NVMe slots, which should make it easier to balance PCIe lanes/slots for a 10GbE NIC.
Funny thing: with board manufacturers putting more emphasis on x4 NVMe slots and Intel/AMD giving a limited number of CPU lanes, it's getting harder and harder to build a NAS with consumer parts. Pretty sure I won't be able to re-use the 5900X and X570 board from my current gaming rig for my NAS. Not enough SATA ports, and not enough PCIe lanes for GPU + HBA + 10GbE NIC.
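Just to show the rough lane math (the numbers below are illustrative assumptions for an AM4-class CPU, not exact figures for any specific board):

```python
#!/usr/bin/env python3
"""Sketch: back-of-the-envelope PCIe lane budget for a consumer board.
Lane counts are illustrative assumptions, not vendor specifications."""

# AM4-class CPU: 24 lanes, 4 of which feed the chipset link.
cpu_lanes = 24 - 4

devices = {
    "GPU": 16,        # could drop to 8 if the board can run x8/x8
    "HBA": 8,
    "10GbE NIC": 4,
}

needed = sum(devices.values())
print(f"Need {needed} lanes, have {cpu_lanes} from the CPU")
if needed > cpu_lanes:
    print("Over budget: something has to drop to x8/x4 or hang off the chipset")
```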
It is always best to build a system using good-quality parts; however, if a person is not quite sure about a ZFS system, they can use an older system to play around with and see what makes sense for them. My first FreeNAS machine was an old computer, nothing fancy, with four $300 USD 2TB drives. It worked and seemed stable, but after many months I figured out that I wanted a real system with ECC RAM, and enough RAM to "future proof" it. Well, there is no such thing as future proof; something always comes along. And then the next system is born.
I have built three NAS-quality servers, plus that first non-quality test machine.
I agree that finding enough PCIe lanes in a product is not very easy, but when I built my NVMe system that was a significant factor. Honestly, I wish I had one more PCIe x4 or x8 slot on my motherboard instead of the x1 slot that I doubt I will ever use. I have an x2 card I really wanted to use, and I thought it was an x1 card. Bit me in the ass.
Not 4x4x4x4?
(Blah blah)
Lmao! I meant 2x2x2x2x2x2x2x2
TrueNAS will work fine with NVMe devices behind a PCIe switch. The switch generally adds only a few hundred nanoseconds of latency, so unless you're really chasing the absolute bleeding edge of performance you're unlikely to see ill effects.
With that said, explore the pricing options for a motherboard that has native bifurcation support and can take a passive riser card. Given the cost of the PLX-based PCIe switch cards, it might be better to go that route.
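If you want to convince yourself the switch is not costing anything measurable, a QD1 random-read test is the usual check. A minimal sketch, assuming fio is installed; the device path is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: 4k QD1 random-read latency check on one NVMe drive behind
the switch. Assumes fio is installed; /dev/nvme0n1 is a placeholder."""
import subprocess

subprocess.run([
    "fio", "--name=qd1-randread",
    "--filename=/dev/nvme0n1",  # placeholder device path
    "--rw=randread", "--bs=4k", "--iodepth=1",
    "--direct=1", "--ioengine=psync",
    "--runtime=10", "--time_based",
    "--readonly",               # safety: no writes to the device
], check=True)
```

Compare the reported completion latency with the drive's datasheet figure; a healthy switch should disappear into the noise.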
This is not a "controller"; it is a PCIe switch (PLX8747), and it is perfectly compatible with ZFS.
Note that it runs hot and requires good airflow over that heatsink.
If you can find an X10SDV motherboard on Ricardo.ch whose name ends in -TLxF, it will be cheaper and more energy-efficient than what you're trying to build (on-board 10G + bifurcation).
I use a similar card which also has a PLX (switch) chip, but mine is also actively cooled by a fan.
You will need to think about that, especially when you have 4 NVMe SSDs installed.
The SSDs and the PLX chip get quite hot, and passive cooling alone won't be enough.
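A rough sketch of how you could keep an eye on the drive temperatures, assuming nvme-cli is installed (smart-log usually needs root, and the device paths are examples):

```python
#!/usr/bin/env python3
"""Sketch: print the composite temperature of each NVMe controller so
you notice if the drives behind the switch run hot (run as root)."""
import glob
import subprocess

for dev in sorted(glob.glob("/dev/nvme[0-9]")):
    out = subprocess.run(["nvme", "smart-log", dev],
                         capture_output=True, text=True).stdout
    # nvme-cli prints a line like "temperature : 45 C"
    for line in out.splitlines():
        if line.lower().startswith("temperature"):
            print(f"{dev}: {line.strip()}")
            break
```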
Speaking of the PLX chip, I much prefer that solution over bifurcation. Bifurcation can be extremely unstable, because mainboard vendors don't really care much about that feature, and it shows in the quality of the implementation.
The card you linked to is on the cheap end, so I have no idea about its overall quality, but based on the specifications it will make all 4 NVMe SSDs appear for TrueNAS to use in any configuration you like.