Hi, long story short: I need help deciding between a basic SATA expansion card with an ASM1166 chip and an LSI card. My problem is that my motherboard has one x16 slot, which I was going to use for a GPU for a Plex server. That leaves me with three PCIe 3.0 x1 slots (x16 in size, x1 electrically). Since the board only has 4 SATA ports, I was looking for a way to expand. My choices, as I see them, are either a SATA expansion card or an LSI card off of eBay; the two solutions would be similar in price for the number of drives I want. However, I worry about the performance of the LSI in an x1 slot. If anyone with more experience in the matter could help, that'd be amazing.
I was wondering about the performance of an HBA on an x1 slot as well recently. My $0.02 would be that I’d rather have a slow but functional HBA than risk expansion cards corrupting my data.
…Have you given thought to testing the GPU performance for transcoding on the x1 slot before you purchase either? I honestly don’t think that the data transfer for transcoding 1-4 streams would actually saturate the link realistically.
I hadn’t actually thought about this, but you bring up a good point: transcoding doesn’t require a large stream of data to the GPU’s VRAM, so I imagine it could work in an x1 slot. Based on a quick Google search, someone got it working perfectly fine in a PCIe x4 slot, so testing it in an x1 slot in the future wouldn’t be a bad idea. I’ll more than likely give this a try, and if it doesn’t work I’ll deal with a slow HBA, because I definitely agree that a slow HBA is better than having my data corrupted.
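A rough sanity check of why the x1 link shouldn’t matter much for transcoding: the bus carries the compressed bitstreams, not raw frames. The numbers below are my own assumptions (a pessimistic 80 Mb/s 4K source bitrate and ~985 MB/s usable per PCIe 3.0 lane), not benchmarks.

```python
# Hedged back-of-the-envelope: PCIe traffic for hardware transcoding.
# Assumptions (mine, not measured): ~985 MB/s usable on a PCIe 3.0 x1 link
# (8 GT/s with 128b/130b encoding); 80 Mb/s source bitrate per 4K stream.

PCIE3_X1_MBPS = 985   # usable bandwidth of one PCIe 3.0 lane, MB/s
STREAM_MBITS = 80     # pessimistic 4K remux source bitrate, Mb/s
STREAMS = 4           # worst case from the thread: 1-4 concurrent streams

# Compressed bitstream in plus compressed bitstream out, megabits -> megabytes:
traffic_mbps = STREAMS * 2 * STREAM_MBITS / 8

print(f"{traffic_mbps} MB/s of {PCIE3_X1_MBPS} MB/s "
      f"({100 * traffic_mbps / PCIE3_X1_MBPS:.1f}% of the link)")
```

Even with those pessimistic numbers it’s under a tenth of the link, so the x1 slot should be fine for a handful of streams.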
Report back; I’m curious about the findings either way!
Another concern is whether the x1 slot can provide enough power for the card.
How many drives in total?
Overall, wrong choice of motherboard. Maybe replacing it could be the better solution?
Are we talking in terms of available wattage from the PCIe slot itself? If so, it’s just the first couple of pins that provide voltage; the rest, regardless of x1/x4/x8/x16, are data.
Or are we talking about something else?
I am indeed talking available wattage from the PCIe slot itself. Power may be all provided by the first pins, but per specification total available power is still dependent on the size of the slot and card—and I would not make any bet as to what a consumer board with “x1 electrical in x16 mechanical” may implement…
Interesting; I’d never considered that a motherboard might be unable to supply the same wattage through different sized PCIe slots, since I assumed they’d share the same rails and equivalent-sized power pins.
I wonder whether the PCIe power pins would cover the needs if it were an issue.
For a GPU, an additional power connector would cover the need… but a small transcoding GPU may not have one, because it expects to be fully bus-powered (< 75 W).
As for generic PCIe cards, there’s the case of the LSI 9300-16i/e, which comes with a Molex connector for extra power and needs it plugged in, because it can draw more than the 25 W an x8 slot delivers according to the PCIe specification! (x8 has the same budget as x4; the next step up is the full x16.)
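To make the budgets concrete, here is my reading of the per-slot limits in the PCIe CEM spec (worth double-checking against the spec itself): these are what a card may draw from the slot alone, before any auxiliary connector.

```python
# Hedged sketch of PCIe slot power budgets (my reading of the CEM spec).
# Values are the maximum a card may draw from the slot itself, in watts,
# before any auxiliary (Molex / 6-pin / 8-pin) power connector.
SLOT_POWER_W = {
    "x1": 10,    # standard x1 card (a 25 W "high power" x1 variant exists)
    "x4": 25,
    "x8": 25,    # same budget as x4, as noted above
    "x16": 75,   # full-length slot, e.g. for bus-powered GPUs
}

def needs_aux_power(card_draw_w: float, slot: str) -> bool:
    """True if the card must get extra power beyond the slot's budget."""
    return card_draw_w > SLOT_POWER_W[slot]

# A hypothetical HBA drawing ~28 W in an x8 slot would need its Molex plugged in:
print(needs_aux_power(28, "x8"))
```

Note the catch for this thread: an “x1 electrical in x16 mechanical” slot may only be wired for the x1 budget, whatever the physical size suggests.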
A PCIe 3.0 x1 link gives you roughly 1 GB/s of usable bandwidth, while a typical hard drive averages around 125-150 MB/s.
So if you have fewer than 8 drives, you probably won’t see any performance issues.
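The arithmetic behind that rule of thumb, as a quick sketch (using ~985 MB/s per PCIe 3.0 lane and the drive figures above; the exact cutoff shifts with your assumed per-drive throughput):

```python
# When does a PCIe 3.0 x1 HBA become the bottleneck for spinning drives?
PCIE3_X1_MBPS = 985   # usable bandwidth of one PCIe 3.0 lane, MB/s
HDD_MBPS = 125        # typical sustained HDD throughput, MB/s (125-150 range)

def drives_before_saturation(link_mbps: float, drive_mbps: float) -> int:
    """Number of drives that can run flat-out before the link saturates."""
    return int(link_mbps // drive_mbps)

print(drives_before_saturation(PCIE3_X1_MBPS, HDD_MBPS))
```

And that’s the worst case: all drives doing sequential reads at once. Typical NAS workloads, or anything behind a 1 Gbps network link, won’t come close.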
I have exactly this experience, but on two different (similar) machines.
Quick answer: not much of a difference, but with a caveat.
I have two machines, a Dell Precision T3500 (with eSATA) and a T3600/T3610 running a Dell PERC card, both over a 1 Gbps network. No practical difference, as the 1 Gbps link is the bottleneck. However, the PERC card is physically more reliable than the eSATA cable / external enclosure.
I have encountered none of the reliability issues everyone feared, though I’m mostly running the PERC under a different NAS system (fnOS, with Btrfs and ZFS).
Also, my particular PERC card takes two connectors, so that may be a factor as well.