Go check out this guy's eBay page that I listed above. I emailed him a bunch of questions about the products he sells. He is honest and will actually make suggestions even if he doesn't have the product in stock.
The Marvell chipset works fine for me too: stable 24/7 operation for about four months now (as long as I've had the card) in a TrueNAS SCALE system. Works out of the box. I use the SFP+ version from TRENDnet.
I've been using Intel X520-DA2 cards for about 10 years. I run direct-attach copper cables to a UniFi 10 GbE switch, and my download PC is connected to that switch via RJ-45 with a 10 GbE Intel NIC. The rest of my network is 1 GbE, since our fiber ISP connection is only 1 Gbit and I see no reason to upgrade to the 4 Gbps plan. All my UniFi equipment is now 7 years old and still going strong. I could bond four connections, but load balancing won't work with just one Usenet server and one download PC.
Agreed. I just upgraded my small network with two X540-T2 cards from AliExpress (the ones that include cooling fans). Along with the cheap 2.5 GbE/10 Gb SFP+ switches from Amazon and some 10 GbE transceivers, I now have 10 Gb between my PC and my NAS. Although iperf3 reports 9.6 Gb/s, I routinely get up to 6 Gb/s in real-life use.
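If anyone wants to reproduce that raw-throughput check, here's a minimal sketch of how I run it, assuming iperf3 is installed on both ends and the NAS side is already running `iperf3 -s`; the server address below is just a placeholder:

```python
#!/usr/bin/env python3
# Minimal sketch: run a 10-second iperf3 TCP test and report throughput in Gbit/s.
# Assumes iperf3 is installed locally and the NAS is already running `iperf3 -s`.
import json
import subprocess

SERVER = "192.168.1.50"  # placeholder: address of the iperf3 server (the NAS)

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-J"],  # -J = JSON report
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

sent = report["end"]["sum_sent"]["bits_per_second"] / 1e9
received = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"sent: {sent:.2f} Gbit/s, received: {received:.2f} Gbit/s")
```

The gap between the ~9.6 Gb/s iperf3 number and the ~6 Gb/s real-world figure is usually SMB/NFS overhead and disk speed rather than the NICs themselves.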
True for Core, where I would get issues seemingly at random after months of good behaviour, but it just doesn't seem to matter on Scale: it's running Linux, which should mean drivers really aren't an issue for most things.
No, it doesn’t, but unless you are buying utter unknown shite you’ll generally find it works and works well. Aquantia being a case in point. Works at first on Core then you get issues whenever it happens to feel like it. Scale → rock solid, no issues.
Drivers are one of the reasons Scale has pretty much taken over from Core outside enterprise buyers.
Right. Which is why my Aquantia NIC didn’t work at all under CORE and would pretty consistently fall over under heavy load with SCALE. I’m speaking from experience here, not simply hypothetically. Switching to the Chelsio NIC I’d installed when I was testing the system with CORE completely resolved these issues. See also:
Garbage hardware doesn’t become less garbage just because someone hacked together a driver for it, which is why our hardware recommendations haven’t really changed with the introduction of SCALE. Sure, more stuff will “work,” if by “work” you mean that it’ll be recognized. But when we’re recommending hardware, we’re looking at what’s been proven to work reliably over a significant period of time.
Put differently, the large majority of hardware out there is, to use your phrase, “utter unknown shite.”
“One of the reasons”? Well, probably, but there’s a long list. “iX had obviously abandoned CORE by the time they announced SCALE” is much higher on the list. Higher yet (by a lot) is the perception that we’ll now have an apps/plugins catalog that doesn’t suck. Somewhere around the same place on the list is Docker. Better VM management is up there too.
Aquantia 10G NICs have their merits (few PCIe lanes, low power), and are certainly suitable for desktop use, but they are not designed and tested for heavy-duty server use. It seems you're fine under low load; good for you. @dan apparently puts the NIC under heavier use and reports that it collapses under those conditions, which is a good reason to recommend the safer option of an Intel, Chelsio or Solarflare NIC (possibly also Mellanox in SCALE).
Second-hand server NICs are cheap. Leave Aquantia to clients, not servers.
45Drives has a new home-lab division, 45homelab.com. The store, in addition to selling 45HomeLab enclosures and systems, also has a varying assortment of server parts: motherboards, NICs, HBAs, memory, etc. It's all catch of the day, and much of it may be surplus stock from 45Drives' inventory. These folks are worth keeping in mind. Hardware#parts
…and it isn't even under that heavy a load; trying to copy a handful of multi-GB files at a time to the NAS over WiFi would pretty consistently result in the NAS losing its connection. The WiFi access point is connected via GbE, so it can't possibly be pushing over 1 Gb/sec. But something in that workload would cause the NAS to (briefly) drop offline most of the time, and that hasn't happened since I started using the Chelsio card instead.
…and their prices seem ridiculous. Almost $400 for an X540-T2? That's around $100 anywhere else. Over $800 for the XL710? Again, more than double the price elsewhere.
+1 for this. Excellent seller. He makes great YouTube content teaching things like how to configure old LSI cards to run in IT/HBA mode, and is really helpful if you’re not sure exactly what you need. Great customer service.
…I was a bit surprised to get tagged into this thread, but. Hello.
As long as I’m here, I noticed above that @dan mentioned Mellanox is considered to be “a step down.” That’s surprising, and a bit concerning since I own one.
@dan , I'd love some more info (or a link to read more) on the issues with running Mellanox cards. I was planning to drop a ConnectX-4 Lx (to be used with 10 GbE SFP+ DAC cables) into my UGREEN DXP8800 Plus; I've already flashed the Mellanox card with the latest firmware from NVIDIA.
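For what it's worth, once the card is in the box, my quick sanity check that the new firmware is actually what's running is `ethtool -i` on the interface. A rough sketch (the interface name is just an example; substitute your own):

```python
#!/usr/bin/env python3
# Rough sketch: print driver and firmware version for a NIC via `ethtool -i`.
# Assumes ethtool is installed; the interface name below is only an example.
import subprocess

IFACE = "enp1s0f0"  # example interface name

out = subprocess.run(["ethtool", "-i", IFACE],
                     capture_output=True, text=True, check=True).stdout
info = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
print(f"driver: {info.get('driver')}, firmware: {info.get('firmware-version')}")
```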
I've also got an Intel X520-DA2 free and ready to use, but I was concerned about it being too old to still be stable under modern Linux. Apparently, that's not the case?
I do know that the X520-DA2 is very picky about SFP+ to RJ-45 adapters, but I've only ever used it under OPNsense, so that could be a BSD thing.
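On the Linux side (so, SCALE), my understanding is that the ixgbe driver that binds the X520 has an `allow_unsupported_sfp` module parameter governing how strict it is about third-party modules. A rough sketch for checking which driver an interface is bound to and what that parameter is currently set to (Linux sysfs paths; the interface name is just an example):

```python
#!/usr/bin/env python3
# Rough sketch (Linux only): show which kernel driver an interface is bound to, and
# the value of ixgbe's allow_unsupported_sfp parameter if the module exposes it.
import os
from pathlib import Path

IFACE = "enp3s0f0"  # example interface name -- substitute your own

driver_link = Path(f"/sys/class/net/{IFACE}/device/driver")
driver = os.path.basename(os.path.realpath(driver_link)) if driver_link.exists() else "unknown"
print(f"{IFACE} is bound to driver: {driver}")

param = Path("/sys/module/ixgbe/parameters/allow_unsupported_sfp")
if param.exists():
    print(f"ixgbe allow_unsupported_sfp = {param.read_text().strip()}")
else:
    print("ixgbe module not loaded (or parameter not exposed in sysfs)")
```

If a third-party module gets refused, the workaround people usually mention is setting `ixgbe.allow_unsupported_sfp=1` as a module option, though whether that's a good idea on an appliance OS is a separate question.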
Swapping out PCIe cards in these things is enough of a pain that I’d rather get it right the first time.
In the meantime, I’m going to go stand in the corner and count backwards until I stop being so annoyed that I’ve only got one PCIe slot to play with and two useless Thunderbolt ports.
Both the Linux and BSD mlx5 drivers for CX-4 and up are fine. There were problems with BSD (and thus CORE) 3-4 years ago, and the experience seems to have traumatized and scarred the poor grinch. AFAIK there's one outstanding bug on the BSD side concerning SR-IOV, and a fix has been committed already.
From what I see, if you don’t want a rackmount server, they have some decent options.
But, although the 15-bay tower is a better design than most (e.g., fans in front of the drives), for $1100 you can buy a used Supermicro 846 with 24 drive bays and dual redundant power supplies.
Also, that 15-drive direct-attach backplane is going to be a cabling nightmare. Add in the fact that without an upgrade, only 7 of the drives can be SAS, and there are too many bad features to make it worthwhile at that price.