Hi,
I’m sure this question comes up at regular intervals, but here is mine!
My requirements are for VMFS storage (via something like iSCSI) for a 3-node VMware cluster (yes, some of us still need to learn and try new things on this platform, LOL).
I have 8x 1 TB NVMe drives that I would like to put in a RAIDZ5. Ideally, this system would also support a JBOD card (which I also have).
So the RAIDZ5 NVMe pool would be for the VMs, and the JBOD for simple file storage (like a file server).
Right now, the disks are all in the 3-node cluster (Dell R630 servers, using vSAN). The problem with the current setup is that the NVMe disks are not on the VMware certified list, so every time I want to do a cluster upgrade, it won’t let me start it.
Ideally, I would like something AMD Ryzen (as I understand it, they support more PCIe lanes than Intel desktop chips, so more NVMe support).
I already have 3x quad-NVMe bifurcation cards that I can use in a new server; 2x would be enough, though.
Does anyone know of a motherboard that would support all of this?
The rest I can manage, as I already have two TrueNAS servers, just not in this kind of config.
You’re right… I meant RaidZ1 within the ZFS world.
The JBOD card is indeed in IT mode. TrueNAS does not like hardware RAID controllers.
But looking around the internet a bit, it seems I’m chasing a dream: AMD’s AM4/AM5 platforms do not support more than one PCIe x16 slot in x4/x4/x4/x4 bifurcation mode. That falls into Threadripper territory, and those are a bit too expensive for my homelab setup.
First or second gen Intel Xeon Scalable, as well as EPYC Naples and Rome, are available as relatively cheap bundles (mainboard, CPU and RAM) on eBay, and offer plenty of PCIe lanes.
EPYC CPUs are a lot cheaper than Threadripper, I’ll give you that… but they get you on the motherboard prices.
I’ve seen a lot of EPYC CPUs with great specs, but they all ship from China… can they be trusted?
I haven’t seen any bundles on eBay yet; still looking.
What I need is a mobo/CPU combo that will support 8x NVMe, a 10 Gb/25 Gb (if needed) network card and, finally, the JBOD (IT-mode) card. That’s a lot of PCIe lanes: at least 2x x16 for the bifurcation adapters, plus 2x x4 for the network card and the JBOD card.
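To put a number on "a lot of PCIe lanes", a quick back-of-the-envelope lane budget (my own sketch; the per-card lane counts are typical values, and the actual slot wiring depends on the specific board):

```python
# Rough PCIe lane budget for the build described above.
# Assumes each NVMe drive uses x4 lanes and x4 cards for the NIC
# and the SATA/JBOD card; adjust for your actual hardware.

nvme_drives = 8
lanes_per_nvme = 4

nvme_lanes = nvme_drives * lanes_per_nvme   # 32, i.e. two x16 bifurcation slots
nic_lanes = 4                               # typical 10G/25G NIC (some are x8)
jbod_lanes = 4                              # simple SATA controller card

total = nvme_lanes + nic_lanes + jbod_lanes
print(total)  # 40 lanes minimum, before boot device and chipset devices
```

That 40-lane floor is exactly why this lands in EPYC/Xeon Scalable territory rather than desktop AM4/AM5.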
Common wisdom is that VM storage fares much better on mirrors, primarily for IOPS, but also because RAIDZ is highly inefficient at writing small blocks. A single NVMe drive could have enough IOPS for your needs, but the second point still fully applies.
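To illustrate how bad the small-block penalty gets, here is a sketch of the raidz allocation rule (my own illustration, not from this thread; it assumes ashift=12, i.e. 4 KiB sectors):

```python
import math

def raidz_alloc_sectors(data_bytes, width, parity, ashift=12):
    """Sectors ZFS allocates for one block on a raidz vdev.

    Each stripe row of up to (width - parity) data sectors carries
    `parity` parity sectors, and the total is then padded up to a
    multiple of (parity + 1) sectors.
    """
    sector = 1 << ashift
    data = math.ceil(data_bytes / sector)
    rows = math.ceil(data / (width - parity))
    total = data + rows * parity
    pad_to = parity + 1
    return math.ceil(total / pad_to) * pad_to

# A 16 KiB zvol block on an 8-wide raidz1 with 4 KiB sectors:
print(raidz_alloc_sectors(16 * 1024, width=8, parity=1))  # 6 sectors
```

So a 16 KiB block occupies 24 KiB on disk: ~33% overhead instead of the nominal 12.5% parity cost of an 8-wide raidz1. Mirrors are a flat 50%, but with far better IOPS and no padding surprises.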
Not really for desktop CPUs. Though a Ryzen/Core/Xeon E with a PLX switch would do fine running 8 NVMe drives from a x16 slot.
EPYC does have more lanes than Xeon Scalable of the comparable generation… but even an old 1st gen. Scalable would provide enough lanes.
Here in Germany there are dealers selling complete used Supermicro systems, including dual Xeons, 32/64 GB of RAM, 10G Ethernet and dual power supplies, for a few hundred euros. OK, nothing you would want in your bedroom, but still more than enough even for today’s needs.
Yes, but that would only give me 4 TB of usable storage… subtract the ~20% headroom for ZFS health and such… that would not leave a lot of space out of 8 TB of total disk.
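The trade-off in numbers (a quick sketch; the 80% fill figure is the common rule-of-thumb guideline, not a hard limit):

```python
drives, size_tb = 8, 1.0

mirror_usable = drives / 2 * size_tb     # four striped 2-way mirrors
raidz1_usable = (drives - 1) * size_tb   # one 8-wide raidz1 vdev

fill = 0.8  # common guideline: keep pools below ~80% full
print(mirror_usable * fill)   # ~3.2 TB of comfortable mirror capacity
print(raidz1_usable * fill)   # ~5.6 TB for raidz1, before small-block padding
```

And note that the raidz1 figure shrinks further once small-block padding overhead on zvols is factored in, which narrows the gap somewhat.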
I mean a simple JBOD SATA controller card, nothing fancy, to utilize spinning disks for file storage. If I’m going to build a good storage server, I should at least have it serve multiple pool types (NVMe for VMFS, and SATA for general storage, not connected to VMware but at the VM level).
Okay, if it has worked well for you in the past, that is a good sign.
Just one final point. When you write HBA, most people here will expect something that would be happy in a (real) server, can do SAS as well, and has 2 or 4 SFF-8087 or SFF-8643 connectors.
Your description sounds like an expansion card for additional SATA ports. Some of those with older ASMedia chips have been known to be problematic when put under pressure by heavy ZFS workloads.
If you run into problems, and it is indeed this type of expansion card, that will very likely be the first thing people point to.
Yeah, I know, that is why I wrote the somewhat convoluted sentence “When you write HBA, most people here will expect something…[that] can do SAS as well”.
Calling it a SAS HBA like @etorix did is probably the easiest and best way to distinguish between the two.
So, besides the HBA/SATA controller question, any advice on the X99 motherboard for a NAS build? I haven’t found the motherboard specs yet, but anyone can chime in in the meantime.