My use is for three SOHO activities. I’m an architect, so I use CAD (2D mostly, some rendering/graphic design) at desktops/workstations on the LAN; those machines are actually hypervisors, Fedora/KVM. I also run communications for a local political campaign, which is sometimes graphics-intensive, but mostly it’s something between a home setup and a small secondary-school classroom (not the media lab, though).
Are you connecting to SMB shares and just accessing files that way? Is this going to be block storage and you’re doing iSCSI/NFS for VMs?
Also, I don’t understand the comment about the performance of SATA vs SAS. SATA III has a 6Gb/s rate, while the SAS HBA and the SAS HDDs both have 12Gb/s rates. The HBAs are PCIe x8, and the PCIe slots on the board are PCIe x8 Gen 3, so I don’t think that’s a bottleneck?
Basic gist: SAS is full duplex (can speak and listen at the same time for data) and SATA is half duplex (can only speak or listen). SAS includes some queueing mechanisms that can speed it up over SATA, but that’s where the “how do you plan on using this” comes in.
My SOHO has Cat5e cable, which I know from testing can transfer more than 5Gb/s over the very short runs that I use. So, on the LAN, 6Gb/s might be a practical limit, but wouldn’t the performance within the NAS be better with the 12Gb/s SAS HBAs in PCIe x8 slots, as compared to 6Gb/s SATA? For replicating a pool, or resilvering, or just working as a NAS? I don’t know why I try, I’m so out of my depth!!!
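To put some rough numbers on that question, here’s a quick back-of-the-envelope sketch. The ~250 MB/s figure is just a ballpark assumption for one 7200 RPM spinner (not your actual drives), and the Gb/s-to-MB/s conversion ignores encoding and protocol overhead, so treat it as orders of magnitude, not a benchmark:

```python
# Rough comparison of the nominal link rates in this thread vs. what one
# spinning disk can actually push. Ballpark figures only.

GBPS_TO_MBPS = 1000 / 8   # 1 Gb/s ~ 125 MB/s, ignoring encoding/protocol overhead

links_gbps = {
    "SATA III":    6,
    "SAS-3":       12,
    "PCIe 3.0 x8": 8 * 8,   # ~8 Gb/s per lane x 8 lanes (nominal)
    "1GbE":        1,
    "10GbE":       10,
}

HDD_SEQ_MBPS = 250  # assumption: sequential speed of one 7200 RPM HDD

for name, gbps in links_gbps.items():
    mbps = gbps * GBPS_TO_MBPS
    print(f"{name:12s} ~{mbps:6.0f} MB/s  (~{mbps / HDD_SEQ_MBPS:.1f}x one HDD)")
```

A single spinning drive can’t come close to saturating even SATA III, let alone a Gen 3 x8 slot, so inside the NAS the usual bottleneck is the disks themselves (or the network), not the 6 vs 12Gb/s link rate.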
Cat5E can do multi-gigabit (2.5/5Gb). Anything over that and you’re setting yourself up for serious headaches. I do networking for a living; just bite the bullet and get fiber or Cat6/Cat6A if you want 10Gb speeds. Just because you might get Cat5E to connect at 10Gb doesn’t mean it’s going to run at that, or even much over 1Gb; it just means the two NICs were able to negotiate with each other that yes, they’re both capable of running 10Gbps.
Quick primer on network cabling: the Category determines the twisting of the individual wires in the cable, the gauge of the copper, whether shielding/grounding is required, and the minimum speed it’s rated to carry. Like all things, cables degrade over time. You might get a Cat5E run that can handle 10Gbps all day for a year, then suddenly you start noticing connection issues, lagginess, corrupted transfers, weird oddities. That’s because the electrical signals and the environment the cable sits in have degraded it out of spec for the speed you’re asking of it.
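For a side-by-side view, here are the approximate spec ratings (what the standard guarantees at full distance, not what any given short run will actually do):

```python
# Rough copper twisted-pair ratings (approximate, per the published specs;
# real-world results vary with cable quality and environment).

cable_ratings = {
    # category: what it's rated to carry, and over what distance
    "Cat5e": "1Gb to 100 m (2.5Gb multi-gig is also specced for it)",
    "Cat6":  "10Gb, but only to roughly 55 m; the full 100 m needs Cat6A",
    "Cat6A": "10Gb to the full 100 m",
}

for cat, rating in cable_ratings.items():
    print(f"{cat:6}: {rating}")
```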
Again, a specific use case helps a lot, and so does knowing how you plan on setting up your drives. For example, I run six mirrored VDEVs in a stripe; that’s going to perform differently than your RAIDZ VDEVs. The RAIDZ VDEVs have to have parity calculated (IIRC), whereas the mirrored VDEVs just run flat out on reads and writes.
Also, while a single SATA drive and a single SAS drive will perform similarly, it’s when you start bundling drives together that you’ll start hitting bottlenecks and need to worry about NIC speed vs HBA speed. I remember reading somewhere that a RAIDZ VDEV performs at roughly the speed of a single drive, so your 12Gb SAS drives will compare to 6Gb SATA drives if you RAIDZ them (except in certain circumstances, thanks to the full-duplex/queueing abilities of SAS). However, when you mirror them in, say, 4 mirrored VDEVs, then you’re going to see the performance of 4 drives. This is one reason why, if you read jgreco’s “block storage” guide, RPM and interface type (SATA vs SAS) aren’t among the initial considerations.
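If it helps to see those rules of thumb as numbers, here’s a very rough sketch. The per-disk figures (~250 MB/s sequential, ~150 random IOPS) are made-up ballparks for a spinning drive, not measurements, and the scaling rules are the usual approximations rather than guarantees:

```python
# Rule-of-thumb pool performance estimates (very approximate).
# Mirrors: reads scale with every disk, writes scale with the number of VDEVs.
# RAIDZ:   sequential scales with data disks, random IOPS ~ one disk per VDEV.

HDD_SEQ_MBPS = 250   # assumed per-disk sequential throughput
HDD_IOPS     = 150   # assumed per-disk random IOPS

def mirror_pool(n_vdevs, way=2):
    return {
        "read MB/s":  n_vdevs * way * HDD_SEQ_MBPS,
        "write MB/s": n_vdevs * HDD_SEQ_MBPS,
        "read IOPS":  n_vdevs * way * HDD_IOPS,
        "write IOPS": n_vdevs * HDD_IOPS,
    }

def raidz_pool(n_vdevs, width, parity):
    data = width - parity
    return {
        "read MB/s":  n_vdevs * data * HDD_SEQ_MBPS,
        "write MB/s": n_vdevs * data * HDD_SEQ_MBPS,
        "read IOPS":  n_vdevs * HDD_IOPS,
        "write IOPS": n_vdevs * HDD_IOPS,
    }

print("6 x 2-way mirrors (12 disks):", mirror_pool(6))
print("2 x 6-wide RAIDZ2 (12 disks): ", raidz_pool(2, width=6, parity=2))
```

Same twelve disks either way, but the mirrored layout comes out around 6x the random IOPS of the two RAIDZ2 VDEVs, which matters far more for VM block storage than whether each link runs at 6 or 12Gb/s.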
jgreco also has a guide on being “SAS”-y. Good read.
Kinda make sense?