NAS Hardware for a new build, Advice?

That board supports SATA DOM (per ASRock: SATA_0 supports SATA DOM). When I built mine, I put a Supermicro SSD-DM032-SMCMVN1 SATA DOM in for the OS since I didn't want to waste an M.2 slot or a drive bay on it. At 32GB, it's plenty large enough and certainly fast enough for TrueNAS. It has been rock solid since I built the system 3 years ago.

Since TrueNAS is really easy to recover if you keep a backup copy of the config (assuming you at least take a copy when running OS updates), you don’t need anything remotely as complicated as a mirrored RAID to run the OS.
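If you want to automate that config backup, here's a minimal sketch in Python. It assumes the config database lives at /data/freenas-v1.db (true on CORE; verify on your version) and that /mnt/tank/backups/config is a dataset you actually have — both are placeholders for your setup:

```python
#!/usr/bin/env python3
"""Minimal sketch: copy the TrueNAS config DB to a dated backup file."""
import shutil
from datetime import date
from pathlib import Path

CONFIG_DB = Path("/data/freenas-v1.db")      # assumed config DB location (CORE)
DEST_DIR = Path("/mnt/tank/backups/config")  # hypothetical backup dataset
KEEP = 30                                    # dated copies to retain

def backup_config() -> Path:
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    dest = DEST_DIR / f"freenas-v1-{date.today().isoformat()}.db"
    shutil.copy2(CONFIG_DB, dest)  # copy2 keeps timestamps for sanity checks
    # Prune anything beyond the newest KEEP copies so the dataset stays small.
    for old in sorted(DEST_DIR.glob("freenas-v1-*.db"))[:-KEEP]:
        old.unlink()
    return dest

if __name__ == "__main__":
    print(f"Config saved to {backup_config()}")
```

Run it from a TrueNAS cron job, and especially take a copy right before OS updates.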

My use is for three SOHO activities. I'm an architect, so I use CAD (2D mostly, plus some rendering/graphic design) at desktops/workstations on the LAN; these are actually hypervisors, Fedora/KVM. I also run communications for a local political campaign, which is sometimes graphics intensive but mostly somewhere between a home setup and a small secondary school classroom (not the media lab, though).

Are you connecting to SMB shares and just accessing files that way? Is this going to be block storage and you’re doing iSCSI/NFS for VMs?

Also, I don’t understand the comment about the performance of SATA vs SAS. SATA III has a 6Gb/s rate, while the SAS HBA and the SAS HDDs both have 12Gb/s rates. The HBAs are PCIe x8, and the PCIe slots on the board are PCIe 3.0 x8, so I don’t think that’s a bottleneck?

Basic gist: SAS is full duplex (it can send and receive data at the same time) and SATA is half duplex (it can only do one or the other). SAS also has deeper command queueing than SATA, which can speed it up, but that’s where the “how do you plan on using this” question comes in.
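To put rough numbers on the duplex point, here's a back-of-the-envelope sketch in Python. It models idealized line rates only — no protocol overhead, and remember that a single spinning disk can't saturate either link anyway:

```python
def max_link_gbps(line_rate: float, full_duplex: bool, read_frac: float) -> float:
    """Peak combined read+write throughput on one link (idealized).

    Half duplex (SATA): reads and writes share the wire, so the combined
    rate is capped at the line rate. Full duplex (SAS): each direction has
    its own lane, so the mix is capped by whichever direction dominates.
    """
    if not full_duplex:
        return line_rate
    return line_rate / max(read_frac, 1.0 - read_frac)

# A 50/50 read/write mix: SATA III vs 12Gb SAS
print(max_link_gbps(6.0, full_duplex=False, read_frac=0.5))   # 6.0
print(max_link_gbps(12.0, full_duplex=True, read_frac=0.5))   # 24.0
```

In practice the duplex advantage shows up when lots of drives share an HBA or expander, not on a lone HDD pushing ~2Gb/s sequential.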

My SOHO has Cat5e cable, which I know from testing can transfer more than 5Gb/s over the very short runs that I use. So, on the LAN, 6Gb/s might be a practical limit, but wouldn’t the performance within the NAS be better with 12Gb/s SAS HBAs on PCIe x8 slots, as compared to 6Gb/s SATA? For replicating a pool, or resilvering, or just working as a NAS? I don’t know why I try, I’m so out of my depth!!!

Cat5e can do multi-gigabit (2.5/5Gb). Anything over that and you’re setting yourself up for serious headaches. I do networking for a living; just bite the bullet and get fiber or Cat6/Cat6A if you want 10Gb speeds. Just because you might get Cat5e to connect at 10Gb doesn’t mean it’s going to run at that, or even much over 1Gb. It just means the two NICs were able to negotiate with each other that yes, they’re both capable of running 10Gbps.

Quick primer on network cabling: the Category determines the twist rate of the individual wire pairs, the gauge of the copper, whether shielding/grounding is required, and the minimum speed the cable is rated to run. Like all things, cables degrade over time. You might get a Cat5e run that handles 10Gbps all day for a year, then suddenly you start noticing connection issues, lagginess, corrupted transfers, weird oddities. That’s because the electrical signal and the environment the cable sits in have degraded the cable quality out of spec for the speed you’re asking of it.
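And if you want to know what a link actually carries, as opposed to what the NICs negotiated, measure it. Here's a minimal sketch using iperf3 (assumes iperf3 is installed on both ends, a server is already running via `iperf3 -s` on the far side, and the address below is a placeholder):

```python
#!/usr/bin/env python3
"""Measure real TCP throughput across a link with iperf3."""
import json
import subprocess

NAS_HOST = "192.168.1.10"  # hypothetical: the machine running `iperf3 -s`

def measured_gbps(host: str, seconds: int = 10) -> float:
    # -J makes iperf3 emit JSON so we can parse the summary reliably.
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    print(f"Actual TCP throughput: {measured_gbps(NAS_HOST):.2f} Gb/s")
```

If that number sags or gets flaky over time on a Cat5e run, the cable is telling you something.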

Again, a specific use case helps a lot, and so does how you plan on setting up your drives. For example, I run 6 mirrored VDEVs in a stripe. That’s going to perform differently than your RAIDZ VDEVs: RAIDZ has to calculate parity (IIRC), whereas the mirrored VDEVs just run flat out with reads and writes.

Also, while a single SATA vs SAS drive will perform similarly, it’s when you start bundling drives together that you’ll start hitting bottlenecks and need to worry about NIC speed vs HBA speed. I remember reading somewhere that RAIDZ performs at the speed of a single drive; so your 12Gb SAS drive will compare to a 6Gb SATA drive if you RAIDZ your drives (except in certain circumstances, due to the full-duplex/queueing abilities of SAS). However, when you mirror them in, let’s say, 4 mirrored VDEVs, you’re going to see the performance of 4 drives. This is one reason why, if you read jgreco’s “block storage” guide, RPM and interface type (SATA vs SAS) aren’t among the initial considerations.
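Here's a rough sketch of those rules of thumb in code. Caveats: streaming throughput only, the ~200 MB/s per-drive figure is a made-up assumption, and strictly speaking the "single drive" rule is about IOPS — RAIDZ streaming does scale with the data disks:

```python
def pool_estimate(n_drives: int, drive_mbps: float, layout: str) -> dict:
    """Back-of-the-envelope pool throughput from the usual rules of thumb.

    mirror: 2-way mirrors in a stripe; writes scale with vdev count,
            reads with total drive count.
    raidzN: streaming scales roughly with the data (non-parity) drives,
            but random IOPS behave like a single drive per vdev.
    """
    if layout == "mirror":
        vdevs = n_drives // 2
        return {"write_mbps": vdevs * drive_mbps,
                "read_mbps": n_drives * drive_mbps}
    if layout.startswith("raidz"):
        parity = int(layout[-1])  # raidz1 -> 1 parity drive, raidz2 -> 2
        data = n_drives - parity
        return {"write_mbps": data * drive_mbps,
                "read_mbps": data * drive_mbps,
                "iops": "about one drive's worth per vdev"}
    raise ValueError(f"unknown layout: {layout}")

# 12 drives at an assumed ~200 MB/s each:
print(pool_estimate(12, 200, "mirror"))   # 6 vdevs: ~1200 write, ~2400 read
print(pool_estimate(12, 200, "raidz2"))   # ~2000 streaming, single-drive IOPS
```

That IOPS line is why block storage (iSCSI, VMs) pushes you toward mirrors, while bulk SMB shares are perfectly happy on RAIDZ.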

jgreco also has a guide on being “SAS”-y. Good read.

Kinda make sense?


Do you actually think that there’s a difference between refurbished and recertified when it comes to hard drives? I find it hard to believe that anyone repairs hard drives these days, other than perhaps replacing the controller board in an attempt to rescue data. Used is used; it’s just a matter of how many hours the drives have on them, how much testing the seller has done (if any), and what the warranty is.

As to used vs new drives and invaluable data: any drive becomes a used drive as soon as it’s put to use. And, if I recall correctly, “new” drives have a higher failure rate than post-infancy drives (the bathtub curve).

Further, you can get used enterprise class drives with 3 to 5 year warranties from established resellers. (diskprices.com tracks drive prices on Amazon and includes warranties in their list.)

The above, combined with RAIDZ and backups, has always made buying new drives at several times the price of used drives seem foolish to me.

Example: I recently bought a lot of 3 x 12 TB Water Panther drives on eBay for $72 each. At that price I knew that I was taking a risk, especially since they’re in a 4 x 12 TB RAIDZ1 array. But I feel that the combined risk of used drives (with no warranty) and RAIDZ1 is acceptable because that array is backed up to a 7 x 8 TB RAIDZ2 array.

The backup array was built from two drives that I already had, plus 5 additional 8 TB drives that I bought as a lot from a surplus dealer on eBay (“fully functional and wiped clean,” no power-on hours reported, but with a 30-day return window) for $30 each.

I’d have to have 2 of the 12 TB drives and 3 of the 8 TB drives all fail close together to lose my data, and I’m pretty sure that’s very unlikely.
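For the curious, here's a toy model of that risk in Python. The 5% annual failure rate and one-week replacement window are made-up numbers, and it assumes failures are independent — which they aren't, since drives from the same lot like to die together — so treat it as an order-of-magnitude sketch:

```python
from math import comb

def p_at_least(n: int, k: int, p: float) -> float:
    """P(at least k of n drives fail in the same window), independent failures."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

P_WEEK = 0.05 / 52  # assumed 5% annual failure rate, ~1 week to replace/resilver

p_primary = p_at_least(4, 2, P_WEEK)  # 4 x 12 TB RAIDZ1: 2 failures = data loss
p_backup = p_at_least(7, 3, P_WEEK)   # 7 x 8 TB RAIDZ2: 3 failures = data loss
print(f"primary array loss in a given week: {p_primary:.1e}")
print(f"backup array loss in a given week:  {p_backup:.1e}")
print(f"both in the same week:              {p_primary * p_backup:.1e}")
```

Even with pessimistic inputs, losing both arrays in the same window comes out vanishingly small, which is the whole point of backing one pool up to another.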

So @RickNils, no, I don’t think you made a mistake buying used; in fact, I think you did the sensible thing. (You do have a backup plan, yes?)

In conclusion, buy used, use ZFS, back up your data, keep a spare or two on hand, save money, be happy.

I don’t know why the 12 TB drives were so inexpensive, other than that they’re Water Panther relabels, but they’ve been working fine for several months now. I think the 8 TB drives were even cheaper because they were NetApp low-level formatted (which the seller did not disclose). Fortunately, I was able to figure this out, and they all reformatted to standard sectors with no problems.
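For anyone who hits the same thing: NetApp gear typically formats drives with 520-byte sectors, and sg_format (from sg3_utils) will put them back to 512. Here's a hedged sketch that detects the odd sector size and prints the fix rather than running it, since sg_format wipes the drive (device names are examples; it also assumes smartmontools is installed):

```python
#!/usr/bin/env python3
"""Spot non-512-byte-sector drives and print the sg_format command to fix them."""
import re
import subprocess

DEVICES = ["/dev/sdb", "/dev/sdc"]  # hypothetical: your used SAS drives

def logical_sector_size(dev: str) -> int:
    # smartctl -i reports "Sector Size(s): ..." for ATA drives and
    # "Logical block size: ..." for SAS/SCSI drives.
    info = subprocess.run(["smartctl", "-i", dev],
                          capture_output=True, text=True).stdout
    m = re.search(r"(?:Sector Size|Logical block size)\D*(\d+) bytes", info)
    if not m:
        raise RuntimeError(f"couldn't determine sector size for {dev}")
    return int(m.group(1))

for dev in DEVICES:
    size = logical_sector_size(dev)
    if size == 512:
        print(f"{dev}: already 512-byte sectors, nothing to do")
    else:
        # sg_format rewrites every sector; expect it to take hours per drive.
        print(f"{dev}: {size}-byte sectors -> sg_format --format --size=512 {dev}")
```

Expect a full sg_format pass to take many hours on a big drive, and run a long SMART test afterward before trusting it.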