Building my 1st NAS: Any suggestions for refinement?

I’m building a NAS for my family to store digitized family photos, videos, and automatic computer backups. This has been a project of several weeks of research, but as a first server I wanted to run my proposed build by those more experienced before I purchased everything as well as ask a few questions. I have included some rationale behind each selection, so I ask that you feel welcome to challenge such if it is faulty or suboptimal in some way.

Part | Price | Product | Source
Case | $205 | Meshify 2 XL | B&H
CPU | $310 | i7-14700 | Microcenter
W680 motherboard | $326 | Pro WS W680-ACE | Amazon
32GB ECC RAM | $139 | OWC-brand "replacement RAM" | Newegg
Power supply | $109 | Rosewill CMG5 | Newegg
Total before HDDs: $1,089

Starting with the case, the Meshify 2 XL was chosen for its reported capacity and superior airflow. In its storage configuration it can hold 18 3.5" HDDs and 4 2.5" SSDs, and the built-in fans are reportedly enough cooling for the HDDs. Please let me know if you think additional fans would be necessary, or if that many drives would run past what the CPU can support at full speed. Also, if you know of any cheaper cases that are comparably good or even better for a NAS, please note your suggestion.

The CPU selected, the i7-14700, was chosen because its price is lower than the 13th Gen equivalent and only $40 more than a 12th Gen i7. An Intel consumer chip was selected for its transcoding capability for Jellyfin and its ECC RAM support, along with my relative unfamiliarity with server-grade hardware such as the EPYC and Xeon product lines. If you would recommend a pre-owned Xeon instead, please point me in the right direction. I am aware of the need to immediately run a BIOS update on any new 14th Gen chip to prevent it from cooking itself with too much voltage. Also, would downgrading to an i5 to save $120 be advisable?

The W680 motherboard, ASUS's Pro WS W680-ACE, is effectively the cheapest W680 board with adequate PCIe slots. I saw reviews complaining about the IPMI, so I chose the $84-cheaper option without it. There are some Supermicro W680 models on eBay for an extra $150-$200, but I can't tell what would make them worth the premium besides the IPMI. The search was limited to the W680 chipset because ECC RAM support is exclusive to it and it is highly advised on multiple forums. This particular board is listed as supporting 14th Gen, which evidently not all do, but I would definitely listen to your advice on the more expensive alternatives if there's a feature or fact I'm overlooking.

The RAM was literally a matter of searching several sites for ECC UDIMMs by price. I am not too familiar with this manufacturer, so if you would advise avoiding it, I will; even the first ECC RAM manufacturer I recognized by name as reliable was an extra $30 per 32GB stick.

The 1000W PSU involved a similar process of sorting by price, without the same research that went into the case, CPU, and motherboard. I know little about what to look for in a PSU for a NAS build. Is it worth getting a higher efficiency rating since it will be on constantly, and do I need a Molex adapter? If so, what would you recommend? It lists four 6+2-pin PCIe connectors, one 12+4-pin PCIe 5.0 connector, eight SATA connectors, and four 4-pin Molex connectors. My understanding is that you can daisy-chain power from a PSU to your HDDs, but I thought I'd ask if there's anything I need to keep in mind or best practices for arranging things.

Finally, the drives themselves. I've found 18TB drives are the least expensive per terabyte, pre-owned most of all. That said, I've seen some people warn about the risks of resilvering drives that large. My whole family really only needs a combined 30TB or so of usable can't-lose storage, so I was thinking of an initial setup of RAIDZ2 using four 18TB drives for double parity. The site I found selling refurbished drives sells SAS and NL-SAS drives for less than SATA of an equivalent size. With that in mind, would you recommend going this route, since SAS drives are said to have greater endurance and up to double the throughput in the interface spec? If I added SATA drives to the pool, would that negate the speed advantage of SAS? If so, would you recommend making a separate pool for the motherboard's built-in SATA connectors and running the SAS drives via an HBA card? Also, are there any particular HBA cards or splitters to avoid, or other things to watch out for?
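
For my own sanity check, here is the rough capacity math I am working from: a naive sketch that ignores ZFS metadata/padding overhead, TB vs. TiB, and the usual advice to keep a pool well below full.

```python
# Rough usable-capacity check for a RAIDZ2 vdev.
# Naive math only: ignores ZFS metadata/padding overhead, TB vs. TiB,
# and the guideline to keep pools comfortably below full.

def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    """RAIDZ2 reserves two drives' worth of space for parity per vdev."""
    if drives < 4:
        raise ValueError("4 drives is the practical minimum for RAIDZ2")
    return (drives - 2) * drive_tb

if __name__ == "__main__":
    # Proposed starting point: four 18 TB drives.
    print(raidz2_usable_tb(4, 18))  # 36.0 TB raw usable vs. the ~30 TB we need
```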

Also, the W880 chipset motherboards that support ECC on the Intel Core Ultra Series 2 CPUs are supposed to release this quarter. Is there a compelling reason, besides nice-to-haves like reduced power consumption, AV1 support in the integrated GPU, Thunderbolt as standard, and possible Wi-Fi 7 inclusion (which likely wouldn't even be used), to wait rather than acting on the proposed build above?

Additionally, and this is likely putting the cart before the horse, is the best way to back up only the important family memories and files to make a separate pool just for that and then use Syncthing to another NAS over OpenVPN, to use the replication function, or something else entirely? What's the best way to auto-deduplicate family videos we may all have copies of, even if some of us renamed the files before they get ingested into storage?
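
For the deduplication part, my naive starting idea (nothing TrueNAS-specific) would be to hash file contents so renamed copies still match before ingest; something like this sketch, where the staging path is made up:

```python
# Naive duplicate finder: group files by content hash so renames don't matter.
# Illustration only; the staging path below is hypothetical, and hashing large
# videos will take a while.
import hashlib
from collections import defaultdict
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root: Path) -> dict[str, list[Path]]:
    groups: defaultdict[str, list[Path]] = defaultdict(list)
    for p in root.rglob("*"):
        if p.is_file():
            groups[file_sha256(p)].append(p)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicates(Path("/mnt/tank/ingest")).items():
        print(digest[:12], *paths, sep="\n  ")
```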

I appreciate you going through this lengthy post and any guidance you are able to offer. Research only goes so far before questions from a lack of experience pop up. Thank you again and Happy New Year!

Before talking about your proposed config: IMO you should also look at whether you will want to use this as a media server running Plex or Jellyfin, as these make great family apps.

This is massively over-configured for your requirement for a basic NAS. If you are not running any VMs or any significant apps, then an i3 would be more than enough. (If you decide to run Jellyfin/Plex and do CPU transcoding rather than GPU transcoding, then you might want something more powerful.) The case and PSU are also way bigger than you need - though if you are keeping the big case and plan to fill out all the drive slots, then you probably should have a big PSU too. You don't say what your LAN speed is, but you could probably get pretty good performance with only 16GB of memory; however, memory is pretty cheap, so I would stick with 32GB.

(For comparison, my 2-core Celeron with 10GB of RAM runs Unifi and Plex (which take c. 1.5GB) and I still get great NAS performance.)

Separate pools - no, you want one pool for all your HDDs; and if you want to run apps like Syncthing, then you could probably do with a separate SSD pool to hold your apps. Within the pools, you should create a separate dataset for each class of data, e.g. a Syncthing dataset.
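
If it helps, the layout I mean is roughly this; a sketch of the underlying zfs create calls (the TrueNAS UI does the same thing, and the pool/dataset names here are only examples, assuming the pools already exist):

```python
# Sketch of the dataset layout described above; the TrueNAS UI does this for you.
# Pool and dataset names are examples only. Requires ZFS and root privileges.
import subprocess

DATASETS = [
    "tank/media",       # family photos and videos served by Jellyfin/Plex
    "tank/backups",     # automatic computer backups
    "apps/syncthing",   # app data on the separate SSD pool
]

for ds in DATASETS:
    # -p creates any missing parent datasets (but not the pools themselves)
    subprocess.run(["zfs", "create", "-p", ds], check=True)
```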

Thank you for your response.

Yes, I do plan to digitize and make a personal backup of our family's home media before the discs rot, and to make it accessible via Jellyfin. Intel has the reputation for the best transcoding without an external GPU, or I would have gone AMD for its easier access to PCIe lanes and much more straightforward ECC memory support. CPU and iGPU transcoding should be plenty for family use. Intel only supports ECC on 12th Gen and up, and only on W680 chipset motherboards at that. The board I'm eyeing, due to price, only has two 2.5Gb Ethernet ports, but an expansion card can take care of that for $80 and still come in cheaper than the next tier of W680 motherboard. Unfortunately there are only so many PCIe expansion slots, and using one for a 10GbE card makes it trickier to use HBAs to fill every drive bay in the case.

My understanding is that SAS drives cannot work with the board's SATA-only ports, so wouldn't populating those ports with SATA drives and adding them to the same HDD pool effectively limit the SAS drives to SATA speeds (6Gb/s maximum, but realistically slower, and half-duplex compared to full-duplex and up to twice the throughput for SAS)? I haven't worked with SAS before, but given the specs and used prices marginally lower than SATA, it seemed worth asking about the specifics. I could be misunderstanding things entirely, or the listed speeds may be theoretical and not matter because HDDs never saturate the connection, or some other bottleneck may make the listed speeds irrelevant. Thank you again for your insight.

How many drives do you actually plan for?

So save the SATA ports for 2.5" SSDs and use only HBAs and SAS drives for HDDs - indeed use the first SATA port / 2.5" bay for a boot SSD to save the faster M.2 slots for something more important.


up to 9th gen.: Core i3 and Xeon E-2100/2200 on C2xx chipsets
10-11th gen.: Xeon E-2300 on C25x or W-1200/1300 on W480/580; no Core
from 12th gen.: Core i5 and higher on W680 (no i3) or Xeon E-2400 on C26x (no iGPU!)

x8 for HBA (9305-16i/24i if need be, or just any 9207/9300-8i and an expander)
x8 for 10G NIC
Done with the 16 lanes of a consumer-style CPU…
If you need more and don’t worry about higher idle power, just any second-hand Xeon Scalable or EPYC motherboard will do.
For low idle power, our perennial friend Xeon D-1500 (possibly an X10SDV-nC-7TPnF or a D1541D4U-2T8R/2O8R with on-board HBA) and an Arc dGPU.

“Newest and greatest” comes at a cost, is by definition not field-tested, and may well lack kernel support in TrueNAS (remember: it's an LTS kernel in there). Do not hold.
WiFi is totally useless (no drivers) and Thunderbolt is officially NOT supported by TrueNAS (it may work, or not, it may break and in any case that is not a bug).


That is the crux of the matter, isn’t it?

I read (in a 2015 forum post) that RAIDZ2's recommended configuration is 4, 6, 10, or 18 drives. The case would therefore support the inevitable data inflation once everyone in the family starts getting automatic backups, along with the eventual project of backing up the tubs of clearance-sale DVDs my father has in the basement, without requiring a resilver to change configuration if I ever exceeded the recommended 10-drive RAIDZ2 setup. My understanding is that you can add to a pool without issue, provided the RAIDZ configuration and drive sizes remain the same, by adding a vdev. Please correct me if I misinterpreted the recommendation I read and based my planning around, though.

Originally I had planned to start with four 18TB drives in RAIDZ2, as those are currently the cheapest per TB at $210 per disk (refurbished) and would be enough for our current data with some headroom. But given the apparent complexity of adding drives, it currently seems like six initial HDDs, not counting the boot SSD, would be more appropriate to get to that recommended configuration of 18 drives via two future 6-drive vdevs. I thought it best to ask before I purchased, to make sure I didn't misinterpret how things work from what I read. If I am off-base, I would likely pick a less expensive case, as the potential capacity of 18 HDDs plus 4 SSDs does seem like overkill for future expansion at present. The interim before expanding would at least allow for good airflow over the HDDs.

I have read recommendations to use smaller HDDs, though, to avoid a days-long resilvering process after a disk failure. If you believe 18TB is too large to rebuild safely after a failure, I would likely keep the case to accommodate the greater number of drives that smaller capacities would entail. Is there a particular size you would recommend? 18TB was originally planned for its rather favorable pre-owned $/TB at present. I read one person claim that 8TB drives were the "sweet spot", but the cost premium and potential lost capacity make me want a second opinion and confirmation that my original 18TB plan is relatively solid, without major risks, when using RAIDZ2. I'm reasonably sure it's fine, but thought I'd confirm.

I did have another pool-structure question. I've read posts from people adding SSDs to their pool for a hybrid setup, and I took this to mean more than L2ARC or a SLOG. Wouldn't this limit the usable size of the HDDs to that of the SSD, since I read that drives are limited to the lowest capacity in the vdev? Is there a way around that, and what would the benefit be, given my understanding that speed is limited to the slowest drive in the pool?

Also, is it worth setting up an L2ARC and SLOG on an M.2, is a SATA SSD fine, and would they even provide a tangible benefit for this use case?

The motherboard comes with 3 M.2 slots, and I believe that if all are used they start sharing PCIe lanes. Would that cause any potential issues? The CPU lists 16 Gen 5 PCIe lanes and 4 Gen 4 lanes, so would using all the M.2 slots cut into the speed of the HBA card in an appreciable manner once lanes start getting shared? Thank you for the clarification.

For context, one review on the motherboard's Amazon page complains about plugging in "- 1 NVIDIA 3060, - 4 NVME Samsung 990 Pros (1 via the IPMI 3.0 x1 slot), - 6 SATA 870 EVO ssds,- 12 SATA 16TB Exos hdds via an LSI HBA… and it runs into [periodic] kernel panics due to using too many PCIe lanes". Needless to say, I wish to avoid a similar situation and want to ask how far things can be pushed safely, since I read that, because the lanes are not always fully utilized, there is some leeway in lane sharing and management. The PCIe limitation is part of why I did not include a GPU on the spec list while also wanting a large HBA card and a network card, besides the iGPU transcoding reportedly being decent enough. I can better appreciate why so many people look at the Xeon and EPYC platforms after your clarification of how quickly the PCIe lanes go.

I apologize if I overshared with the rationale behind my choices, but I would rather any bad assumptions or dated ideas get pointed out, as it may help future people searching the forums for first-build guidance as I did.

Your choice of case and PSU depends on it, so it’s quite critical…

There's no magic number, and no need to stick to even numbers (or 2^n+p, or any other formula). It is, however, recommended not to go beyond 10-12 drives per vdev, so 18 is wider than most would be comfortable with.

That's correct, and still one of the preferred ways to expand a pool (the other being replacing drives in a vdev with larger ones). But you now have the option to use raidz expansion to widen raidz2 vdevs (with caveats about space reporting…), so it would be possible to begin 4-wide, grow to 6-wide, and then add another vdev. If you can, though, starting 6-wide and then adding a complete second 6-wide vdev is better. (Replace "6" with other sizes as you want…)
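
Back-of-the-envelope numbers for why starting wider is more space-efficient, assuming your 18 TB drives and ignoring metadata/padding overhead and the space-reporting quirk:

```python
# How much raw space goes to parity at each raidz2 width (rough figures only).
DRIVE_TB = 18
PARITY = 2  # raidz2

for width in (4, 6, 10):
    usable = (width - PARITY) * DRIVE_TB
    print(f"{width}-wide raidz2: ~{usable} TB usable, "
          f"{PARITY / width:.0%} of raw space spent on parity")
```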

Fair point, but fewer drives means less noise, less power, and fewer cabling issues…
Provided sufficient redundancy, I'd go for BIG drives and fewer of them any day.

SLOG is ONLY for sync writes. It has no use for streaming media.
An L2ARC could be of use to speed up browsing, but you should increase RAM first.

No. One is served from CPU lanes, the others are from the W680 chipset (and share the DMI link to the CPU with all other chipset I/O). Look for the block diagram.

  1. This is true up to a point - if you have a 10Gb network, then you may need sufficient drives to be able to read and write at full speed.

  2. I would say that the strategy has changed entirely with RAIDZ expansion. I have a 5-bay NAS, and since you need at least 3 drives for a meaningful RAIDZ1, before RAIDZ expansion was announced it only made sense to start with 5 drives and pick the size of those drives to match your space needs. Now that we have RAIDZ expansion, I agree with @etorix that you start with a 4x RAIDZ2 and pick drives that give you the space you need, allowing you to expand as your needs increase. So if I were starting again now, instead of 5x 4TB RAIDZ1 for 16TB usable, I would probably do 4x 8TB RAIDZ2 for the same usable space, with a 50% space increase available if I need it.

Thank you for pointing out that feature. I had entirely missed the news about that finally making it into the full, non-nightly release barely over two months ago. Reading older forum posts about optimal configuration strategies created a small blind spot.

With that in mind, is it advisable to start with four drives and add piecemeal as things fill up until the vdev hits ten drives, then start a new vdev and repeat? I read that there is a rebalancing script that has to be run in order not to lose usable capacity to old parity data. What other pitfalls or caveats of this approach should I keep in mind? It definitely seems like a convenient feature for those starting out.
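
From what I have read, that "rebalancing" is essentially rewriting existing files so their blocks pick up the new stripe width. The community script is far more careful, but conceptually I picture something like this naive sketch, with a hypothetical path, and I would snapshot first:

```python
# Conceptual illustration only: rewrite each file so its blocks are re-striped
# at the new raidz width. Do NOT run something this naive on real data; the
# community rebalancing script handles checksums, hardlinks, xattrs, and
# free space properly.
import shutil
from pathlib import Path

def rewrite_file(path: Path) -> None:
    tmp = path.with_name(path.name + ".rebalance-tmp")
    shutil.copy2(path, tmp)   # fresh copy is written with the current vdev layout
    tmp.replace(path)         # atomically swap it over the original

for p in Path("/mnt/tank/media").rglob("*"):   # hypothetical dataset path
    if p.is_file() and not p.name.endswith(".rebalance-tmp"):
        rewrite_file(p)
```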

Also, based on your comment about PCIe lanes, I found what is, as far as I can tell, the only W680 chipset motherboard with built-in 10GbE: the W680D4U-2L2T/G5. I checked, and the 10GbE chip behind the RJ45 ports is officially supported by TrueNAS. Is it considered good practice to get a motherboard with 10GbE built in to save expansion slots? If so, I found one used on eBay for $470; would that be safe to buy, given that I've read motherboards generally fail before CPUs? I found minimal reviews of it, including one claiming the IPMI consistently misreported fan speeds, but there don't seem to be any major red flags aside from that.

I'm currently leaning towards the originally selected motherboard, since it is new and $144 cheaper than the pre-owned board, but thought I'd ask.

My last series of questions concerns the CPU choice. I read a recent forum post on a third-party site claiming that, within the last generation, AMD Ryzen CPUs had superior transcoding quality to Intel's Quick Sync while being only marginally slower. Is this accurate, and would their EPYC platform have the same standing? Many of the forum posts I read while researching the build list were 2-4 years old and seemed to point to Intel as preferable, so did I somehow miss something major? The W680 chipset is a few years old, and its motherboards are comparably priced or notably more expensive than AMD boards that support ECC. If the claims of superior transcoding and CPU performance for AMD, while also being cheaper to get with ECC, are accurate, should I rethink my plan? Also, what site do you use for benchmarks?

The case will be restocked in a week at its cheapest source (B&H), so I figure I have until then to lock in the build. I apologize for the slight indecision, and thank you for your great insight.

Is that supported in TrueNAS? I don’t recall seeing many (if any?) posts reporting use of AMD CPU features to HW transcode. Maybe I’m blind to it.

In fact, it looks like Plex doesn't support using an AMD iGPU to transcode; there's no mention of it on their site, at least.

I was quite surprised by the comment as well, as I hadn't picked up anything like that in my admittedly dated reading. I'd link the post, but links seem to be disabled, so I'll just quote him. He has 29.5k posts there, so I can't disregard the claim without additional research or asking for clarification from those more experienced.

"

jaslion said:

Nah this is super easily done on a gigabit line.

IPMI is nice to have but also not needed. Ecc same keep in mind that ddr5 has taken some traits from ecc memory to be even harder to have a bitflip happen. Basically dont worry about it like at all. The chance of the computer spontaniously exploding for no reason at all is higher.

Whats the need for transcoding? Are you going to make it be a workstation too? Keep in mind that quicksync transcoding is NOT GOOD. It’s VERY fast BUT the quality of the transcode is NOTICABLY lower. It’s where amds 8000g series shines atm as it’s FAST and full quality able. But please explain this usecase more.

Overall all your needs can be met with a nice b760 for like 120$. Solid, stable, quality and no unnecessary stuff.

7 hours ago, jaslion said:

back then only the g series had a igpu now all have it. The g series just has a way better one.

​"

You do bring up a good point about TrueNAS and Plex/Jellyfin support, though. I'm strongly leaning towards sticking with my build unless it turns out he's right, as I hadn't read anything about AMD iGPU transcoding being better than Intel's before his comment. Thank you again for your point about support.

One of ZFS’s best features is Snapshots. Schedule regular snapshots of your critical data (i.e. the “family media” datasets), and Replicate them to a “remote” TrueNAS server. Remote can mean on the next shelf in the basement, or offsite like a 2nd home (i.e. w/ VPN tunnels).

More here: Creating Snapshots | TrueNAS Documentation Hub
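
Under the hood this is just scheduled snapshots plus incremental zfs send/recv, which TrueNAS sets up for you as Periodic Snapshot Tasks and Replication Tasks. A rough sketch of the idea, with dataset, host, and snapshot names as examples only:

```python
# Rough idea of snapshot + replication; TrueNAS schedules this for you via
# Periodic Snapshot Tasks and Replication Tasks. Names and hosts are examples.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/media"                  # the "family media" dataset
REMOTE = "backup-nas"                   # SSH host of the second TrueNAS box
PREV = f"{DATASET}@auto-20250101-0000"  # placeholder: last snapshot already replicated

snap = f"{DATASET}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M}"
subprocess.run(["zfs", "snapshot", "-r", snap], check=True)

# Incremental, recursive send of everything between PREV and the new snapshot.
send = subprocess.Popen(["zfs", "send", "-R", "-i", PREV, snap],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", "tank/media-backup"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```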


Is there a particular reason to use that over a PCIe Gen 4 HBA such as the 9500-16i? Based on a Google search, PCIe Gen 3 won't hit any bottlenecks until you have over 15 drives going all-out at once, but I thought I'd ask whether a "future-expansion-proof" (i.e. Gen 4) HBA would be worth looking at, or whether it would be entirely pointless since it will primarily handle spinning rust and possibly up to 4 SATA SSDs.
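
For reference, this is the rough math I was going by, assuming roughly 250 MB/s of sequential throughput per HDD (an optimistic figure) and looking only at the PCIe link itself rather than the HBA's own limits:

```python
# Rough check of whether a PCIe 3.0 x8 HBA link bottlenecks spinning disks.
# Assumptions: ~985 MB/s per PCIe 3.0 lane after 128b/130b encoding, and an
# optimistic ~250 MB/s sequential per HDD. Ignores the HBA's internal limits.
PCIE3_LANE_MBPS = 985
HDD_SEQ_MBPS = 250

link = 8 * PCIE3_LANE_MBPS  # x8 slot
for drives in (8, 15, 18):
    demand = drives * HDD_SEQ_MBPS
    verdict = "link-limited" if demand > link else "drive-limited"
    print(f"{drives} drives: ~{demand} MB/s of disk vs ~{link} MB/s of link ({verdict})")
```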

Also, if I purchase pre-owned SAS drives, can they be connected directly to the HBA, or is something like a backplane strongly advised? I've seen the latter suggested on a separate forum to someone who ended up going with pre-owned SATA drives as a result. Is there a preferred source for power and data cables for SAS drives, or any in particular to avoid? I've read about melted cables from using Molex-to-SATA power adapters.

I do apologize for so many hardware questions. I've nearly finished getting everything squared away and have already started purchasing some components. I just have to finalize the HBA card, PSU, and drive cables at this point to make sure the drives can be fully powered and connected without any potential issues. To that end, is there a preferred type of PSU for supporting large numbers of HDDs, as opposed to one designed for a few very high-draw components?

I did read the article about flashing the HBA firmware to IT mode, at least, so once I finalize that selection it should be taken care of in short order.

Thank you again for your guidance! I appreciate the time taken to help someone new to the community.

The tried-and-true driver stack of the 9300 series vs. the rebuilt stack of the 9500.
And general contempt for Tri-Mode.

“Future-proof” is pure NVMe, not SAS, no Tri-Mode.

Whatever works. An active backplane with an expander allows more drives from a basic -8i HBA, but breakout cables work just as well.


I read something about incompatibilities with some management tools due to the abstraction layer, and about it sometimes reporting NVMe and SATA drives as SCSI to the system. Is that the issue you were referring to, or is it something else?

Also, and this may be a slightly ignorant question: the LSI HBA cards I looked at require 3x8 lanes, but would it be a good idea, or a problem, if instead of putting a 9305-16i into the 5x16 slot I put a 9305-8i into each PCIe 3x4 slot, in order to keep the remaining PCIe 5 slot usable and also prevent saturating the 3x4 lanes? My understanding is that those slots are provided by the motherboard and would leave the CPU's 20 PCIe lanes untouched. What problems might arise from this approach?

Thank you for humoring these questions, even if they’re nonessential.

Tri-Mode is incompatible with U.2 drives, and requires U.3. Plain PCIe can use either.
And Tri-Mode drops the whole idea of “direct access” behind “NVMe” to run drives through a SCSI stack, putting a hard cap on maximal performance, all so that Broadcom can keep its very profitable business of putting a HBA in about any rack server…
Aren’t these enough reasons to skip Tri-Mode?

-8i would be a 9300, not a 9305. If the HBA is to serve more than a handful of drives, you want to give it its full 8 lanes. And if you're chasing "bottlenecks", put the HBA on CPU lanes, not on chipset lanes: the DMI link between CPU and PCH is a bottleneck!

It took me a while to understand you mean PCIe 5.0x16…
PCIe 5.0 exists to serve 400 and 800 Gb/s networking in the AI data centre. You have no use for PCIe 5.0 in a home server and/or with a consumer-grade CPU which provides a mere 20 lanes. Your motherboard at least offers to bifurcate the 16 “GPU lanes” into two x8 slots: Use that! One x8 slot from CPU for the HBA, another x8 slot for a 10/25 Gb/s server NIC.
And don’t bother that these are PCIe 3.0 cards going into PCIe 5.0 slots.