Advice on system architecture

Hi all. I just came to TrueNAS while trying to set up a NAS for a small non-profit organization. The easy way seems to be a Synology DS224+, which would probably just work, but I started looking at alternatives because of its limitations (resources, expandability, etc.). I am also running two small AWS instances that I am considering self-hosting, and for those the resources offered by Synology seem somewhat short even if I expand the RAM to 6 GB.

So, my use case would be about 6 TB for SMB, probably in RAIDZ2, plus hosting a WordPress site (on a LiteSpeed server) and a Django app (with Nginx as a reverse proxy), both with very little traffic. I am currently on an AWS t3.micro instance (2 threads, 1 GB RAM) for each, and both are quite fast and responsive.

From the research I have done so far, I came up with two possible builds that seem to tick all the boxes (ECC, NICs, etc.) and would set me back about the same as the Synology system. These are:

Build 1:

Supermicro X11SRM-VF C422
Intel W-2135
Micro-ATX Fractal Design Node 804
Kingston Memory 16GB DDR4 2666MT/s DIMM Reg ECC Module KTL-TS426/16G

Build 2:

Micro-ATX Fractal Design Node 804
ASRock B550M-HDV
AMD Ryzen 5 Pro 5650G
Mushkin DDR4 - 16 GB - 3200-CL22 - Single Proline ECC (MPL4E320NF16G18)

For the disks I am considering IronWolf drives for mass storage, two small SSDs to run the apps, and an even smaller M.2 drive for L2ARC cache.

I would appreciate any comments on the system architecture for what I am trying to achieve, since I am a bit concerned about what I could be getting into, especially the things I don't know I don't know. I have some experience with Linux (mostly Debian-based, a little Arch), but FreeBSD is completely new to me. I am familiar with the complexities of deploying apps, but I am not into containers, I prefer the command line, and I still haven't made up my mind between going CORE or SCALE. Deploying the apps on BSD must have its own complexities that I am not aware of. There is also the question of aged versus new hardware: would the older hardware still pay for itself (considering AWS rent of, say, 100 a year, that is 7+ years against the investment I am looking at) and then some? Do you see any minefields on the way to what I am trying to do?

Looking forward to hearing from you! Thanks.


I would double the RAM and go with the config that costs you less, so I think build 2.

L2ARC should only be considered with at least 64GB of RAM.

I would suggest running your website on your SSD mirror (which will also be your jail/app pool). About the drives: you could do a 3-way mirror of 6 TB drives if you want two drives of parity, and it would be cheaper than a 4-drive RAIDZ2; if you don't need the 2-drive parity, go with a simple 2-way mirror.

Actually, for the motherboard of Build 2 I would suggest the Gigabyte MC12-LE0 (rev. 1.x), which was recently recommended to me by @DigitalMinimalist.

Overall, there are no major issues with your choices beyond the L2ARC one.


I was going to say the same thing about the L2ARC being a bad decision.

If you follow the iXsystems recommendation of a mirrored boot pool on disks separate from everything else, you will need 4x SSDs: 2 for the boot pool and 2 for the app pool. But it is easy to edit the install script early in the installation to limit the boot pool to 16/24/32 GB, which leaves the rest of each SSD available as an app pool; this works just fine even if it isn't a supported configuration.

I have a system with 10 GB of RAM, and even with a few apps I still get 99.9%+ cache hits. So I am not sure how much an extra 16 GB would give you (if anything).
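Before committing to L2ARC, it is worth checking your own ARC hit ratio. A minimal Python sketch of the calculation (on SCALE the live kstats are in `/proc/spl/kstat/zfs/arcstats`; the sample values below are made up so the snippet runs anywhere):

```python
# Hedged sketch: compute the ARC hit ratio from arcstats-formatted text.
# The SAMPLE values are illustrative; on a live TrueNAS SCALE box, read
# /proc/spl/kstat/zfs/arcstats instead.
SAMPLE_ARCSTATS = """\
name                            type data
hits                            4    999000
misses                          4    1000
"""

def arc_hit_ratio(arcstats_text: str) -> float:
    """Return the ARC hit percentage parsed from arcstats-formatted text."""
    stats = {}
    for line in arcstats_text.splitlines():
        parts = line.split()
        # Data rows have three columns: name, type, value (integer).
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    return 100.0 * stats["hits"] / (stats["hits"] + stats["misses"])

print(f"ARC hit ratio: {arc_hit_ratio(SAMPLE_ARCSTATS):.2f}%")  # 99.90% for the sample
```

If the ratio is already in the high 90s, as above, extra cache devices are unlikely to buy you anything.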

Re disks: if you have dual M.2 slots, use those for the boot pool and separate SSDs for the apps. If not, use 2x SATA ports for SSDs and the remaining SATA ports for a RAIDZ HDD pool, reserving 1 or 2 of them for redundant disks and dividing the mass-storage space you need between the remaining ports to determine the physical HDD size you need to buy.

For example, if your motherboard has 1x M.2 slot and 6 SATA ports, you need to store 8 TB of mass-storage data (bearing in mind that this needs to account for estimating inaccuracies, growth, compression, snapshots and replication copies, and free space), and you decide that you can live with a single HDD failure, then:

  1. You only have 1x M.2 slot, so you can't use it for mirrored boot and/or app SSDs.

  2. You therefore need 2x SATA ports for mirrored boot/app SSDs, leaving 4x SATA ports for HDDs.

  3. You need 1x SATA port for redundancy, leaving 3 ports for data disks.

  4. 8 TB / 3 ≈ 2.7 TB, so round up to 3 TB drives.

So I would buy 2x SSDs for mirrored boot/app pools, and 4x 3 TB or 4 TB HDDs to use as mass storage in a RAIDZ1 pool.
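The sizing arithmetic above can be sketched as a small helper. All the numbers here are just the example's assumptions (6 SATA ports, 2 SSDs, RAIDZ1, 8 TB target), not a recommendation:

```python
# Sketch of the disk-sizing logic above; every number is an example assumption.

def min_data_disk_size(usable_tb: float, hdd_slots: int, parity_disks: int) -> float:
    """Raw capacity each data disk must provide, ignoring ZFS overhead and slop."""
    data_disks = hdd_slots - parity_disks
    return usable_tb / data_disks

sata_ports = 6                        # example motherboard
ssd_ports = 2                         # mirrored boot/app SSDs
hdd_slots = sata_ports - ssd_ports    # 4 ports left for HDDs
size = min_data_disk_size(8.0, hdd_slots, parity_disks=1)  # RAIDZ1: 1 parity disk
print(f"Each of the {hdd_slots} HDDs should be >= {size:.1f} TB")  # >= 2.7 TB
```

Rounding up to the next commonly sold size gives the 3 TB (or 4 TB, for headroom) figure above.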

But this is just my personal approach; I am not a TrueNAS/ZFS expert, and others may have different opinions.


Do not do this!
Just use a single boot drive; it's quick to replace, with minimal issues if you have a backup config. If it will not be easily serviceable, read Highly Available Boot Pool Strategy | TrueNAS Community.


Yes, even with one M.2 slot you could buy two small 32 GB SSDs, use one, and keep the second in a drawer in case the first one fails.

It won’t be redundant but in the event of the M2 SSD failing, you will have an immediately available replacement.

As @Davvo says, a mirrored boot drive may never actually be needed for boot, and in the event of data corruption on both copies, you may need to rebuild the system anyway.


If this involves exposing the NAS to the Internet to serve content, you’re looking at massive pain in the security department…

Otherwise, the X11SRM-VF is a much better build, being a genuine server motherboard with IPMI, 48 exposed CPU PCIe lanes, and support for a high amount of RDIMM RAM; massive overkill for your needs, though.


Put it behind a reverse proxy and the pain is reduced, or so I have heard.


The Xeon W is an odd choice. At six cores, you should be looking at Xeon E platforms, with motherboards such as the X11SCH-F, X12STH-F or X13SH-F (or similar). Cheaper, lower-power, similar I/O, same performance envelope.


There’s a US seller on eBay selling off the X11SRM-VF, alone or bundled with a matching W-2135/2145, for very good prices. High clocks for SMB duties, more RAM capacity, and many more PCIe lanes than a Xeon E, so it’s not an absurd choice over a new X12STH at three times the cost with CPU! Just massive overkill :stuck_out_tongue:

(But don’t forget to add a cooler for LGA2066 Narrow-ILM.)


Yeah, the narrow ILM is a very good point. I got bitten by that and had some new brackets lasered so I could still use the 120mm cooler/fan I was planning on (might post on homelab).

As far as the OP’s system recommendations go, I’d definitely go for the Supermicro build, but stick a Xeon E in there as others suggested. I would now always go with stripes of mirrors: the rebuild times are a huge advantage, as is the ease of expansion (just add another two drives, but do this early, well before you hit the 80% fill level). In either case I would not bother with L2ARC; just get more RAM. Nor would I consider splitting pools between SSD and rust: just commit properly and put everything on flash. This will reduce your complexity, which you’ll appreciate when you’re offsite and the house comes falling down. Honestly, if you really have low traffic, then all rust would be fine too; you’re not going to notice the speed difference at the end user.

I see no mention of network bandwidth requirements, nor whether you’re thinking of 10GbE or 1GbE hardware. This is an important consideration and will have repercussions on disk hardware. If you’re not running 10GbE, then SSD is wasted money IMO.



Which country are you located in?

Thank you all for your inputs. I believe I have my pointers as to hardware. @etorix yes, that’s the seller I got my prices from…

As @etorix pointed out, my main concern now is networking, since the whole point of going with a possibly massive overkill :slightly_smiling_face: would be to serve the two apps (WordPress site and Django app, both exposed to the internet) from the NAS, thus exposing it to the internet.

I am considering bridging the ISP router to an OPNsense box, but I still do not have a clear idea of how to approach the problem. Moreover, while I can do some subnetting and port forwarding, my networking skills are somewhat limited, as AWS takes care of most of that for me. At this point I am not even sure it can be done securely.

@dan, I see you on an OPNsense thread; do you happen to have anything to contribute to this discussion?

Looking forward to hearing from you and thank you all again for your inputs. I have learned more from this discussion than from a week of research.


Replacing the dodgy ISP routers is very feasible, unless you’re stuck with coax and DOCSIS. The IPTV component is not difficult to get working; the only tricky hurdle is getting a fiber modem authenticated if you do not already have a separate modem (you’d need one that works with the ISP’s equipment, plus the password, which you can obtain from most technicians the next time a fiber cable mysteriously stops working; a small token of your appreciation for the services rendered goes a long way).

For Vodafone Portugal, there were some write-ups on a local forum. Haven’t been there in a while because I strongly dislike their approach to moderation. Don’t know the details for other ISPs.


Then an MC12-LE0 with an ECC-capable Ryzen 3000/5000 (no need for an APU, as there’s IPMI for setting up) will come in even cheaper than the X11SRM while still being sufficient for your stated needs. It only lacks enough SATA ports for all the bays in the Node 804, but if you only need “about 6 TB” you’re not going to fill the case.


In my case–and I’m on “business-class” Internet at home, and in .us, so things could very well be different–I was able to have my ISP put their modem into “bridge mode,” passing everything through to my OPNsense box (and to my pfSense box before that). From there, I open ports 80 and 443 on the OPNsense box, and install Caddy to act as a reverse proxy to those of my internal services I want to expose to the Internet: Bitwarden, Ombi, Minio, and Wiki.js. The configuration is simple enough[1].
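For a sense of scale, a Caddy reverse-proxy site block is only a few lines. This is a hedged sketch, not my actual config; the hostname and backend address are placeholders:

```
# Caddyfile sketch: hostname and backend address are placeholders.
app.example.org {
    reverse_proxy 192.168.1.20:8080
}
```

Caddy obtains and renews the TLS certificate for the hostname automatically, which is one reason it is popular in this role.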

The problem comes, potentially, with your ISP. If they block ports, as many residential ISPs do in .us, that could be a problem, especially if one of those is port 80 and they won’t unblock it. If they use CGNAT, that will be a big problem.

  1. See, e.g., the OPNsense docs for Caddy ↩︎


:scream: Here in the Netherlands, not only does my ISP let customers open ports as they wish, it allows them to install their own cable/fibre modem!


The latter is pretty common here as well. The former, well, it depends on the ISP and the port in question. It seems to be the case that most residential Internet here blocks port 80 inbound, though not port 443.

Located in Portugal too :grin:
I run OPNsense behind a 4G Router in Bridge Mode with a Vodafone Portugal prepaid SIM card with unlimited data - no landline here…

For services exposed to the Internet, I use Cloudflare Tunnels.
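For anyone curious, a `cloudflared` tunnel is driven by a small YAML config. A hedged sketch follows; the tunnel UUID, credentials path, hostnames, and backend ports are all placeholders, not my actual setup:

```yaml
# ~/.cloudflared/config.yml (sketch; UUID, paths, and hostnames are placeholders)
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /root/.cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json
ingress:
  - hostname: www.example.org
    service: http://localhost:80      # e.g. WordPress
  - hostname: app.example.org
    service: http://localhost:8000    # e.g. Django
  - service: http_status:404          # required catch-all rule
```

The nice part is that the tunnel makes an outbound connection to Cloudflare, so no inbound ports need to be opened on the router at all.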


@dan thanks, that seems very feasible. ISPs here do not get in the way; I am already bridging two routers from two different service providers with no issues.

@DigitalMinimalist thank you, fellow countryman :slightly_smiling_face: I am already using Cloudflare for both apps, and doing the tunnels seems like a good idea. I take it ports 80 and 443 are no different from any other port?

What about the whole CORE/SCALE thing? I would be more comfortable using Linux VMs for the apps… any input on this topic? Thanks.