Advice Needed: 60TB NAS with 10GbE and Plex (5–6 Users, Multiple VMs), AIO or separate server?

Hi everyone,

I’m planning to build a NAS to migrate all my current Dropbox storage back home.
I’d also like to run a Plex server capable of handling a few 1080p transcodes for around 5–6 users.
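For sizing, here’s my rough back-of-envelope (the ~10 Mbps per 1080p transcode is just a ballpark figure I’ve seen quoted, not something I’ve measured):

```python
# Rough peak-bandwidth estimate for the Plex side.
# The assumed per-stream bitrate is a ballpark for a 1080p transcode.
STREAM_MBPS = 10   # assumed 1080p transcode bitrate
USERS = 6

peak_plex_mbps = USERS * STREAM_MBPS
print(f"Peak Plex load: ~{peak_plex_mbps} Mbps")  # ~60 Mbps, far below even 1 GbE
```

So the 10 Gbps requirement is really about local file transfers, not Plex.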

I’m hesitating between two approaches:

  • Option A: Build an all-in-one system powerful enough to host both the NAS and all my services (with Proxmox and passthrough to TrueNAS or similar).

  • Option B: Build a dedicated NAS focused only on storage (less powerful and cheaper), and run all my VMs on a separate mini PC — for things like Home Assistant, Pi-hole/AdGuard, Nginx, a Minecraft/Valheim server, the *arr suite, a torrent client behind VPN, etc.

Ideally, I’d like at least 60 TB usable storage, 10 Gbps networking, and the setup should be easily upgradable in the future.

The main issue is that I’m not sure which CPU and motherboard to choose to get 10 GbE support, enough M.2 slots for potential SSD caching, and enough SATA ports — all while keeping the cost reasonable if I go with the NAS-only route.

My ideal setup would be to separate the NAS and the server, but I’m open to advice if an all-in-one build makes more sense.

Any suggestions on hardware combos or builds that would fit this kind of setup?

Thanks!

I have a large Plex server and user base. It used to all be on one NAS, but after many years that was less than ideal.

Personally, I’ve now split Plex out to a dedicated Mac mini M4, which was cheaper than a GPU or CPU upgrade for the NAS, and it’s been flawless handling up to ~35 users at peak time.

Someone else is probably better placed to chip in with a hardware list for the NAS. I’m on the other side of overkill, with a dual-Xeon system and the works.

It’s highly unlikely you’ll have any use for this, given your stated use case.

This could be as few as 3-4, really, given modern HDD capacities and assuming minimal redundancy (i.e., RAIDZ1).
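To put rough numbers on it (the 20 TB drive size and the ~80% fill guideline are assumptions on my part, not the OP’s figures):

```python
# Drives needed for ~60 TB usable, ignoring ZFS metadata/padding
# overhead and the TB-vs-TiB distinction. Assumes 20 TB HDDs.
DRIVE_TB = 20
FILL_GUIDELINE = 0.80  # common advice: keep ZFS pools under ~80% full

for n in range(4, 8):
    raidz1 = (n - 1) * DRIVE_TB * FILL_GUIDELINE
    raidz2 = (n - 2) * DRIVE_TB * FILL_GUIDELINE
    print(f"{n} drives: RAIDZ1 ~{raidz1:.0f} TB, RAIDZ2 ~{raidz2:.0f} TB usable")
# 5x20TB RAIDZ1 -> ~64 TB usable; 6x20TB RAIDZ2 -> ~64 TB with
# two-disk redundancy, at the cost of one more drive.
```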


Anything that can support enough PCIe slots will make a dream all-in-one machine. I have a few systems that each share some virtualization tasks, containers, etc., plus my NAS (which also has some VMs and apps).

I hate it. I wish I’d simply gotten a motherboard that could support my feature creep. I hate that I somehow made the mistake, three times, of buying things that are ‘cute, compact, low power’ instead of just having one bloated abomination of a system.

Edit: to expand, with enough PCIe slots, all your problems are solved. Not enough SATA ports? Add an HBA (which, if you’re going Proxmox, would be a good idea anyway if you’re going to virtualize TrueNAS). Going to use Plex and need transcoding? Add a GPU. Want 10 gig? Toss in a NIC. Etc., etc.
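If you do end up virtualizing TrueNAS under Proxmox, here’s a minimal sketch for sanity-checking that the HBA lands in its own IOMMU group before you commit to anything (it just walks the standard sysfs layout on any Linux host; the IOMMU has to be enabled in BIOS and on the kernel command line first):

```python
#!/usr/bin/env python3
# List IOMMU groups on the Proxmox host. For clean PCI passthrough,
# the HBA should ideally be the only device in its group.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups; enable intel_iommu=on / amd_iommu=on")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(dev.name for dev in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```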

Thanks a lot for all the insights — really appreciate it!

My ideal setup is still to keep the NAS and the server separated. If one goes down, I can still rely on the other, and honestly, I enjoy tinkering with this stuff enough that I don’t mind spending more time maintaining both :grinning_face_with_smiling_eyes:

That said, if I go with a dedicated NAS only, what would you guys recommend in terms of CPU/motherboard combo that offers enough PCIe lanes for a 10 GbE card and maybe an HBA for extra SATA ports — while staying reasonably cheap and not overly powerful (since it’ll only handle storage)?

Depends on your budget, but it’s always worth checking out used servers, depending on your local market/eBay deals. That route assumes noise isn’t a factor in your purchase.

Otherwise, the build in my signature technically does have an HBA, a 10-gig NIC (mildly sketchy), and a GPU for transcoding. Eight SATA ports built in, IPMI, a decent core count (depending on what CPU you throw in), two NVMe slots, etc. But my love of plugging stupid crap into a motherboard eventually made it inadequate.

Edit: I’m also going to argue that you don’t need a cache. L2ARC, SVDEV, and SLOG all have their uses, but there’s a solid chance a ‘cache’ isn’t what you think it is.

Unless you have a measurable need, L2ARC and SLOG will at best be useless. SVDEV is the exception, but it comes with the risk that if it fails, your entire pool dies. Many people aren’t running enterprise-grade equipment with proper redundancy and foresight, and they’re sad to learn that their data is irreversibly gone, or that they can’t remove the SVDEV once it’s added.

Maxing out RAM is generally seen as the best first step to improving ZFS performance (yes, there are exceptions, I know).
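On “measurable need”: the simplest number to check is the ARC hit ratio; if it’s already in the high 90s, an L2ARC has almost nothing left to catch. A minimal sketch, assuming ZFS on Linux (that’s where /proc/spl/kstat/zfs/arcstats lives; the counters are cumulative since boot):

```python
# Compute the ARC hit ratio from OpenZFS kstats on Linux.
stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f.readlines()[2:]:     # first two lines are headers
        name, _kind, value = line.split()
        stats[name] = int(value)

hits, misses = stats["hits"], stats["misses"]
print(f"ARC hit ratio since boot: {100 * hits / (hits + misses):.1f}%")
# High 90s -> more RAM or an L2ARC would gain you very little.
```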


I’ve been posting this link a lot recently:

The point where you want a 10 GbE NIC, multiple NVMe drives, and a GPU to encode already puts you past the PCIe lane count of a consumer board.
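To put rough numbers on that (the per-device lane widths below are common choices, not requirements, and the consumer figure is a typical current Intel desktop CPU, not any specific board):

```python
# Tally desired PCIe lanes against what platforms expose.
# Widths are typical, not mandatory (a 10GbE NIC can run fine at x4).
wanted = {"10GbE NIC": 8, "HBA": 8, "GPU": 8, "2x NVMe": 8}
consumer_cpu_lanes = 20   # e.g., 12th/13th-gen Intel: x16 + x4
epyc_7002_lanes = 128

total = sum(wanted.values())
print(f"wanted {total} lanes vs consumer {consumer_cpu_lanes} vs Epyc {epyc_7002_lanes}")
# The overflow lands on chipset lanes, which all share one uplink
# to the CPU -- which is the "past a consumer board" point above.
```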

Used Epyc 7xx2 CPUs are pretty cheap now and come with 128 PCIe lanes; the rest is down to the correct motherboard (which isn’t as cheap). Dan’s suggestion is cheaper but lacks PCIe lanes, though frankly the bloated-abomination route is absolutely overboard for a home NAS.

Edit: Can be fun, though. Might be cheaper than ricing your car. Likely safer as well.

I use an ASRock Z790 Pro RS with four M.2 SSD slots and eight SATA connectors, plus an i5-12400 CPU…


…aren’t necessary for the NAS-only route.

The board I linked has dual onboard Intel 10 GbE and a PCIe 3.0 x16 slot. That slot can bifurcate to x8/x8, x8/x4/x4, or x4/x4/x4/x4 if desired, which can give you four NVMe drives (in addition to the one onboard), two NVMe drives plus an x8 GPU, etc.


I’d suggest an ASRock Rack E3C246D4U2-2T (from Chinese sellers on eBay) with a Core i3-9100 or a Xeon E-2100/2200. It has a pair of slots bifurcatable from x16/x0 to x8/x8… which you likely will not need, since the board already has 8 SATA ports and 10GBase-T.
The only issue may be finding DDR4 ECC UDIMMs at a reasonable price these days.

X10SDV is also a valid option, though with less flexibility for (unspecified) future expansion.