Building first NAS - what to do with PCIe?

So I had a look at Joe’s guide for asking for build help but I think this is worth asking.

The context (hardware):
Rather than lay out a set of requirements and choose hardware to meet them, I’m starting with some old hardware I’d like to turn into an ITX form factor home NAS box. I have a Ryzen 9 5900X and an ASRock B550M-ITX/ac, and I’ve got 2× 16GB of unbuffered ECC coming in the mail.

The context (applications):
My intentions with this hardware are fairly loose; for now I want to see what this machine is capable of.
I’d like to store videos in a file server (personal holidays, etc.).
I’d like to host a modded Minecraft server or two (maximum 10 people).
If possible, I’d like to have a VM that I can remote into, maybe just a desktop with personal documents (no gaming) - but I’ve probably misunderstood this application.

The context (other):
I have successfully installed ElectricEel-24.10.1 SCALE, and have even discovered that my motherboard can POST and boot to TrueNAS in headless mode. This was an unexpected but welcome surprise.

Question(s):
While I’d originally planned to use just the four onboard SATA ports and the M.2 slot, do I even need a video card to transcode anything at this point? I know my CPU does not have built-in graphics, but it sounds like it’s powerful enough to brute-force transcode videos by itself (e.g. if I were running Plex), or am I missing something here?

I have a spare GTX 750 1GB which I had been using in the motherboard’s single PCIe slot, but I’m now tempted to use a 4-drive NVMe-to-PCIe adapter to add a bunch of storage to my system if the CPU can brute-force it just fine. Am I shooting my future hardware-debugging self in the foot if I max out my storage using an adapter?

Or maybe I’ve got this all wrong and the real solution is to buy a 10GbE networking adapter - so I can put a GPU in a different gaming machine and leave this NAS box as just a doc/photo/video file server and Minecraft server with a much better network connection.

Thanks in advance for your help and suggestions :smiley:

My main server is running an Intel Xeon E3-1241 v3; it’s pretty old, but has no problems transcoding with Plex. All of its PCIe slots are filled up with HBA cards.


If you have a switch/firewall that can support 10g connections, go for it, 10g is better than 1g :grin:


VMs are usually headless Unix applications. If you want a VM of a desktop OS, you’ll need a GPU to pass through to the guest OS, either a dGPU or swap the CPU to an APU. (And hope that IOMMU grouping is done right on this consumer board…)
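
If you do try passthrough, it’s worth checking the grouping before buying anything. Here’s a minimal sketch (assuming a Linux shell on the box, the IOMMU turned on, and lspci available) that prints each IOMMU group and the devices in it, so you can see whether the GPU is isolated:

```python
#!/usr/bin/env python3
# Quick look at IOMMU groups: the GPU should ideally sit in its own group
# (or share only with its HDMI audio function) for clean passthrough.
# Assumes a Linux host with the IOMMU enabled and lspci installed.
from pathlib import Path
import subprocess

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups found - is the IOMMU enabled in BIOS/kernel?")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        # lspci -s <address> prints a one-line description of the device
        desc = subprocess.run(["lspci", "-s", dev.name],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or dev.name}")
```

If the GPU ends up grouped with devices the host still needs, passthrough gets messy, since everything in a group has to be handed to the guest together.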

You have not specified any transcoding requirement in your stated use case. You decide on this one.

No. You just have to decide what to do with your single slot:

  • 4 M.2
  • 2 M.2 + half-height dGPU/NIC
  • full-height GPU

Probably this (a better NIC), given that the on-board NIC is Realtek.


Thank you everyone for the replies.

Yes, I have heard many people trash the Realtek NIC and advocate for Intel. I think I will drop the desktop VM idea that requires a GPU, and instead find a better networking solution.

My CPU (Xeon 4210T) is capable of brute-forcing a 4K stream. The Ryzen 9 5900X is quite beefy; I’d imagine it can do the same.
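
If you want to sanity-check that before committing the PCIe slot, one rough way (just a sketch - assuming ffmpeg/ffprobe are installed and sample.mkv stands in for one of your own videos) is to time a CPU-only transcode against the clip’s length:

```python
#!/usr/bin/env python3
# Rough transcode benchmark: time a CPU-only (libx264) encode of a sample
# clip and compare it to the clip's duration. A ratio well under 1.0 means
# the CPU can brute-force that stream faster than real time.
# Assumes ffmpeg/ffprobe are installed; "sample.mkv" is just a placeholder.
import subprocess, time

SAMPLE = "sample.mkv"

duration = float(subprocess.run(
    ["ffprobe", "-v", "error", "-show_entries", "format=duration",
     "-of", "default=noprint_wrappers=1:nokey=1", SAMPLE],
    capture_output=True, text=True, check=True).stdout.strip())

start = time.monotonic()
subprocess.run(
    ["ffmpeg", "-v", "error", "-i", SAMPLE,
     "-c:v", "libx264", "-preset", "fast", "-crf", "22",   # software encode
     "-an", "-f", "null", "-"],                            # discard the output
    check=True)
elapsed = time.monotonic() - start

print(f"clip: {duration:.0f}s  encode: {elapsed:.0f}s  "
      f"ratio: {elapsed / duration:.2f}")
```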

Honestly, I’d use that GPU for passthrough to your workstation desktop VM instead, for a few reasons. Working in a VM without a GPU tends to suck; it usually feels quite sluggish. Also, hardware-transcoded video tends to have worse quality than a brute-force software transcode unless you have a fairly recent Intel iGPU with recent QuickSync support.

10G NIC sure would be nice, but unless your workload includes constantly transferring gigs of data for hours, 1G is more than enough for the vast majority of people.
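
To put some rough numbers on that (back-of-the-envelope, assuming about 90% of line rate is actually achievable):

```python
# Back-of-the-envelope transfer times, assuming ~90% of line rate is
# actually achievable (protocol overhead, disks keeping up, etc.).
def transfer_minutes(size_gb, link_gbps, efficiency=0.9):
    throughput_mb_s = link_gbps * 1000 / 8 * efficiency   # MB/s on the wire
    return size_gb * 1000 / throughput_mb_s / 60

for size_gb in (10, 100, 500):
    print(f"{size_gb:>4} GB: {transfer_minutes(size_gb, 1):6.1f} min on 1GbE, "
          f"{transfer_minutes(size_gb, 10):5.1f} min on 10GbE")
```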


Thanks for the additional advice! I’ve been thinking about it and yeah, it doesn’t make sense to go hard on the networking if I don’t know that I’ll have all the networking infrastructure to exploit it.

I think based on everyone’s feedback, my conclusions are:

  • I am unlikely to exceed the spec of the onboard 1GbE, and I won’t know whether a better NIC (with features like TCP offloading) will benefit me until I actually use my NAS and see which of my use cases are bottlenecked.
  • More storage slots would be very nice, but I may find this setup limiting from a functionality standpoint if in the future I need a GPU to debug the setup or even run a desktop VM.

I also discovered that inline PCIe splitters (bifurcation risers) are a thing (hinted at by etorix earlier, thank you!!) and that my motherboard supports x8/x8 bifurcation for PCIe risers (B550M-ITXac.pdf, p. 3, Vermeer/Matisse CPUs). I think the most compelling course of action for me now is to buy a PCIe lane splitter.

The splitter I am looking at is PCIe 4.0 x16 to x8/x4/x4, and the x8 slot is stacked inline with the x16 in a way that makes me think I can sneak in a half-height GPU and NVMe storage at the same time - potentially the best of both worlds (again, thanks etorix).
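
For a rough sense of what each slice of that split can carry (a quick sketch; PCIe 4.0 works out to roughly 1.97 GB/s of usable bandwidth per lane):

```python
# Approximate usable PCIe 4.0 bandwidth per link width (~1.97 GB/s per lane
# after 128b/130b encoding), to sanity-check an x8/x4/x4 bifurcated slot.
GBPS_PER_LANE_GEN4 = 1.97   # GB/s, rough figure

for label, lanes in (("x8  (half-height GPU)", 8),
                     ("x4  (NVMe drive #1)", 4),
                     ("x4  (NVMe drive #2)", 4)):
    print(f"{label:24s} ~{lanes * GBPS_PER_LANE_GEN4:5.1f} GB/s")
```

Even a single x4 slice comfortably outruns anything a 1GbE or 10GbE link could ask of a drive, so the split shouldn’t be the bottleneck for NAS duty.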

Something like that: :wink: