TrueNAS hardware upgrade design help

I currently have a TrueNAS server running on hardware from 2013. It provides archive storage on six 2TB HDDs, and I also have Jellyfin running on it.

I’m looking to build a new server around the AMD Ryzen 7 Pro 4750GE with 64GB of ECC memory. I’m hoping for power efficiency, but this time around I also need it to provide shared storage for my two XCP-ng servers.

I think the iGPU should be fine for Jellyfin transcoding.

I was also thinking about using a PCIe card that holds four M.2 SSDs for fast shared storage, and moving my six 2TB HDDs over for slow archival storage.

So I believe I will need a motherboard that supports bifurcation to do this?

I will also need another slot for my 10Gb SFP+ card.

Is anyone else here doing a similar build who can offer advice?

Any help will be much appreciated!

Thank you,

Joe

That CPU only has 16 PCIe lanes according to TechPowerUp, so you can’t fit four NVMe drives and the NIC regardless of bifurcation.

With bifurcation you might get away with an x8x4x4 adapter and a half-height NIC. If you already have a full-height NIC you might need to go back to the drawing board, or make a few sacrifices.
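
As a rough sanity check on the lane math (assuming TechPowerUp’s 16-lane figure and x4 per NVMe drive; the NIC’s lane count is my guess based on common 10Gb SFP+ cards):

```python
# Back-of-the-envelope PCIe lane budget for the 4750GE.
# Assumptions: 16 usable CPU lanes, x4 per NVMe drive, x4 for the NIC.
cpu_lanes = 16

wanted = {
    "4x NVMe on a bifurcated riser": 4 * 4,
    "10Gb SFP+ NIC": 4,
}

total = sum(wanted.values())
for name, lanes in wanted.items():
    print(f"{name}: x{lanes}")
print(f"wanted x{total} vs available x{cpu_lanes} -> short by x{total - cpu_lanes}")
```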

Thank you for the reply, much appreciated.
Definitely a point to consider, and in the end I do want a system that will not only be functional, but also last a little while without the need for upgrades.
It seems I might need to adjust my balance of efficiency vs power.
My goal isn’t to create a super power-efficient server that runs on 20 watts; honestly, if I can keep it under 100W I’ll be happy.
I’m also treating this as a learning experience; I want to make sure I’m weighing all the considerations and not just relying on outdated information.
A recommendation on another forum was to use enterprise SSDs over desktop NVMe drives. So far I’ve seen mixed opinions on that.

TBH I’d take any of my advice with a pinch of salt; I’m not that knowledgeable. I’d also be mindful that there are motherboards with PCIe lanes on the chipset, so I can easily see a scenario where you can run roughly what you want, just with limited expansion options.

I.e. if you wanted to run more hard drives, you’d need an HBA, and you wouldn’t have a slot for one if you’re also trying to run the 10Gb NIC and a bunch of NVMe drives.

OTOH, I’d ask what you want to run the NVMe drives for, i.e. what level of performance? Because if you’re looking at enterprise-level performance, it’s probably worth looking at used enterprise gear, especially if you want to pass through a decent number of cores (used Epyc processors are cheap now; motherboards, not so much). And if you want to run a bunch of VMs, is TrueNAS the right hypervisor given the current upheaval in its VM backend?

Sorry, I’m probably not helping you. :grimacing:

You are helping and I truly appreciate the engagement.
My current TrueNAS SCALE server is basically running Jellyfin along with the other storage duties. I’ve run into issues with transcoding, and that box is 12 years old.
When I tried using it for shared storage, I would consistently get errors on my VMs that the storage had disappeared.
I’ve recently replaced my old XCP-ng servers with two new servers built on the BD795 SE, so I’m not going to be running any VMs on this new TrueNAS build. I just need it to provide reliable shared storage, with enough speed that the VMs feel snappy, nothing crazy.
The workloads my VMs create aren’t very heavy either, so the 10Gb link should be good enough for now, but I would like some headroom so that if I decide to go 25Gb at some point I won’t have to upgrade. Doing some work with AI/LLMs might be in the cards in the future as well, so those are some other things I want to take into consideration as I work through this.

Oh, that makes more sense. GPU held separately, and (mirrored?) NVMe drives on the Minisforum? So it’s purely a NAS build.

So what’s the limiting factor? I think it’s the network speed on the Minisforum, so just plan to saturate that link and you’re good to go (given the equipment in place).

A couple of mirrored M.2 drives would saturate that no trouble, but I think a couple of mirrored HDD pairs would also saturate the link, depending on the task.
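
Very rough numbers to illustrate (the per-drive throughput figures below are assumptions, not benchmarks of any specific drive):

```python
# Rough check of what it takes to saturate a 10Gb link.
# Assumed sequential throughput: ~3000 MB/s per NVMe drive, ~200 MB/s per HDD.
# A ZFS mirror can read from both members, and striped mirror vdevs add up.
link_mb_s = 10 * 1000 / 8            # ~1250 MB/s, ignoring protocol overhead

nvme_mb_s, hdd_mb_s = 3000, 200      # assumptions

mirrored_nvme_pair = 2 * nvme_mb_s           # reads from one mirrored pair
three_hdd_mirrors = 3 * 2 * hdd_mb_s         # reads from three mirrored pairs

print(f"10GbE ceiling:                ~{link_mb_s:.0f} MB/s")
print(f"mirrored NVMe pair (reads):   ~{mirrored_nvme_pair} MB/s")
print(f"3 mirrored HDD pairs (reads): ~{three_hdd_mirrors} MB/s")
```

Random VM I/O on spinners would of course fall well short of those numbers, hence the “depending on the task”.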

Intel iGPUs are better at transcoding, and AMD APUs like the 4750GE only bifurcate x8x4x4, which is enough for a NIC and a pair of M.2 drives for fast storage.
With these caveats, your plan is fine, but you may as well consider an Intel build (C246 + i3-9100 or Xeon E).

So based on some recommendations here, as well as on the Lawrence Systems forum, I’m thinking about going with the following setup:

ASRock Rack E3C246D4U2-2T

Intel Xeon E-2224G CPU

64GB DDR4/2666 Memory (to start, may bump up to 128GB)

Samsung MZ-7KM9600 1TB 2.5-inch enterprise SATA SSD

ASM1166 SATA controller in the M.2 slot

I was going to use the six SATA ports to connect my HDDs.

Using the M.2 means I can only use one more onboard SATA port for the OS. Do you think it’s best to just use that one port for the OS drive and put the six SSDs in three mirrored vdevs as my “fast” pool, or have two mirrored vdevs as the pool and use a mirror for the OS?
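
For what it’s worth, here’s the rough usable-capacity trade-off I’m weighing between those two layouts (assuming 1 TB per SSD; just my own back-of-the-envelope math):

```python
# Usable capacity of the two "fast" pool layouts I'm considering,
# assuming 1 TB SSDs. Each mirrored vdev contributes one drive's capacity.
ssd_tb = 1

# Option A: OS on the single spare onboard SATA port, all six SSDs in the pool.
option_a = {"fast pool (3 mirror vdevs)": 3 * ssd_tb, "OS drives": 1}

# Option B: two SSDs mirrored for the OS, four SSDs left for the pool.
option_b = {"fast pool (2 mirror vdevs)": 2 * ssd_tb, "OS drives": 2}

for name, layout in (("Option A", option_a), ("Option B", option_b)):
    print(name, layout)
```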

Please let me know if you have any criticisms of the build so far, any comments would be greatly appreciated!

Don’t bother with a SATA controller on M.2: get a cheap M.2 NVMe drive for boot and use all eight (not six) of your motherboard SATA ports for storage.

Now the choice of 1 TB SATA SSDs for storage is strange (if I read it right; your list without quantities is not quite clear). You can get 1.92 and 3.84 TB enterprise drives; 1 TB is small (and feels consumerish…)

Mirroring the boot drive is utterly useless, unless you’ve been tasked by your company with building a “five nines” server, in which case you should be talking to a sales representative.
And don’t tell me you intended to use one or two of these 1 TB for boot…

So I currently have six HDDs in a pool that I was thinking about moving over to this build as archival storage. Then I wanted a “fast” pool that I plan to use for VM shared storage, VM disks, NFS shares for my Docker containers, that type of stuff. I was going to use the M.2 SATA controller for the six enterprise SSDs that will make up that “fast” pool.
If I use the M.2 for boot, then I would use an HBA or something similar for the six SSDs? Or am I missing something?

I was missing that you have six hard drives moving to the new NAS, plus six SATA SSDs.
Then, yes, you’d want an HBA, with the added benefit of putting your “fast” pool on CPU lanes.
As you intend it, the HDDs, the ASM1166 and its SSDs, and your 10GBase-T network would all hang off the C246 and its x4 uplink to the CPU while the 16 direct PCIe lanes remain unused, which doesn’t look like the best use of platform resources.
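
Roughly, summing worst-case peaks (all figures are assumptions, and real workloads rarely hit everything at once):

```python
# Rough peak-bandwidth sum of everything hanging off the C246 chipset,
# versus its x4 uplink to the CPU (~PCIe 3.0 x4). All figures are assumptions.
uplink_mb_s = 4 * 985                    # ~3940 MB/s for a PCIe 3.0 x4 link

behind_chipset = {
    "6 HDDs (sequential)": 6 * 200,
    "6 SATA SSDs on ASM1166": 6 * 500,
    "10GBase-T NIC": 1250,
}

total = sum(behind_chipset.values())
print(f"peak demand behind chipset: ~{total} MB/s vs uplink ~{uplink_mb_s} MB/s")
```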

Ideally, if you have not bought the SSDs yet, you’d do the “fast” pool on NVMe drives attached to CPU lanes. And, for the sake of power efficiency, consolidate the 6×2 TB HDDs into a smaller number of bigger drives :wink:
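
To put a rough number on the consolidation point (the idle wattage is an assumption, not a measurement of your drives):

```python
# Rough idle power comparison: six 2 TB HDDs vs. a single mirror of
# larger drives. The ~5 W per-drive idle figure is an assumption.
watts_per_hdd = 5

six_small_w = 6 * watts_per_hdd    # current 6 x 2 TB layout
two_big_w = 2 * watts_per_hdd      # e.g. one mirrored pair of bigger drives

print(f"6 x 2 TB:   ~{six_small_w} W idle")
print(f"2 x larger: ~{two_big_w} W idle, ~{six_small_w - two_big_w} W saved around the clock")
```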

I just found the ASRock Rack W680 WS for an amazing price; I think I’m going to pick that up and pair it with the i5-12500.