Is my hardware suitable for TrueNAS SCALE?

Hi everyone!

I’m considering implementing TrueNAS SCALE on my system, but before moving forward, I’d like to know if my hardware is suitable for running it. Below are the components I plan to use:

Processor:

  • AMD Ryzen 9 7950X3D (4.2 GHz / 5.7 GHz)

Storage (NVMe):

  • 2 x Samsung 970 EVO Plus 250 GB PCIe NVMe M.2 in RAID 1 configuration

RAM:

  • 4 x 32 GB ECC DDR4 RDIMM 3200 MHz (128 GB total)

Power Supply:

  • Corsair HXi Series HX1200i 1200W 80 Plus Platinum Modular

Motherboard:

  • ASUS TUF GAMING B650-PLUS

Network Card:

  • Intel X540-BT1 10Gb PCI-E

Chassis:

  • 19" 4U Rackmount case with 20 Hot Swap bays UNYKAch ATX USB 3.0

My goal is to set up a storage system with TrueNAS SCALE, primarily for storage and container virtualization. The system will be managing a large amount of data, which is why I’ve invested in higher-end hardware. Does anyone have experience using this type of hardware with TrueNAS SCALE? Do you think it’s compatible, and will the performance be adequate?

Also, I have a question about the chassis. It has a USB 3.0 connection for the front panel, and I’m concerned that this might create a bottleneck for the data transfer between the disks and TrueNAS, especially since I’m planning to use high-performance drives. Should I be worried about this? Would the USB 3.0 connection interfere with disk performance, or is there a better way to connect the storage for optimal performance?

I appreciate any advice or suggestions. Thanks!

  1. You will need a boot drive - so if the 2x250GB SSDs are not intended for this, then you will need an extra one. Boot drives only need to be mirrored if you have a high-availability requirement.

  2. Don’t use USB drives - just don’t!!! If you need storage make sure that it is SATA or SAS attached, and if necessary buy PCIe HBA card(s) to attach them.

  3. I haven’t looked at the specifics of your proposed MB, but gaming MBs tend to be low on disk attachments cf. NAS-specific MBs, and consumer processors (even ones intended for gaming) generally have far fewer PCIe lanes than e.g. Xeon processors intended for servers; aside from PCIe slots for graphics cards, they can also be limited on PCIe slots cf. a server MB. Assuming you want to fully populate the 20 hot-swap bays, you will need at least 1 and possibly 2 or even 3 HBAs to attach these drives (see the rough sketch just after this list). So you will need PCIe lanes and slots.

  4. If you are just doing NAS then you need neither a massive processor nor quite so much memory. If you want to give details of any apps / virtualisation you have planned, and of the types and amounts of data, then we can probably give more detailed advice.
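
To make point 3 concrete, here is a rough sketch (Python, with assumed typical values) of how 20 bays map to HBA connectors. It assumes each backplane connector carries 4 drives, the usual arrangement for a passive backplane without an expander; your actual case may differ.

```python
import math

bays = 20                    # hot-swap bays to populate
drives_per_connector = 4     # one mini-SAS connector (e.g. SFF-8087/8643) = 4 lanes = 4 drives

connectors_needed = math.ceil(bays / drives_per_connector)   # -> 5

# Typical internal connector counts on LSI-style HBAs
hba_options = {"-8i": 2, "-16i": 4, "-24i": 6}

for name, ports in hba_options.items():
    cards = math.ceil(connectors_needed / ports)
    print(f"{name}: {cards} card(s) needed for {connectors_needed} connectors")
# -8i -> 3 cards, -16i -> 2 cards, -24i -> 1 card
```

That is where the "1, possibly 2 or even 3 HBAs" figure comes from.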

1 Like

Good luck with that.

2 Likes

Hi Alejandro,

Please note I’m going to point out a few things to make sure you get the best setup. There are imbalances in the components listed for use as a “normal” NAS with any NAS software, but your specific requirements might be an edge case. A couple of changes could make this hardware work if you are sure of your long-term requirements. There’s a lot of emphasis on PCIe lanes below because they are critical for expansion.

The 7950X3D is a very powerful CPU, but it doesn’t have many PCIe lanes. Those lanes are what allow NVMe drives and expansion cards to communicate with the CPU. Server-grade CPUs (Xeon or EPYC) usually have many more lanes, giving you more PCIe slots.

The number of slots on a motherboard can be misleading. The motherboard you listed has one x16 slot, usually for a GPU, and the other x16 (physical) slot only has 4 PCIe lanes. That means a card that expects more lanes could very easily be bottlenecked or not work as expected.

If you populate the third M.2 slot it will disable that second PCIe slot (x16 size, 4 lanes). You will need two expansion cards (HBA and NIC), so only 2 NVMe drives are realistically possible.
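
As a rough illustration of why lane counts matter, here’s a small sketch with approximate per-lane figures (not board-specific measurements): usable slot bandwidth is roughly lanes × per-lane throughput at the negotiated PCIe generation, and the link drops to whichever generation and width both the card and the slot support.

```python
# Approximate usable PCIe bandwidth per lane, per direction, in GB/s
# (after encoding overhead; real-world figures are a little lower)
PER_LANE_GBPS = {2: 0.5, 3: 0.985, 4: 1.97}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Rough per-direction bandwidth of a link at a given generation and width."""
    return PER_LANE_GBPS[gen] * lanes

# A typical SAS3 HBA is a PCIe 3.0 x8 device:
print(link_bandwidth(3, 8))   # ~7.9 GB/s in a true x8 slot
print(link_bandwidth(3, 4))   # ~3.9 GB/s when the slot only wires 4 lanes
```

So a card designed for x8 will still work in the x4-wired slot, but with roughly half the host bandwidth available to it.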

The 7950X3D would usually need to be in a NAS running some heavy workloads for the cost and power usage to be worthwhile. I guess you chose it for a lot of CPU-intensive workloads; this could work, but it locks you into a system that can’t take more PCIe cards (apart from the x1 slots, which don’t support much). You won’t be able to put in a GPU, for example.

The RAM you listed is DDR4. That won’t work with the motherboard/CPU as they need DDR5.

If you need more PCIe lanes, note that server CPUs also tend to use registered memory; the DDR4 RDIMMs you listed are registered, whereas consumer boards need unbuffered DIMMs. Workstation Xeon CPUs can use unbuffered memory, but they usually have fewer PCIe lanes than server-specific CPUs. You should think about any upgrade path you might want to take with the system that would require a PCIe card, to make sure you spec a system that can handle the upgrade. You will usually sacrifice a significant amount of CPU single-core speed with server CPUs versus the 7950X3D, though.

The two NVMe drives don’t have much space, and if you use them as a mirrored boot drive you will realistically be limiting your upgrade options. I would look at a pair of low-capacity but reliable SATA SSDs for booting (attached to the motherboard SATA ports). The NVMe drives would then be available for a small, fast mirror (until you replace them with larger ones, maybe); they would come in handy for Docker or virtualization.

The power supply is rated higher than usually required, but it will handle the spikes in power at boot (mostly drive spin-up) very well.
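
For a sense of scale, here’s a very rough power budget sketch; all wattages below are assumptions rather than measured figures for these exact parts, and the spin-up spike from 20 mechanical drives is usually the biggest transient.

```python
# Rough, assumed wattages -- check the datasheets for your actual parts
steady_watts = {
    "cpu_under_load": 160,          # approximate package power for a 7950X3D
    "motherboard_ram_fans": 60,
    "hba_and_nic": 40,
    "hdds_active": 20 * 8,          # ~8 W per spinning drive
}

spinup_extra = 20 * 20              # ~20 W extra per drive if all spin up at once (worst case)

steady = sum(steady_watts.values())
peak = steady + spinup_extra
print(f"steady ~{steady} W, boot-time peak ~{peak} W")   # roughly 420 W / 820 W
```

Even with pessimistic assumptions, the 1200 W unit has plenty of headroom; it’s more than adequate rather than necessary.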

The X540-T1 (assuming it’s that, as BT1 might be a typo or a mezzanine card, and a mezzanine card would certainly be a problem) is likely to be PCIe version 2. Putting that in the PCIe version 4 x4 (x16 physical) slot isn’t something I have tried. I’d suggest trying it: check that it negotiates a 10GbE connection and see if you get acceptable performance in a test that saturates the connection (remembering that some workloads can push lots of data both ways, so you might hit a bottleneck there even if everything else works).
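
If it helps when testing, here’s the back-of-the-envelope maths for that slot; the figures are approximate, and I’m assuming the X540 family’s PCIe 2.x x8 interface and the fact that PCIe links are full duplex.

```python
# Per-direction usable bandwidth of a PCIe 2.0 link limited to 4 lanes
pcie2_per_lane = 0.5           # GB/s per lane, per direction (approx., after encoding overhead)
link = 4 * pcie2_per_lane      # ~2.0 GB/s each way in the x4-wired slot

ten_gbe = 10 / 8               # 10 Gb/s = 1.25 GB/s per direction

print(f"link ~{link} GB/s per direction, 10GbE needs ~{ten_gbe} GB/s per direction")
```

On paper there’s headroom even with traffic flowing both ways at once, but protocol overhead eats into it, which is why a saturation test (iperf3 or similar) is still worthwhile.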

Moving on to the drives. I assume you are using the Server Rack 4U HSW4520. If so, the USB 3.0 port on the front won’t interfere with the drives in the hot-swap bays. The 20 drives are connected to 5 backplanes in the case, each backplane handling 4 drives. Each backplane needs Molex power and has a connector that needs to be attached to a PCIe card known as a host bus adapter (HBA). Ideally this should not be a RAID card, because TrueNAS does the job of RAID and much more and needs direct access to the drives; a RAID card would need to be in “IT mode” for TrueNAS to work. HBAs usually work with SATA and SAS drives, but it’s best not to mix types on the same backplane. If you haven’t bought drives already, you definitely want to read up or watch videos on TrueNAS storage options.

The HBA card is very important. You have the x16 slot for it, but it’ll need enough connections to address all 20 bays. HBAs come as internal, external or a mixture; you’ll want one that’s internal for 24 drives. It’ll have 6 ports, but you’ll only need 5 cables to connect to the 5 backplanes. You should check the connections in your case (I’m only assuming it’s got 5 based on what I found). It’s also vital to know the connection type; there are lots and they have very similar names/numbers.
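
On the “very similar names/numbers” point, here’s a small, non-exhaustive cheat sheet of the common multi-lane connector names (shown as Python data just to keep it compact; each of these carries 4 lanes):

```python
# Common 4-lane SAS/SATA connectors you are likely to meet on HBAs and backplanes
connectors = {
    "SFF-8087": "internal mini-SAS, SAS2 era (6 Gb/s per lane)",
    "SFF-8088": "external mini-SAS, SAS2 era",
    "SFF-8643": "internal mini-SAS HD, SAS3 era (12 Gb/s per lane)",
    "SFF-8644": "external mini-SAS HD, SAS3 era",
}

for name, description in connectors.items():
    print(f"{name}: {description}")
```

Match the HBA side to the backplane side, or budget for the right forward/reverse breakout cable between them.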

If you are connecting mechanical hard drives, there are bargains to be found on eBay for old LSI cards; they often have “-24i” at the end of the card name. If you intend to hook up 20 SSDs, you shouldn’t be looking at the cheaper end of the used market: SSDs will need a newer card for best performance. That brings up cooling… server cards need a lot of cooling, sometimes even when they aren’t being pushed hard. Make sure you have a lot of airflow going through the case; the drives need that too.
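
To put rough numbers on why SSDs push you towards a newer card (the per-drive throughput and HBA link figures below are assumptions for a best-case sequential workload):

```python
# Assumed sequential throughput per drive, in GB/s
hdd, sata_ssd = 0.25, 0.55
drives = 20

# Approximate per-direction host-link bandwidth of typical HBAs
old_card = 8 * 0.5        # PCIe 2.0 x8, e.g. older SAS2-era cards: ~4 GB/s
newer_card = 8 * 0.985    # PCIe 3.0 x8, e.g. SAS3-era cards: ~7.9 GB/s

print(f"20 HDDs aggregate  ~{drives * hdd:.1f} GB/s")        # ~5 GB/s
print(f"20 SSDs aggregate  ~{drives * sata_ssd:.1f} GB/s")   # ~11 GB/s
print(f"old card ~{old_card:.1f} GB/s, newer card ~{newer_card:.1f} GB/s uplink")
```

Spinning disks only approach an older card’s ~4 GB/s uplink in an all-drives-streaming best case, while 20 SATA SSDs can exceed even a PCIe 3.0 x8 link, so with SSDs the HBA (and the slot it sits in) becomes the ceiling.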

It’s getting late here, hopefully there’s enough information above to point you in the right direction. I’ll check back tomorrow evening to see if you have any questions.

3 Likes

Well, technically TrueNAS will run on a 7950X3D in a gaming motherboard, but it is not necessarily the most suited, as explained by @elvisimprsntr. And it’s not even “high-end”; for a storage server with a current-generation AMD CPU, that moniker would rather point to EPYC 8004 “Siena”, and that EPYC processor would actually use RDIMMs.

Can you still return some of this hardware? You’ll need to return either the RAM or the motherboard+CPU pairing, as they are incompatible with each other. :person_facepalming:

The chassis looks like one of these “rackmount chassis for consumers who don’t know about racks”, so it’s probably a passive backplane without an expander and will require a -24i HBA, or at least a -16i and a reverse breakout cable, to serve all 20 bays. Cooling might be dubious…

Then a better explanation of your use case and requirements (how much storage? what kind of data? what kind of virtualisation workload? some high compute tasks or just (lots of) light containers?) would help us point to more suitable hardware.

2 Likes

Hi, I’m back with new results from my search, to fix all the mistakes I made before.

Here are the new things:

Processor: AMD EPYC 9534 (2.45/3.7 GHz).

RAM: Samsung M321R4GA0BB0-CQK DDR5 4800 MHz.

Motherboard: GENOAD8UD-2T/X550.

HBA: LSI SAS 9300-16i, with CableDeconn SFF-8643 cables.

Hard drives: Ultrastar DC HC580, 24 TB.

Chassis: Supermicro CSE-847E26-RJBOD1 JBOD 4U, 45x 3.5", with two 1400 W power supplies.

So what do you think about these new components?

The main purpose of this NAS is to store data produced as the output of some AI workloads, and other things like that.

Some docker or virtualization will be used too.

Thank you so much for the quick responses, cheers.

There’s no kill like overkill… You could run the AI itself on that class of CPU, rather than the mere storage!
This chassis is a JBOD: it takes the drives but no motherboard, so you’d need another chassis for the motherboard, alongside a -8e or -16e HBA. (If going for a -16, the 9305 is better than the 9300.)
Mind that, in practice, AsRockRack “deep micro-ATX” motherboards actually require a chassis which can fit CEB (“extended ATX”) boards. It may be easier to go for a regular ATX-size board and enjoy more PCIe slots.

3 Likes

I’m glad to hear you haven’t purchased all the items listed in the original post; the word “invested” made me think otherwise.

As stated by etorix, the components are very likely overkill. To help people help you, please be more specific about what you want the system to do. There’s a huge range of hardware that could be appropriate depending on the specific requirements; you might only need a system that’s much cheaper.

Another consideration is noise: is it OK for it to be loud? If it’ll be away from people, there are plenty of options. If it’s going to be near people, you need to be creative with components.

Knowing what system(s) are to be connected to the NAS will allow people to see if you are under-utilising current hardware. You might already have systems that can handle things that don’t need to be on the NAS.

2 Likes

Thank you for your response etorix.

Which chassis would you recommend to use with this kind of hardware, then?

Thank you for the clarification.

The main content that the NAS will process would be images, RAW files, text files, STLs and other 3D formats.

Budget is not a problem for the project, as it is a prototype.

Yes, the noise is not a problem since there’s already noise coming from the machinery.

Our leaders told us to use new hardware anyway, so I guess it’s OK not to take the existing hardware into account.

Regards and thank you so much for your help.

I can read specifications, but I’m not personally familiar with the Dell/HP/Lenovo/Supermicro lineups. Hopefully someone else can help you navigate pre-built options, if buying new.

My thoughts would have been to look for a refurbished storage chassis, or a whole server. If this is going to be mostly storage, the EPYC 9004 is gross overkill. A single-socket EPYC 8004 would do, or any equivalent Xeon 6, as would older generations of Xeon Scalable and EPYC 7000.

1 Like

Thanks for adding some context. When you say process, in what way and how large are the image and 3D files? With the text files, do you want to analyse them with a specific outcome in mind?

1 Like

My goal is to find a simple chassis to build all the components on it :slight_smile:

We are not interested in HP or Dell pre-built machines.

Thank you anyway for your response.

By process, I mean that the NAS is gonna have writes and reads constantly.

We are not sure what tasks the NAS will do, because our leaders just told us that they want a NAS with those requirements, and later they will make use of it.

They would also like it to be scalable.

I’m sorry if I’m not being that precise about the use of the NAS, but that’s the only thing we know. Unfortunately I cannot say how the NAS is going to be used, because the tasks are business-confidential.

Thank you for your help, appreciate it. :slight_smile:

That helps; unknown future requirements are a tricky one. I’d suggest going for a storage-centred build with the capability to run basic Docker containers. Future computing requirements are better left for when you know what they are. The exception would be if this is the only chance to build a box and getting a compute-centric system authorised later is likely to be hard or time-consuming. I’ll check back in around 6 hours to reply fully; I have some urgent things to attend to.

1 Like

It’s fine, I will wait for your possible response.

Good luck with your jobs.

Regards.

How very Dilbert-esque!!! They clearly haven’t set you up for failure.

Personally, I think that the only way to be certain to meet an unclear requirement is to go much much much more OTT with the proposed solution than you are currently doing. My guess is that:

  • “Large amounts of data” means several thousand petabytes. You will clearly need an entirely new datacentre to house all those disks.
  • Don’t forget that you will need a second new datacentre for your offsite backups.
  • And a massive data pipe between those data centres for the backups to happen.

Turning now to server hardware:

  • It sounds like your management haven’t been explicit about how high the availability needs to be, so you will need to plan for a very high availability solution. That means several servers, and they probably also need to be in two locations - now you need even bigger data centres and a much much faster pipe between them to support the real-time traffic rather than just backups.
  • There is no point in putting the VMs on separate servers, because then you will be shipping data between storage servers and VMs over a network, so presumably your management will want you to run all the VMs on the same TrueNAS server. So you will need MASSIVELY powerful servers with MASSIVE amounts of memory for all the VMs (because AI is extremely memory hungry).

Disks

  • Presumably, the disks also need to be extremely fast to be able to feed the AI monsters, so they had better all be NVMe - or even Optane.

etc. etc. etc.

The budget for the above is now several $100k (if not several $m). So my question is whether:

  1. Your management will sign off on this budget; or

  2. They will be clearer about what the requirements are, so that you can tell us and we can help you spec the hardware rather than iteratively critiquing your own specs (which I think you might well admit are based on you not having expert knowledge).

Specifically:

  • How many VMs, running where, on what sized LAN, with what storage and storage I/O requirements
  • How much storage, what performance requirements

I hope this helps. :grin:

3 Likes

OK… I’m clearly not enough of a business person to figure out the context where you’re given the mission to go out and build a new server with no precise requirements, while expressly avoiding any pre-built solution. Obviously @Protopia has the right instinct about it :grin:

If I were to pick up parts and overkill is not an issue:

  • SuperChassis 846XE2C-R1K23B, or just about any other chassis on this page which does not sport “JBOD” and actually takes a motherboard;
  • because a single socket is enough for storage, a SIENAD8-2L2T or, if you want a crapton of RAM in there, a H13SSL-NT;
  • your pick of a matching EPYC 8004 or 9004/9005 (high core counts are not needed for storage);
  • 6/12 DDR5 RDIMMs to fill the memory channels;
  • a 9305-16i HBA, or a -8i8e if you already anticipate adding even more drives through an additional JBOD chassis (but there are plenty of PCIe slots to add a second HBA just for the JBOD).

Weird set of requirements.

We work with people that have no clue about hardware specifications, but they can usually describe their desired end-state so that the nerds can figure out what hardware will make their dreams come true (with a little headroom).

I’m not sure how you went from a previous-gen 16-core desktop CPU to a latest gen 64-core EPYC.

For context: I’m sharing ~800TB of data, a couple VMs, and ~40 docker containers, and my old EPYC 7702 (2nd gen, basically ancient) sits around ~5% utilization.

Since you don’t have a clear picture of the workload, I would recommend buying a 2nd hand server for cheap and see how it goes. File servers do not need much compute.

As posted above by others, this is not the usual way to do this.

A few other things to consider:

You would like new components but not pre-built; not having someone with experience usually pushes companies towards vendors who offer support. You are in the unenviable position of spending a lot while having no support or proper pre-purchase requirements. You might want to look at suppliers in your country that can support you; there will be a cost, but it will be worth every penny if things go wrong. It’s also likely that you’ll save a good amount on hardware by not overspending.

Going without support and putting a system together yourself often leads to situations where inexperienced people can’t figure out hardware or software set up - things can crop up even for the technically confident. Paying someone to sort that, with pre purchased components, is going to be pricey.

Security… There hasn’t been any discussion about that (or about ongoing technical tasks such as managing updates and ensuring data integrity/restoration). Is there anyone in your company who is capable of securing that box and keeping it secure, reliable, backed up and up to date? If not, you have another reason to go with a vendor that sells lots of machines. Hardware and software support are different, so you would need to make sure the contract covers what you need. Using a server that’s very common allows for easier support: the vendor doesn’t need to care about thousands of possible components, just those that they offer and support.

Imagine the system can’t be put into operation in time, or stops working, or you lose data, or someone gets access to your data. I wouldn’t want to be the one “responsible”, hence I’d strongly suggest that you summarise the advice from posts here into a small document that you can discuss with the person who tasked you to do this (as long as they are receptive to discussion). Get their response in a format you can refer to later if it goes wrong.

I’m worried this will go badly and you will need to ensure you are not the one to blame. It’s worth the time to do this, they might agree to things they said they didn’t want when you point out the potential pitfalls (and they will know that you alerted them).

1 Like