Help with Ex-Enterprise Hardware Selection for First TrueNAS Build

I will preface this by saying I’m located in Perth, Australia. Our hardware market is less abundant than the US.

I’m looking to “build” my first NAS for media storage, shared drives and backups. The current plan is to run a 6-wide Z2 in a refurbished 2U 12LFF server. This will leave me with capacity for another vdev if I need to expand. Alternatively, I could go with 8LFF, which may increase chassis availability. It seems I should be looking for a Dell R740xd, HPE DL380 Gen10 or an equivalent Supermicro. I’ll rack it in a half-height cabinet which I’ll pick up once I know the server length.

I haven’t bought server hardware or specced a box for TrueNAS before, so I’ve only got a basic understanding of what I’m doing. What should I be looking out for and are there any gotchas? I know I need an HBA or a card that can be set to run as pass-through.

There seem to be a number of refurbishers operating on eBay; some also have websites and others only have eBay stores. Are there any other places I should be looking for vendors? It looks like there are also US eBay sellers that ship internationally, which may increase availability and save money but could also end up being a massive pain.

Is Dell 14th gen/HPE Gen10 a good choice? Some of them are going for proper money. Going back a generation seems much cheaper, but then it’s even older and potentially more power hungry.

Hopefully that’s concise enough, thanks for your assistance,
Wes

To be precise: you need an LSI HBA flashed to IT firmware. That is very different from passing through drives on e.g. an Adaptec RAID adapter.
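
If you want to sanity-check what a card reports once it’s installed, something like this works (a rough sketch in Python; it assumes the Broadcom sas3flash utility is present, and the exact output format varies by tool and firmware version):

```python
# Rough sketch only: list Broadcom/LSI SAS3 controllers and check whether the
# firmware reports IT (pass-through) mode. Assumes the `sas3flash` utility is
# installed and on PATH; output format differs between tool/firmware versions.
import subprocess

listing = subprocess.run(["sas3flash", "-listall"], capture_output=True, text=True)
print(listing.stdout)

# For a single adapter, `sas3flash -c 0 -list` usually prints a
# "Firmware Product ID" line ending in "(IT)" or "(IR)".
detail = subprocess.run(["sas3flash", "-c", "0", "-list"], capture_output=True, text=True)
if "(IT)" in detail.stdout:
    print("Controller 0 reports IT firmware")
else:
    print("Controller 0 does not look like IT firmware - read the output above")
```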

1 Like

I would try to avoid a machine with 2 CPUs.

What max. power consumption do you consider acceptable?

1 Like

I’ve used a couple of IT resellers on eBay out of WA and had great experiences with them. As for the box specification, what are you trying to do? If you want a basic NAS and nothing else, 16GB RAM (they’ll all come with ECC) and the slowest, lowest core count CPU will net you the lowest energy consumption. If you want a high speed network (10Gb plus) you might get a benefit for SMB with a higher clock speed CPU, but probably not if using other protocols.

If you want to run virtual machines, then the CPU and RAM should be allocated accordingly. You’ll often find server gear comes with a minimum of 32 or 64GB RAM, so that’s not really an issue, but if you want to support lots of users and/or high speed networks, then lots more RAM as cache (ZFS calls it ARC) will probably be useful with spinning HDDs. Depending on generation, you can get a max of 384 or 768GB RAM per CPU, so if you want more than that you’ll need to use 2 CPUs, but I suspect you’re far away from that use case.

Aside from that, ordering one with 2 power supplies is probably a good measure (they’re pretty cheap) and decide if you want to run them all the time for redundancy or just put one on the shelf as a backup.

Lastly, if you want video cards, you’ll need to investigate what will physically fit in the chassis you want and whether special riser cards/cables are required to power it.

PS - since you’ve not bought server hardware before, are you prepared for the noise of a 2U server?

HPE, I know, are highly specific about third-party add-in cards. If the firmware doesn’t recognise a card, it’ll run the fans very fast to ensure nothing overheats, since it won’t be able to read temperature sensors on hardware it doesn’t recognise. This is also a legacy problem: a card may well be on the approved list for a Gen10 but won’t work in a Gen9, because they won’t update the firmware for the older one.

Final note, genuine drive caddies can be hard to come by for reasonable money after the fact. If you can get them with the server when you first buy it, I’d suggest you do so. This is quite important with HPE gear for the above noise reason. Third party stuff will work as far as talking to the drives, but may well annoy you in other ways.

PPS - replacing 8 bay with 12 bay LFF can be done but is generally a really expensive headache. There are other options, like a cheap SFF chassis plus a drive shelf (DAS, Direct Attached Storage) attached to the server, but this will use more energy than a single setup.

Also, for the future: the Gen8 12LFF has only two connections to the backplane, since the backplane has a built-in expander. I think for the Gen9 and Gen10 they dropped the expander, which means you’ll need 3 ports to open up all 12 bays, since the SAS cards have 4 lanes per port. It’s not hard to buy a second two-port HBA down the line if you need it, but it’s something to be aware of if you’re on the fence about wanting 12 drives instead of 8.
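
As a rough sketch of that maths (assuming a direct-attach backplane with one lane per bay and standard 4-lane ports):

```python
# Back-of-envelope: HBA ports needed for a direct-attach (no expander) backplane,
# assuming one SAS lane per drive bay and 4 lanes per HBA port (standard x4 connector).
import math

LANES_PER_PORT = 4

def ports_needed(bays: int) -> int:
    return math.ceil(bays / LANES_PER_PORT)

for bays in (8, 12):
    print(f"{bays} bays -> {ports_needed(bays)} HBA port(s)")
# 8 bays  -> 2 ports: a single "-8i" style card covers it
# 12 bays -> 3 ports: e.g. an -8i plus a second card, or a single -16i
```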

You’ll have to do some research on your own for Dell since I know very little about them.

Lastly, unless you’re running VMs, it will probably spend pretty much all of its time at or close to idle. There have been good improvements in performance per watt of modern CPUs compared to previous generations, but there isn’t much of an idle improvement in the Xeon E5 series you’ll find available for those servers. On that basis, you may well be better off saving a chunk of money and getting an older unit, especially if the selection of LFF chassis is a better fit for what you’re after. For the same reason, DDR3 vs DDR4 RAM probably shouldn’t be a significant driver either. Both are cheap and plentiful, and volume will be a bigger driver for a NAS than bandwidth.

If you want to boot from an SSD, you will probably also need a 3.5" to 2.5" adapter for the LFF caddies if the chassis doesn’t have another mounting/power option for them. You won’t find SATA power connectors in a server for you to add one loose inside the case either.

1 Like

Thanks for the responses @ChrisRJ and @bonox.

I would try to avoid a machine with 2 CPUs.

Except for entry level 1U units, most of the stuff I’ve seen is dual socket. I’ll have to check motherboard compatibility to see if any can be run with one removed and the socket blanked. I won’t be running up against the memory limits so that’s not a concern.

What max. power consumption do you consider acceptable?

I think I just need an idea of what’s achievable. I’m not going to be spending much time running at max, so the concern is avoiding a large idle consumption. If my maths is any good, 132W is a dollar (AUD) a day. Hoping it can be run for under that.
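
For what it’s worth, the maths behind that figure (a rough sketch; the tariff is just what the $1/day works back to, roughly 31–32 c/kWh, so plug in your own rate):

```python
# Back-of-envelope running cost at a constant draw. The tariff below is only
# what the $1/day figure implies (~31.5 c/kWh) - substitute your own rate.
TARIFF_AUD_PER_KWH = 0.315

def daily_cost_aud(watts: float) -> float:
    kwh_per_day = watts * 24 / 1000
    return kwh_per_day * TARIFF_AUD_PER_KWH

for w in (80, 100, 132, 200):
    print(f"{w:>3} W -> ${daily_cost_aud(w):.2f}/day, ~${daily_cost_aud(w) * 365:.0f}/year")
# 132 W comes out at almost exactly $1.00/day at that rate.
```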

I’ve used a couple of IT resellers on ebay out of WA and had great experiences with them.

Would you please PM those to me? I’m not sure on the rules around posting vendors.

As for the box specification, what are you trying to do? If you want a basic NAS and nothing else, 16GB RAM (they’ll all come with ECC) and the slowest, lowest core count CPU will net you the lowest energy consumption. If you want a high speed network (10Gb plus) you might get a benefit for SMB with a higher clock speed CPU, but probably not if using other protocols.

If you want to run virtual machines, then the CPU and RAM should be allocated accordingly.

This will be for storage and maybe some containers like Jellyfin but no VMs. My needs would probably be served by a storage appliance but where’s the fun in that? Gigabit will be plenty fast in practice. 10GbE would just be handy for full machine copies which are pretty infrequent.

Aside from that, ordering one with 2 power supplies is probably a good measure (they’re pretty cheap) and decide if you want to run them all the time for redundancy or just put one on the shelf as a backup.

Most of the machines I’ve seen come with two included. My home power isn’t redundant so I’d just be running one in practice.

Lastly, if you want video cards, you’ll need to investigate what will physically fit in the chassis you want and whether special riser cards/cables are required to power it.

I won’t be adding GPUs to this. If I need video to bail me out I’ve got a single slot GT1030 I can throw in (if I get lucky and have the riser included).

PS - since you’ve not bought server hardware before, are you prepared for the noise of a 2U server?

I’ve fired up a hairdryer tower before. I was under the impression that a 2U would be audible but not loud. There seems to be plenty of people running them in open racks in the same room. I was thinking I’d be able to abate some of the noise with the cabinet too.

There have been good improvements in performance per watt of modern CPUs compared to previous generations, but there isn’t much of an idle improvement in the Xeon E5 series you’ll find available for those servers.

The options would be between Xeon V4 and Scalable Gen 1. I haven’t found concrete info in terms of idle differences between them.

PPS - replacing 8 bay with 12 bay LFF can be done but is generally a really expensive headache.
… if you’re on the fence about wanting 12 drives instead of 8

Yeah, I certainly don’t want to mess with reconfiguring the chassis. I think 6 drives is enough for now; I just need to do some maths around capacity and expected growth. I need to add up all the drives my media and data are partitioned across. It can’t physically be more than 38TB, at least.

In reality I can probably get away with a 2 drive expansion capacity. It probably makes more sense to grow into a new system when I hit the space limit rather than having the extra bays.
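
Something like this is the sort of rough maths I mean (optimistic figures that ignore ZFS overhead and TB vs TiB; the drive sizes are just placeholders):

```python
# Rough usable capacity for a single RAIDZ2 vdev: (width - 2 parity drives) x size.
# Ignores ZFS metadata/padding overhead and TB vs TiB, so treat these as
# optimistic ceilings; the drive sizes are just placeholders.
def raidz2_usable_tb(width: int, drive_tb: float) -> float:
    return (width - 2) * drive_tb

for drive_tb in (8, 10, 12, 14):
    print(f"6-wide Z2 of {drive_tb} TB drives -> ~{raidz2_usable_tb(6, drive_tb)} TB raw usable")
# Compare against the ~38 TB upper bound plus expected growth, and leave headroom -
# ZFS pools shouldn't be run close to full.
```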

If you want to boot from an SSD, you will probably also need a 3.5" to 2.5" adapter for the LFF caddies if the chassis doesn’t have another mounting/power option for them.

Yep, I’ll make sure there’s capacity for boot drives. I’ve just been thinking in terms of the front-of-chassis bay count for the array itself.

Another question: what is your budget to buy all this?
(For what’s already been proposed, my 2 cents is to avoid 1U servers. They are really loud, and it is a real pain to replace those high-pitch 40mm hairdryers, because 1U-compatible, LGA2011-or-later CPU fans are practically non-existent.
Of course, if you have a 3D printer and experience, you can create anything.)
I would also be really careful with 2U too, since 3U-high GPUs might not fit into them and half-height GPUs are really rare too (at least new).
Actually you don’t need a GPU for running the server, but you need one to install the OS.
Best CPU cooler for 2U is: Noctua NH-L9x65.
My favorite GPUs for a server (just for the install, or to keep around for debugging too) are:

  • nVidia NVS 300 (only with the Y splitter, if you have an old, VGA or DVI compatible monitor)
  • nVidia NVS 295, if you have a modern, DisplayPort monitor

Do you plan to install Plex or Jellyfin into a container any time later?
If yes, AND you want to transcode the media on the server, you MUST have some kind of GPU in your system.

These things have IPMI - you don’t need any external GPU at all.

The ones he’s interested in will come with OEM coolers that will be more than adequate.

You can also remove a CPU if it comes with two and you want to reduce energy consumption. You don’t need a socket cover, but it would be recommended in case you ever want to put the second one back in (they’ll fill up with dust). If you ask the seller, they may have a spare one lying about anyway.

I can’t remember offhand who I bought mine from, but they’ll pop up in an eBay search - they had WA in the name and I think they posted from Kwinana. eBay is surprisingly unhelpful at letting you know where stores are. I’ve bought a lot of ex-server gear from around the country and it’s all been packed pretty well, with good communication (especially at dealing with dead-on-arrival or other issues) and fast service. That’s probably because they’re businesses with commercial customers.

…or you can use your CPU block as a “cover”.

Running a server that comes with 2 redundant PSUs on only a single one can be a very mixed experience.

In some cases it just works, assuming the system load isn’t too big. Hint: HDDs need much(!) more power for spin-up than in regular operation. In my case a 500 W Seasonic PSU was too small for 8 Seagate Exos X16 16TB drives, and upgrading to 750 W solved the issue.
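
A rough way to sanity-check the headroom (the per-drive figures are ballpark assumptions from typical 3.5" datasheets, not measurements, so check the spec sheet for your actual drives):

```python
# Ballpark check of PSU headroom at spin-up. Per-drive figures are rough
# assumptions from typical 3.5" datasheets (around 2 A on 12 V plus ~0.7 A on
# 5 V during spin-up, i.e. ~27 W peak per drive) - check your actual drives,
# and remember transient spikes can exceed the datasheet "typical" numbers.
SPINUP_W_PER_DRIVE = 12 * 2.0 + 5 * 0.7   # ~27.5 W peak while spinning up
BASE_SYSTEM_W = 150                        # placeholder for board, CPUs, fans at boot

def spinup_peak_w(drives: int) -> float:
    """Worst case: all drives spin up simultaneously on top of the base load."""
    return BASE_SYSTEM_W + drives * SPINUP_W_PER_DRIVE

for n in (8, 12):
    print(f"{n} drives -> ~{spinup_peak_w(n):.0f} W peak at power-on")
# Staggered spin-up (common on SAS backplanes) flattens this peak considerably.
```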

But there are systems that, while they continue to run with N-1 PSUs, will not start if one doesn’t have power. I also remember one case where a very loud alarm sounded as long as power was not redundant.

As for noise, your mileage may vary as well. The warning about fans running at full speed if a component isn’t recognized should be taken seriously. At full throttle, at least, I always think of a jet engine starting up. It is much louder than normal, and that is saying something.

Good luck!

Kind-of repeating myself, but still :wink:

The noise can be a real challenge. As with most things in life that is solvable, but some effort may be involved. To be clear: I don’t want to give the impression that I am against a 2U rack case.

When I built my current system I went for a Fractal Define R6. It is a very nice case, but the cooling for HDDs sucks. Those are mounted on sleds that surround the drives on both sides and the bottom. So the HDDs’ exposure to airflow is small.

Result: I had to mount two industrial fans (i.e. rackmount-case noise levels) to keep them under 40 °C. A 2U rackmount case would likely not have been louder, but with much better access to replace drives.

A lot of enterprise HW has fan control implemented in the BIOS, but most of the time it is set to always-on for safety.

I agree!
My main server has 4x 80mm fans in it and I still think it is way too loud. One of my new projects is to replace them with Noctua silent fans.
My 1U server is even louder, even with my replacement 40mm fans.
(I bought a fully gutted Supermicro 813 case, heavily modded it and added the functions I needed.)

I chose Supermicro and am very happy. Dell and HP tend to have more proprietary specialties while Supermicro uses more standard parts - at least that is my impression over the last 20 years.

I’m using several pre-owned Supermicro servers both at work and at home. Most are from the X11 series, some are from the older X10 series. The X10 machines are so old they can’t boot from NVMe (but they can use it once booted), but otherwise they’re fine machines. No problems except for the usual broken power supply and faulty disks. Perfect machines for TrueNAS and Proxmox.

I chose Supermicro and am very happy. Dell and HP tend to have more proprietary specialties while Supermicro uses more standard parts - at least that is my impression over the last 20 years.

Yep, it seems like Supermicro is the least restrictive/most compatible option. The only problem is it’s the least available locally. I’ve decided to avoid HPE because they hide firmware behind service contracts and the H2XX HBAs for them are quite old. Plus the fan issue.

There’s a CSE-829U-10 that comes with an X11DPU r1.10, 2 Xeon Gold 5120s and caddies, but no RAM, rails, nor storage, for $500 AUD, which might be a good deal. It comes with a MegaRAID 9361-8i; did you have any luck with yours or did you swap it for something else?

Another option is a ~$1400 R740xd with 2 Xeon Silver 4114s, 128GB RAM, HBA330 and boot SSD. It just needs caddies, disks and rails.

Both seem like reasonable buys.

The MegaRAID is not safe to use with TrueNAS. You need an HBA like the LSI SAS3008 or LSI 9300, etc.

OK, I know I’m biased because of my own experience, but if you are going to run VMs on NFS or iSCSI, you are going to want to run striped mirrors (like RAID10), and it is easy to get 15k RPM 2.5" drives.
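
For a feel of the trade-off, a rough sketch (ignores ZFS overhead; the drive count and size are just example numbers):

```python
# Rough usable-capacity comparison for the same drives laid out as striped
# mirrors (RAID10-like) versus a single RAIDZ2. Ignores ZFS overhead; drive
# count and size are just example numbers.
def mirrors_usable_tb(drives: int, size_tb: float) -> float:
    return (drives // 2) * size_tb      # half the drives hold the second copy

def raidz2_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 2) * size_tb       # two drives' worth of parity

drives, size_tb = 6, 10
print(f"Striped mirrors: ~{mirrors_usable_tb(drives, size_tb)} TB usable")
print(f"RAIDZ2:          ~{raidz2_usable_tb(drives, size_tb)} TB usable")
# Mirrors give up capacity, but random IOPS scale roughly with the number of
# vdevs (3 mirror vdevs here vs 1 RAIDZ2 vdev), which is what VM workloads want.
```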

I purchased an HPE DL380 Gen9 that came with two drive slots in the rear, where I have 2 SSDs for the boot pool (mirrored), which leaves all 24 SFF slots in front for 15k RPM drives, and a P440ar disk controller that I put into HBA (pass-through) mode. (You don’t NEED an LSI card.)

The built-in LOM network card was 2x SFP+.

You can remove one CPU to reduce the power but you have to then pay attention to which PCIe slots you are using.

It also came with a SAS expander card and I added an HP D3600 LFF 12 drive SAS shelf for the slower 3.5" drives (12x6TB).

$135 for the server, $350 for the HP D3600 LFF expansion shelf.
…plus drives of course.

No regrets.

Rails could be an issue, but DDR4 RDIMM should be easy to come by. As for storage, as long as you have the caddies, I’d expect you’ll want to bring your own drives.

I’ve just bought the last two of the Dells and 14 drives. Time to pick up caddies, rails and the cabinet.

This way I can have two identical spec machines with a replicated backup. I’ll run a 6-wide Z2 with 2 cold spares in inventory.

When it all arrives I’ll run badblocks on all the drives, then use one of the machines to get familiar with the system.
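
In case it helps anyone following along, something along these lines is what I have in mind for the burn-in (a sketch only: badblocks -w is destructive, the device names are placeholders, and large drives generally need a bigger block size):

```python
# Sketch of a sequential burn-in loop. WARNING: badblocks -w is a destructive
# write test and erases everything on the drive. The device names below are
# placeholders - verify them first. Large drives usually need a bigger block
# size (-b 4096) or badblocks refuses to run.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # placeholders only

for dev in DRIVES:
    print(f"Starting destructive badblocks pass on {dev}")
    subprocess.run(
        ["badblocks", "-wsv", "-b", "4096", dev],
        check=True,  # abort the loop if badblocks exits with an error
    )
```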

1 Like