Why is it so hard to find the right hardware these days? Advice needed

The hard part is the number of HDDs versus available SATA ports.
To save SATA ports, you can boot from NVMe and put your apps on NVMe.

2 Likes

Crawlspace eh? I’m in a pier and beam house with a crawlspace. That’s a cool idea. How did you set it up?

Mine is under the stairs on a concrete floor, so not a huge issue. At some point I would like to put it in a rack there, but it works for me for the time being.

I was looking at buying a different place this year that had a true crawlspace with a dirt floor (plastic on top). I was going to run some all-thread and Unistrut to make a platform to put my server on, and would then have been a lot more interested in putting a 2-3’ high rack down there.

1 Like

Your use case also brings up something I’ve asked the TrueNAS guys about. Maybe the enterprise just builds extra boxes, but I’m interested in storage tiers: a mix of pools under one roof, like slow spinning metal and NVMe in the same box. Long-term storage or less frequently accessed workload data moves to spinning metal off the NVMe pool, keeping it freed up for the things you want higher performance from. There are structural concerns of course, which is why I wanted their take on it. I figure if the ARC is already staging data in RAM to take some strain off the arrays, the same concept could apply to fast/slow pools.

Anyway, I think you’re heading in the right direction. Old RAM won’t have this “AI tax” applied, so older hardware makes sense. Even older Intel Xeons that fit the architecture will be affordable, with a road map to add more performance later. Off-lease servers, retired from a datacenter where they lived under strict environmental conditions, are excellent.

This :backhand_index_pointing_up:

I have found the same. I have built 5 systems like this. I’ll buy good used Dell servers pretty cheap, and put new NAS hard drives in them (so that’s where I’m spending the money). Works like a charm and you get all of the enterprise features - hot swap, ECC memory, reliable processors, dual redundant power supplies, and IPMI management.

Interesting topic and discussion. Cheaper will always be “build your own”, but it will require more thought & consideration versus buying a used preconfigured system where everything has been thought out for you.

Power shouldn’t really be too much of an issue, as most mobos will consume 100-120 watts at idle. Depending on what you are doing, max power consumption can be a concern (mainly GPU and CPU). Most HDDs will run 6-8 watts, SSDs 5-6 watts, and NVMe drives 4-12 watts in active use. Add some overhead for safety: take the total and divide it by 0.8 for your estimated power supply minimum (multiplying by 1.25 is the same as dividing by 0.8). Your max continuous power use should always sit at 80% or less of the power available. Example: 10x HDD ≈ 100 watts, a 300-watt CPU plus motherboard ≈ 400 watts, and a 500-watt GPU, for 1000 watts total. 1000 × 1.25 = 1250 watts of max available power. You need a 1250-watt supply to run the config in the example at continuous full load, and we’re not including cooling solutions.
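As a minimal sketch of that sizing rule in Python (the wattages are the example figures above, not measurements):

```python
# Rule of thumb: continuous draw should sit at or below 80% of PSU capacity,
# so the minimum PSU rating is total draw / 0.8 (equivalently, total * 1.25).
estimated_draw_w = {          # example figures from above
    "10x HDD": 100,
    "CPU + motherboard": 400,
    "GPU": 500,
}

total_w = sum(estimated_draw_w.values())   # 1000 W
psu_minimum_w = total_w / 0.8              # 1250 W
print(f"Total draw ~{total_w} W -> PSU rated for at least {psu_minimum_w:.0f} W")
```

The same rule applied to the lighter build below gives (100 + 150 + 24) × 1.25 ≈ 343 watts.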

With no CPU- or GPU-intensive applications: 10x HDD (~100 W), CPU + motherboard (~150 W), and 10x 120 mm fans (~24 W) gives (100 + 150 + 24) × 1.25 = 342.5 watts recommended.

One of the most overlooked aspects is PCIe lanes. What NICs are you running: 10 Gb, 40 Gb, 100 Gb? How many add-on cards for SATA, NVMe, etc.? Most desktop motherboard and CPU combinations will be very limited in available PCIe lanes. Some desktop motherboards have additional controllers to expand on what the CPU provides, but generally you are looking at one logical x16 link shared with a second physical x16 slot, plus an additional x4 if you are lucky: x8/x8/x4 if you are saturating everything.
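To make the lane math concrete, here is a rough budget sketch in Python (the cards and lane counts are hypothetical; check your board’s block diagram for the real topology):

```python
# Typical desktop board: the CPU's x16 link bifurcates to x8/x8 when the
# second slot is populated, plus maybe an x4 slot hanging off the chipset.
available_slots = [8, 8, 4]      # usable lanes per slot

cards_wanted = {                 # hypothetical add-on cards
    "10Gb NIC": 8,
    "SAS HBA": 8,
    "NVMe adapter": 4,
}

# Greedy fit: biggest cards into the biggest remaining slots.
slots = sorted(available_slots, reverse=True)
for name, lanes in sorted(cards_wanted.items(), key=lambda kv: -kv[1]):
    fit = next((s for s in slots if s >= lanes), None)
    if fit is None:
        print(f"{name} (x{lanes}): no slot left")
    else:
        slots.remove(fit)
        print(f"{name} (x{lanes}): fits in an x{fit} slot")
```

With three cards, that budget is already exhausted; a fourth card would have nowhere to go.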

Modern motherboards do not consume 100 W at idle unless you have something pretty massive compute-wise, or something ancient.

For example, my entire NAS consumes about 106 W at idle, and that is with a 10-year-old Denverton CPU, 8 HDDs, and 6 SSDs. And that’s with no spin-down or similar savings.

Folk who successfully spin down their HDDs, enable deeper C-states, and use more modern NICs, CPUs, etc. can save even more.

1 Like

But how often do you intend your mobo to sit idle as a NAS? Even running one VM will put the mobo at 100 watts. I have multiple Zen 3 systems that will use 100 watts at boot. You can probably achieve lower idle consumption with a low-power chipset, but if you are using any of your PCIe lanes, expect 100 watts. I say 100 watts as a rough estimate; I’ve seen it drop into the 70s at the wall with more aggressive BIOS settings. Boards that use SO-DIMMs can do better, but are you really going to use that mobo for a NAS? Say you achieve 60 watts idle; versus 100 watts, that’s not a significant monthly power saving. I don’t have any X870 mobo, but that would be better put to use as a primary system rather than as a NAS, and last I checked, Zen 4 wasn’t much better than Zen 3 for power consumption. These are just general rough numbers to accommodate various setups. My NAS is actively in use throughout the day with 14x HDDs, 1 NVMe drive, 6x 120 mm fans, a 360 mm (3x 120 mm) AIO cooler, a 24-core AMD EPYC, and 256 GB ECC RAM, with 3 VMs, consuming about 320 watts of continuous power as monitored by a UPS and validated at the wall.

Do not confuse your (Xeon Scalable-class) EPYC system with a Ryzen/Core/Xeon E system, or with a regular NAS for home use…
100 W at boot is insane.

4 Likes

:laughing: I’m not. I have 7x Zen 3 desktop workstations with various B550 and X570 chipsets. All of them boot at around 100 watts measured at the wall. Most of them are 16-core CPUs with 1x SSD and 2 sticks of RAM. Power management kicks in after boot, where S3 and various idle power-saving features can get you into the 70-ish watt range. The Fusion (APU) processors take a bit less, and I have a special case, a 35-watt AMD 3400GE.

To clarify, measuring at the wall with a watt meter includes all power consumption, including PSU losses. A good PSU converts AC to DC at roughly 80% efficiency or better, losing the rest as heat. The 100 watts includes the mobo plus PSU losses, measured at the outlet.
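Put as a one-liner sketch (the 80% figure is the 80 Plus floor, assumed here for illustration; real efficiency varies with load):

```python
wall_w = 100          # measured at the outlet
efficiency = 0.80     # 80 Plus baseline, assumed for illustration
board_w = wall_w * efficiency
print(f"~{board_w:.0f} W actually delivered to the components")  # ~80 W
```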

I have 8 HDDs that don’t use spin-down or any other power-saving measures, 4 NVMe SSDs, 3 SSDs, 3 10 Gbit NICs, a 10 Gbit switch, 3 motherboards, and 2 IPMI, and measure 220 W at the wall.

If your system takes 70 W without any HDDs or IPMI, that is IMHO a lot.

I have one box built in this case. The build quality isn’t that high, and I definitely recommend a fully modular PSU for this one, because cable management in this thing is a pain. The PSU sits in front of the CPU, so definitely check clearances on your cooler. The standard AM4 box cooler does clear a standard ATX PSU, but check the orientation of the PSU fan: it will either be blowing toward or pulling from the CPU cooler, or you can mount the PSU upside down with the fan facing the wall, but then it will completely cover the CPU and its downdraft cooler. It does look pretty on the desk, but the heavy perforation does little to hide the noise.

I don’t recommend the Node 804 case. I have two servers in those, and replacing drives in them is overly complex; you need to keep a map of disk serial numbers somewhere, because the drives are tucked in such that you kind of have to know which rail/disk to pull. The drives also hang top-down, and I recommend 90-degree angled SATA/power connectors to avoid accidental damage: straight connectors will bend heavily, because the rear rail of drives sits just above the PSU and barely clears it with straight connectors. Cooling in the Fractal is superb, though.

I recently fielded another box in the Define 7 XL case, and that thing is pure perfection. It will take 14 HDDs without any heavy modification (you will have to fork out extra for the additional HDD trays). It’s extremely heavy and thick, it stays silent and cools well, and the cable management in it is awesome; it’s really a pleasure working with that thing. You can also buy an older Define R2/R3/R4/R5/R6 case at a flea sale; you really just need the skeleton to start. Those will take 8 disks without issue in two cages (3+5 or 4+4) and have spacious internals.

Next, networking and connectivity. I was told repeatedly on the old forums that Chelsio NICs are well supported, and that was good advice. Everything Chelsio I have bought so far was cheap and works out of the box without hesitation; the T520/T540 cards are excellent. If you’re looking to host 8 disks or more, I don’t see you doing that over 1-gig Ethernet, so plan for that ahead of time. I have T540 cards everywhere; usually one port is used to connect directly to another TrueNAS box for backups. 10-gig backups are where it’s at. For HDDs, I found cheap LSI controllers work well, but keep in mind the numbering can be deceiving: the LSI 9300 might seem a good buy because it connects 16 drives through 4 mini-SAS connectors, so why spend twice as much for a 9305? Because the 9300 is an obsolete, odd design with two chips bridged over PCIe, and it heats up like hell, to the point that active cooling might be necessary, while the 9305 is an entirely new single-chip design that can live just fine with passive cooling.

Don’t cheap out on cables.

Plan ahead. If you plan an 8-disk unit, plan another one for backup. I know it costs a lot, but failure doesn’t happen on a Friday at 6 PM before a 3-day weekend. It will happen at 3 AM. On a Tuesday. The morning before an early meeting and a day packed with appointments. After you’ve learned to rely on your box for data and 24/7 accessibility.

TrueNAS really is a godsend for this.

2 Likes

But you’re now getting into the load efficiency of the PSU, right? The less power your PSU has to supply, the worse it generally does. The mobo itself could be using 50 watts while the PSU burns the other 50 watts. The only way to know would be to put a watt meter between the PSU and the mobo. For a 600-watt PSU, they don’t have to demonstrate 80% efficiency at a 100-watt load for certification, so it could be anything, say 50% efficiency. It’s one of the things to think about when building a system. Generally I don’t care, since the system will usually be under load with 30 threads running and using about 175-200 watts.

Why?

I have several A2SDi- and X10SDV-series systems which, with 4 disks up and spinning plus a couple of SSDs plus one or two VMs (Home Assistant etc., mostly idle but up and running), clock in at 60 W or less.

3 Likes

Saving power where you can is great, and ITX and mATX are great ways to do it. I’m just saying a standard-sized ATX board with its PSU will take up about 100 watts idle, and if that figure overshoots, no foul. If he’s running 12 HDDs, he’s going to need an expansion card, which requires additional PCIe lanes; but if the board has 4 SATA and 2 SAS ports, or any combination that supports 12+ HDDs, great. Generally a NAS on its own doesn’t require a lot of CPU power, so it comes down to use case.

I can attach 20 SATA drives out of the box to my motherboard; a built-in LSI 2116 makes it possible. Ditto 10GbE, with a built-in 520 equivalent.

More than anything, it’s a question of what motherboard and what use case. Some use cases will require 100 W of continuous power, some won’t. But I wouldn’t generalize from your use case to every use case out there.

Some folk here run modern Ryzen rigs with multiple HDDs and 60 W power draws. Some folk here run titanium-rated PSUs where even a 10% load runs at 90% efficiency and a 25%+ load runs closer to 94%.

2 Likes

Some of those Supermicro platforms feature up to 12 SATA ports. I do not have a single system with an extra HBA.

Our larger rackmount ones are Supermicro storage systems with LSI HBA on board and backplanes for up to 24 drives.

But for home use - 12x SATA plus 1 M.2 plus a PCIe slot - what else do you need?

2 Likes

I don’t see why you’d need a backplane. The SFF-8087 cables are rather slim and easy to work with, in my experience. I’m using them myself in a Fractal Design Define 7 XL chassis.

As someone who did go 1U rackmount at one point and migrated away to the Fractal Design Define 7 XL, I would caution against it. It usually requires proprietary-format motherboards, and space is cramped enough to necessitate tiny 20-30 mm, 20k RPM blowers that scream like jet engines at a really high pitch that is super annoying to hear.

Well, obviously I would recommend the Fractal Design Define 7 XL that I currently use. It has plenty of space for expansion and large-diameter fans for good airflow without needing screamer fans, and it comes with noise dampening. It is, however, expensive and heavy as hell (30 lbs empty), but it’s not like I’m lugging the server around to LAN parties, and I would totally do it again in a heartbeat if I were building a server today.

For reference, if you care: I’m running a Supermicro X11SPI-TF with a Xeon Silver 4210T and 224 GB RAM inside that Fractal Design Define 7 XL, plus an HP H220 HBA (an LSI 9207-8i). Technically I don’t really need the HBA, because the Supermicro board has plenty of SATA ports, but I’m virtualizing TrueNAS, so I wanted an HBA to pass through.

1 Like

Hmm… I think the cable management is horrible, at least compared to something like a server backplane, where you simply connect a few power and SFF connectors. Not that I care much, but this is a downside of the Fractal versus real server cases, IMHO. I needed a longer SATA power cable just to reach the top drives (no big deal, I needed one anyway to get to 10 drives).

Sure. But does it matter much?

No. That would be 50% efficiency, which is not realistic, at least not for a half-decent PSU.
Take this midrange PSU as an example: the be quiet! Pure Power 13 M 550 W.

55W load = 90% efficiency
11W load = 70% efficiency

Sorry, but that is just plain wrong and would be insane if true.
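A quick sanity check in Python using those two data points (wall draw = DC load / efficiency):

```python
# Efficiency figures from the Pure Power 13 M example above.
def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """Watts pulled from the outlet for a given DC load and efficiency."""
    return dc_load_w / efficiency

for dc_load, eff in [(55, 0.90), (11, 0.70)]:
    wall = wall_draw(dc_load, eff)
    print(f"{dc_load} W DC @ {eff:.0%} -> {wall:.1f} W at the wall "
          f"({wall - dc_load:.1f} W lost in the PSU)")
```

That prints losses of about 6 W at the 55 W point and under 5 W at the 11 W point; even at the unflattering low-load efficiency, the PSU burns a few watts, not 50.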

2 Likes