Anyone have a "micro" TN server?

I am pondering the idea of building the smallest, most portable, yet fully capable, TN system. I was curious if any of you are running a system that might match or be close to something like this. I love my big tower, but honestly, the factors that pushed me to the large tower were:

  • ECC support on a TN compatible MB
  • HBA support on a TN compatible MB
  • 4 HDD bays
  • Cooling of HDD
  • Good Power Supply

I haven’t done any research yet, so this is my first cast of the line, but… is there a way to get the bulleted features into a smaller system?

HPE Microservers. I mean, except for the HBA support, but with only four bays, why do you care? The drives run a little warm, but not terribly so.

Or get the 45Drives HL8 chassis (or HL4 if you really only want four bays):
https://store.45homelab.com/configure/hl8

And this mobo:
https://www.asrockrack.com/general/productdetail.asp?Model=X570D4I-2T

Further discussion of that chassis and other mobo options here:


A friend of mine has had one of these Microservers for several years now and they’re quite smitten with it. They’re a power user of discerning tastes; other than their preference for btrfs over ZFS, I trust their opinions on most things. The Microserver also appears to have some half-height PCIe slots, so you could kit it out with high-speed SFP+ networking or maybe even a low-power GPU.

There’s also of course the iXsystems Mini line which hits most of your points as well. Similar price point to the HPE, and guaranteed to be fully supported.

The HPE has a lot more CPU grunt, and a PCIe slot, but you don’t have 1st party support for TN. :person_shrugging:

Bigger though. By quite a bit. But one more 3.5" bay, and 2x 2.5" bays, are a definite benefit.

More, yes. A lot more? It’s still pretty limited, and of course you’ll pay for that in watts. OTOH, the Denverton Atoms are almost 8 years old, while a current Microserver uses current-generation parts.

The Mini X+ has this; apparently (and surprisingly) the Mini X doesn’t.

Yep, only one way to get that.


Yes, like 4x the processing power.
Passmark Xeon E E-2414: 12000
Passmark Atom C3558: 2400

Now, does that matter? Depends on the use case, but it’s worth noting.

5x actually, assuming Passmark’s validity. Definitely more than I’d thought. The C3758 might be a better comparison at 4614, but that Xeon still handily wins, though at more than double the TDP. At 55W vs. 25W, there’s also a question of how much that matters. I’m running a two-socket Xeon Scalable system, so TDP clearly isn’t that important to me…
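For what it’s worth, the arithmetic checks out quickly. A rough sketch using only the Passmark scores and TDP figures quoted in this thread (ballpark numbers, not freshly re-benchmarked):

```python
# Scores and TDPs as quoted above in the thread; treat them as ballpark figures.
xeon_e2414 = {"passmark": 12000, "tdp_w": 55}
atom_c3758 = {"passmark": 4614, "tdp_w": 25}
atom_c3558_passmark = 2400

# Raw throughput ratio vs. the C3558
print(xeon_e2414["passmark"] / atom_c3558_passmark)   # 5.0

# Points per watt: the Xeon still comes out ahead despite double the TDP
print(xeon_e2414["passmark"] / xeon_e2414["tdp_w"])   # ~218
print(atom_c3758["passmark"] / atom_c3758["tdp_w"])   # ~185
```

So even on a per-watt basis the newer Xeon edges out the Denverton part, by these numbers.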


This little box meets all your criteria. I thought the MB might not support ECC, but it does! They make mITX server boards which support it, but the problem I found is getting a CPU cooler to fit into this chassis. The company that makes it also sells an 8-bay version. These are NORCO knock-offs; the HDD trays are interchangeable, which is why I use it, since my main chassis are NORCO.






This box contains a Supermicro A2SDi-H-TF, 64 GB of RAM, 4×8 TB hard disks, dual 10G RJ45 Ethernet plus a single SFP+ Ethernet, an M.2 NVMe boot SSD for the system, and dedicated IPMI remote management. This should tick all your boxes.
https://www.supermicro.com/en/products/motherboard/a2sdi-h-tf
The mainboard has 12 SATA ports, so no need for an HBA, although you could use the PCIe slot for one.


Where is this port? I looked at the specs, and it did not list an SFP+ port. If in fact it does, I think you might have sold me on grabbing one of these.

Or are you talking about the A2SDi-H-TP4F?

That’s a separate card in the PCIe x4 slot.

Nice board, but ouch on the price tag. My ASRock ROMED8-2T was less than that. The TP4F is even crazier on cost.

I guess I have enough paranoia that 4 disks, regardless of ZFS or mirroring, just triggers my anxiety - but we all panic at different thresholds. A long lead-in to the case I’ve been using for more than a decade now: a Node 304. It does need an ITX motherboard, but it will take six 3.5" disks, so I can do RAIDZ2, giving me a bit more comfort when I do lose a disk, which has only happened a couple of times.


Lots of interesting solutions here. I have a Node 304 that houses my backup NAS. Since my crucial storage pool is about 15 TB, I was thinking of something very small with a triple mirror of 3×20 TB 3.5" drives. Good redundancy, quick resilver, and if I needed more space, just duplicate and/or use larger drives. Finding a suitable case for that footprint, and the hardware to push TrueNAS, is what I’m hoping to find. I’ve checked into many of the smaller cases like the Jonsbo N1, but I haven’t found the ONE yet.
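For comparison, the usable-space and fault-tolerance trade-off between that triple mirror and a 6-wide RAIDZ2 works out roughly like this (a sketch that ignores ZFS overhead and TB/TiB conversion; the 20 TB disk size is just the one mentioned above):

```python
def mirror(n_disks, size_tb):
    """An n-way mirror stores one disk's worth of data and survives n-1 failures."""
    return {"usable_tb": size_tb, "failures_tolerated": n_disks - 1}

def raidz2(n_disks, size_tb):
    """RAIDZ2 spends two disks' worth of space on parity and survives any two failures."""
    return {"usable_tb": (n_disks - 2) * size_tb, "failures_tolerated": 2}

print(mirror(3, 20))   # {'usable_tb': 20, 'failures_tolerated': 2}
print(raidz2(6, 20))   # {'usable_tb': 80, 'failures_tolerated': 2}
```

Same two-disk fault tolerance either way; the mirror trades capacity for fewer drives and faster resilvers.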


I’m using an HPE Microserver Gen10+ with the stock Pentium Gold and 24 GB of ECC RAM (which I’ll upgrade to 64 at some point). No issues other than bugs in the TrueNAS distro itself. For example, I can’t use Docker from the GUI anymore, but I can at least use Portainer for that. I used it with 3×6 TB drives, but decided to upgrade to 4×22 TB WD Ultrastar drives and call it a day.
This server is pretty small for its capabilities and has 4× 1-gig Ethernet interfaces plus 1× PCIe if you need more speed.
The minuses are that

  • You cannot use the Pentium’s built-in hardware encoder to transcode video (which you can mitigate by throwing an Nvidia P400 in the PCIe slot and using that).
  • Only 4 SATA slots. The workaround for the OS is to run it from a USB drive (SSD, not flash). You get the warning, but it works perfectly.

You can check out what the box supports in detail here: HPE ProLiant MicroServer Gen10 Plus Ultimate Customization Guide - Page 2 of 4

I hope this helped.

edit: regarding the budget - got the server itself second-hand for $500, and the four drives, which are new, for $1700

The Gen 11 has finally added an M.2 slot for this purpose (better late than never, HPE). But at least the Gen 10+ (and the Gen 8, and presumably other earlier versions) has a USB port on the motherboard for this purpose, so the boot device can be inside the system rather than sticking out the back.


I have

System boots from the mirrored DOMs, storage pool is on the two WD red SSDs (identical products but for the form factor).

That’s about as micro as it gets while delivering a 100% “server grade” system.

Not cheap. :grimacing:


I looked into this and it seems to only support 32GB RAM.

The maximum RAM capacity for the HPE ProLiant MicroServer Gen10 Plus is 32 GB. It supports two DIMM slots and can use a combination of two 16 GB modules. While some users have reported successfully running 64 GB of RAM by installing two 32 GB modules, HPE documentation officially supports up to 32 GB of memory.

I plan to go with 64 at a minimum for VMs, but this is a nice footprint.

If you can find a way to get an external power supply to work instead of having “your normal power supply” - you can shrink the case even smaller.

FOMO on the Node 304 love

@dan’s X570D4I-2T looks like a suitable modern replacement.

This is a SilverStone CS351 case. It houses a micro-ATX board. I love this case because it is very compact, but you have just enough room for everything. Take a look at the manufacturer’s website.

  • 5x 3.5" HDD bays with status LEDs
  • There are 2 more internal 3.5" HDD slots
  • And there is a rail system where you can mount 8x 2.5" disks/SSDs
  • HDD cooling is good, because the HDD cage has a fan directly on it

By default the power supply is right above the motherboard, so you are limited to low-profile coolers. You can install an SFX power supply (I did) to make more room for the cooler.
Ultimately I decided to do a case mod: I fitted my SFX PSU under the HDD cage, mounted a 140mm fan on the back, and installed a Noctua tower cooler.

Here is the power supply (bottom-left in the photo) mounted into the internal HDD frame. It gets constant airflow from the side panel, and it is an efficient Seasonic 300W unit which has a fan-stop function as well, so it stays cool. Please do this only if you know what you’re doing, as this is high voltage and can kill. You see that I kept the insulator on it (the transparent plastic sheet), but obviously the original case of the PSU is removed. For me personally, it is no big deal, as I am comfortable doing such mods.

The case is now dense enough, and I am super happy that I could re-use my Noctua tower for this build :slight_smile:

In case you’re interested, these are the components:

  • Silverstone CS351 case
  • Seasonic SSP-300SFB PSU (SFX, comes with an ATX adapter frame)
  • ASUS Prime B550M-A CSM motherboard, M-ATX form factor
  • AMD Ryzen Pro 5750G processor
  • 2x 32GB Kingston Server Premier DDR4 3200 MHz ECC RAM (unbuffered)
  • 2x Toshiba MG10 series 20TB disks for data (mirror)
  • 1x Samsung 250GB SSD as the boot drive
  • 2x Samsung 980 Pro 1TB NVMe SSDs as fast storage (mirror)
  • ASM1166 SATA controller M.2 card, 6 ports on a PCIe 3.0 x2 link
  • Noctua NH-U9S CPU cooler
  • Akasa 14cm fan on the rear (I just found it in my basement)
  • Noctua 8cm fan on the HDD cage (the original Silverstone fan is loud, and I mean LOUD → better to replace it)
  • EZDIY-FAB Quad M.2 PCIe x16 expansion card; the mobo supports bifurcation, so the NVMe SSDs are connected to the main PCIe x16 slot, which operates in x4/x4/x4/x4 mode
    • Intel X520 10G SFP+ NIC with M.2 adapter, connected to an M.2 slot using an x4 link

The NVMe SSDs for the VMs and fast data are connected to the PCIe x16 slot, because I need the M.2 sockets on the motherboard for the SATA controller and the 10G NIC (those are fully fine in x4).

The ASM1166 SATA controller is power efficient and does the job. I was thinking about an HBA card, but everybody said those prevent the system from going into lower power states, so I decided on this one:

To improve airflow over the HDD cage, I added an empty fan frame between the Noctua and the cage; it helped a lot:


In its current shape and form, this server sips ~36W from the wall.

If I remove the 10G NIC and use the onboard 1G connection, the power consumption drops to ~33W. Basically this is the absolute minimum I could achieve with 2 hard disks, 2 NVMe SSDs, 1 SATA SSD, and the fans connected.
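If anyone wants to put those wall numbers into yearly terms, a quick back-of-the-envelope sketch (the electricity price is purely illustrative, not from this thread; plug in your own rate):

```python
idle_w = 36                     # measured at the wall, per the post above
hours_per_year = 24 * 365
kwh_per_year = idle_w * hours_per_year / 1000
print(kwh_per_year)             # 315.36 kWh/year

price_per_kwh = 0.30            # illustrative rate; adjust for your tariff
print(round(kwh_per_year * price_per_kwh, 2))   # 94.61
```

So the 3W saved by dropping the 10G NIC is worth roughly 26 kWh a year, which is why chasing the last few idle watts is mostly a hobby in itself.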

This is how it looked on the “testbed”; I benchmarked and stress-tested everything before building the components into the case.
