Also, I measured the space where I wanted to put the N2 and it doesn’t fit anyway… so now I’m also looking at the N3 and similar-sized cases, which would allow for growing from 5 to 6 to 7 to 8 drives if I start going hog-wild on media content.
As I understand it, the Pro versions are functionally similar to the non-Pro versions but branded differently. Presumably there is also a price premium attached.
The Pro version suggests that you will get OEM support should something go amiss with ECC; the non-Pro will not. However, how real that support is in practice is really vendor dependent. That German YouTuber Wolfgang, who runs all sorts of interesting industrial ASRock boards, gets custom BIOSes sent to him; regular folk are told it cannot be done.
Well, I would like to try and take you back to the X10SDV series that @etorix suggested. I have been running a small TrueNAS Core server on an X10SDV-4C+-TLN4F for several years with 32GB ECC memory and 4 x 4TB HDDs. Plex runs in a jail and has never baulked at transcoding any of the content, which includes ripped full-size Blu-ray titles. Where Plex drops the transcoding rate, it is always the network that is not up to scratch; the 4 cores / 8 threads of the D-1518 are really just ticking over at walking pace. If you want a bit more ability, there are 8C/16T boards, or even up to 16C/32T.
Other advantages are:
- Supermicro provide an IPMI, which allows you to investigate the server’s features and make corrective changes if something seems off.
- On later-revision firmware boards, drive capacity can be grown beyond the 6 SATA ports by bifurcating the PCIe slot into 4 x 4 lanes; with a suitable cheap 4 x NVMe card in that slot, plus the NVMe socket on the board itself, you can add another five drives at some stage - pretty much future-proof at a total of eleven drives.
- Two 10GbE ports come as standard if you get the right board (-TLN4F suffix).
- With the right case, the board supports hot-swap for when your drives start to fail.
Building a server can be fun, but it’s not like putting together a PC. I read these forums a lot, and every time I see issues with hand-built systems, I thank goodness I held out for a professional-grade SoC board.
I would critically revisit what you really want the server to do and the power savings required, and try to forget about the X10SDV ‘aging out’. Some would say it was ahead of its time; there’s lots of experience with it on the forums, which I have benefitted from personally, though I bet that most of the time with one of these boards you won’t need much advice. If you keep a lookout, they are really not that expensive second-hand for what you get. I recently picked up an X10SDV-TLN4F (8C/16T) rev 2 board for 200 GBP.
I have the X10SDV-7TP4F; it is a great board for folk who want a SATA-based file server that also has some expansion capacity and 10GbE SFP+ built in.
With the benefit of hindsight, the X10SDV-2C-7TP4F would have been a better fit for my use case, as my aspirations of running ZoneMinder, Blue Iris (in a VM), or apps like Frigate on the NAS were shattered by the reality of what the 1.7GHz cores in the D-1537 can actually deliver and/or the realities of getting Apps going.
The -2C- variant likely remains the best board for folk who want a file-server-oriented rig that needs up to 20 SATA connections. Unlike my model, its two cores tick quicker (which improves SMB performance), and having only two cores has zero impact in my use case because there aren’t that many users accessing the NAS at once.
FWIW, per the dashboard I do manage to max out single cores during large file transfers, suggesting the NAS CPU is now the gating factor. The extra 500MHz that the D-1508 can deliver in the -2C- model could thus help.
My main reason to look for alternatives to the -2C- is simple curiosity. The Intel offerings are getting long in the tooth, and given how much power the data/network gear in my home consumes, it is also prudent to look for alternatives that offer lower power consumption.
The Intel 700 series NICs consume 5W less than the 500 series on the X10 board. Similarly, the LSI 2116 HBA on the motherboard glows in an IR picture. Bare motherboards with lots of PCIe slots thus have their place by allowing users to swap in better NICs and HBAs over time.
The AM5 series motherboards are pretty tasty on account of enough slots being available to host an HBA, a NIC, and a few NVMe devices to put together a performant NAS that can host a low-power Ryzen like the Pro 8300GE. But having observed how “well” ASRock Rack treated customers hit by the AVR54 bug in their ubiquitous C2750D4I boards, I am reluctant to buy from them again.
Not gonna lie, the number of variations of Supermicro boards is also a source of analysis paralysis, and one of the reasons I still haven’t pulled the trigger after YEARS of thinking about this. That, and cyberjock, who drove me away from participating in the old forums.
Would anyone here argue that one of the original (Topton or CWWK) N150 boards with TrueNAS SCALE would be a step backward from my i7-860 gamer rig running Win7 with “hardware” (south bridge so really software) RAID with NTFS?
(From the “better is the enemy of good enough” department)
Good summary.
But Xeon D-1500, and possibly Atom C3000, boards can be found second-hand with some luck and/or patience, lowering the cost. Using RDIMM rather than ECC UDIMM is a secondary cost benefit.
Xeon D actually easily allows transcoding acceleration… with a low-profile Arc A310 in the PCIe slot.
Since you’re considering AM4 APUs, you may add:
- Xeon E with iGPU:
  - E-2100G/E-2200G, or Core i3-8100/9100, with a C246 motherboard
  - E-2300G with a C256 motherboard
  - High up-front cost (and motherboard availability woes, especially in mini-ITX), but better QuickSync transcoding than AMD.
No arguing against that. But it has no successor as a platform for low power NAS, other than Atom C3000.
For professional use there’s not even a contest: Atom C3000 comes as an embedded solution with long-term support.
For home use, it comes down to use case:
- Pure SATA storage => Atom C3000
- Also want some PCIe lanes for a secondary NVMe pool => Ryzen / Xeon E / Xeon D
No official ECC validation. Xeon D/E/Scalable, Atom C3000, EPYC (including 4000, in AM5 socket) are officially validated for ECC operation; Ryzen CPU/APUs are not.
From AMD the situation is (almost) simple: most Ryzen (desktop) CPUs unofficially do ECC, Pro or non-Pro; Ryzen (mobile) APUs only unofficially support ECC in the Pro version, and the non-Pro parts do not support ECC at all. (So there must be a small difference in the monolithic die between Pro and non-Pro.)
What happens at motherboard level depends on the manufacturer… with a potential range from “ECC UDIMM are allowed but ECC is not used” to “ECC works, only without certification”.
Pro APUs are initially an OEM thing for business-focussed mini-PCs; with time, the parts end up in retail channels. You may now find Ryzen Pro 5x50G, or the Ryzen Pro 5x55G refresh (no apparent change), at retail, but eBay is probably the best source overall.
I would not be obsessed with GE parts: GE has lower TDP but idle power, the metric which matters most, should be the same between G and GE; 65 W for G parts is not that much anyway… and there’s possibly a BIOS option to downclock to 45 W TDP. The premium for “best bin” GE seems very steep.
Look for a case first, paying attention to the cooling model for drives. Embedded boards are great for mini-ITX; if you go for a micro-ATX case you will then have many more options with socketed boards.
There are usually, but not always, some numbers in the text about idle power and full load, in a configuration which is not fully described and certainly not NAS-like, but which provides some guidance. This is where I got that a Xeon D-2100 idles at higher power than a D-1500 uses at full load…
I think it is even 800-series, as the family goes up to 100GbE; not sure that it makes a further difference over 700-series, or that SFP28 uses much more power than SFP+.
I’d personally already take ASRock Rack over Asus, such as the Pro WS W680-ACE that seems weirdly popular for building a NAS, and has thus brought here its share of support issues and flaky BMC-on-a-card. But here, with 6 of these 14 SATA ports coming out of three ASM1061 chips (ASM10xx? Did someone at ASRock Rack find a forgotten cask of these oldies in a dusty warehouse?), I’m not so sure.
So it boils down to W680 (8 SATA, DMI x8 uplink) + Alder Lake vs. B650 (4 SATA, DMI x4) + Ryzen Pro APU (or EPYC4000). (Better Intel chipset vs. better AMD CPU?)
I have the Asus w680 pro, and I fully agree with you. It is fun as a hypervisor to run stupid things on, but I wouldn’t use it for my storage needs (and I don’t).
Things I don’t like about it:
- expensive ECC DIMMs
- support for 13th & 14th gen Intel (yes, this is a negative; the instability due to manufacturing defects on those generations is insane if you’re not lucky, and it isn’t immediately obvious)
- I personally had issues with several BIOS versions causing boot failure
- the BMC card is stupid to wire manually (at this point it’d be cheaper, more effective, and not take up any PCIe lanes to use a Raspberry Pi for the same effect)
- I’d argue ASRock support beats out Asus’ support any day of the week
- etc.
I think I was attracted to it because I happened to have a spare LGA1700 CPU - I should have just sold that and bought used enterprise gear. I feel this is a common pitfall for new builds: some kind of strong pull to use semi-consumer-grade equipment.
…is otherwise fine
To be honest, Asus is not responsible for the price of ECC UDIMM, and the Raptor Lake woes are fully on Intel—12th gen is fine, and I understand that 13th/14th gen non-K CPUs are fine because they do not ramp up high enough to be affected.
Thanks for sharing your experience!
It’s the many issues with Intel in recent years that make me shy away from using them. I really did look through board after board with LGA1700, for example, searching for something that could combine low power, a high SATA port count, 10GbE SFP+, and ideally OCuLink that can run NVMe or SATA.
For now, the X10SDV-2C-7TP4F is the most likely board I’d go for, simply because it would be slightly more performant than my present rig and require minimal changes - i.e. re-use the present RAM, heatsink + blowers, SATA ports, etc. Overall, it is by far the least expensive option at around $550.
Were I to upgrade, I’d likely consider the H13 motherboard from Supermicro, simply because with the addition of an Intel X710-DA2 10GbE SFP+ card and a decent LSI internal HBA I get something close to my present rig - with the important benefit of a low-power but super-performant CPU in the form of the Ryzen Pro 8300GE. I hear the 94xx series of HBAs have better low-power performance, but the headaches associated with tri-mode do not seem worth it?
Anyhow, there are many choices out there, but low-power boards with a SATA file-server orientation are becoming an endangered species if you want something new. Folk who need a lot of SATA ports are likely best off buying bare motherboards with plenty of x8 PCIe slots and using HBAs. Pity that no reputable manufacturer has yet released a PCIe 4.0 x4 HBA?
FWIW, I landed on
- Case: Jonsbo N3
- PSU: Silverstone 300 SFX
- CPU and mobo: N150 on CWWK M8
- RAM: Crucial 48GiB DDR5 non-ECC
- Boot/Apps: 2x Patriot 1TB NVMe
- Data: 4x WD Red Pro 20TB (on sale recently)
Core parts were built last week and have been running great after 72 hours of memtest86+. The first HDD has arrived and is partway through badblocks; the remaining three HDDs arrive this weekend.
I did end up virtualizing TrueNAS on top of Proxmox, with PCIe passthrough for the two SATA controllers. So far I am very happy with the setup.
I do need to decide if I’m going to wait over a week for a full pass of the four badblocks patterns, or maybe go a different route like the cryptsetup recommendation from the Arch wiki.
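For anyone wondering where “over a week” comes from, here’s a rough back-of-envelope sketch; the ~200 MB/s figure is my assumed full-surface average for a 20TB drive, not a measured number:

```python
# Rough duration estimate for a destructive badblocks run (-w) on a 20 TB drive.
# Assumption (not measured): ~200 MB/s average sequential throughput across
# the whole surface, since inner tracks are much slower than outer ones.
DRIVE_BYTES = 20e12        # 20 TB as marketed (decimal)
AVG_BYTES_PER_S = 200e6    # assumed average throughput
PATTERNS = 4               # badblocks -w defaults: 0xaa, 0x55, 0xff, 0x00
PASSES = PATTERNS * 2      # each pattern is written, then read back

days = DRIVE_BYTES / AVG_BYTES_PER_S * PASSES / 86400
print(f"~{days:.1f} days")  # roughly 9 days per drive
```

Eight full-surface passes is what makes it so slow; if I remember the Arch wiki method right, the cryptsetup route is one write pass plus one read-back, so roughly a quarter of that time.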
In the meantime I’ve exposed a zvol from Proxmox to TrueNAS for a “play area” flash pool, to practice workflows, permissions, shares, etc. while I wait for the spinners to be ready.
edit: Actually, there is one thing; I found out about PWL (Preventive Wear Leveling), and… yeah, I’m going to have to figure out a workaround of some kind. It isn’t bothering me right now because badblocks is keeping the drive busy, but I fully expect a thunk every 5 s (esp. with four drives not perfectly lining up) is going to drive me batty.
Also, I need to dig into the ASM1164 SATA controllers, my cables, and my backplane; currently two of my drives came up at 6.0 Gbps, but two came up at 1.5 Gbps, severely slowing my burn-in testing (wish I’d caught it before I got this far along; faster to let it finish now).
I know at least one of the drives, in the slot it’s in now, was working at 3.0 Gbps or higher (I didn’t check at the time) and is now at 1.5.
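In case it’s useful, a quick way to see what each port actually negotiated, without trawling dmesg, is the libata sysfs entries; just a small sketch (Linux-only, run on whichever host owns the controllers):

```python
# Print the negotiated SATA link speed for every ATA link libata knows about.
# Values come straight from /sys/class/ata_link/<link>/sata_spd and look like
# "6.0 Gbps", "3.0 Gbps", "1.5 Gbps", or "<unknown>" for empty ports.
from pathlib import Path

for link in sorted(Path("/sys/class/ata_link").glob("link*")):
    spd = link / "sata_spd"
    if spd.exists():
        print(f"{link.name}: {spd.read_text().strip()}")
```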
More to come!
I recently looked into this. IIRC, the 700 series consumes about half the power of an SFP28 part. The Fujitsu spec sheet shows up to 5W for the X710-DA SFP+ card, while the E810-based SFP28 solution is allegedly between 12-14W, and that’s true even when running 10GbE.
Curiously, there seems to be little difference between the 25GbE versions of the X710 and the E810 in terms of power consumption; the big hardware delta seems to be limited to the PCIe interface. I’ll have to find those links for you.
The dual-port SFP+ 10GbE version of the X710-DA uses a PCIe 3.0 x8 interface, while the E810 uses PCIe 4.0 x8. It’s curious how the RJ45 version of the X550 can run on PCIe 3.0 x4, however. I’d embed links, but this version of the site and Safari on iOS keep not playing nice with each other.
Maybe I’m being a bit dense, but what is the benefit of running PCIe 3.0 x8 for an SFP+ interface that can be fully saturated with just two PCIe 3.0 lanes? Is it just a better mechanical seat? Or perhaps power, in case the NIC is fitted with RJ45 copper transceivers?
Wouldn’t it be possible to run a 10GbE NIC at full speed on just one PCIe 4.0 x1 slot? Why do they then ship with 8-lane interfaces?
There are some cards, like the Mellanox ConnectX-3 with PCIe 3.0 x4, which are totally fine with that bandwidth. However, I’ve seen many cards with a single port but a footprint for a second one: the manufacturer uses only one PCB for both versions, saving costs.
One of the reasons for having this many lanes is that PCIe has a nominal bit rate, but you need more than the raw payload rate to access registers, read buffers, etc. There are command codes, memory addresses, and much more that need to be transferred besides the actual data, and there are gaps between the (small) command packets.
I hear you. It’s just surprising to see 10GbE cards with 2.0 x8 interfaces alongside 4.0 x8 ones. I’d expect the number of lanes to shrink from the 2.0 x8 (32Gb/s interconnect bandwidth) starting point as the PCIe bus got faster. The 4.0 bus can carry 32Gb/s on just two lanes, and even if you put in a quad SFP+ interface, that’s only four lanes’ worth of data while using equivalent interconnect bandwidth.
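For anyone following along, here is a quick back-of-envelope sketch of the per-lane numbers behind that reasoning (nothing card-specific, just the published per-generation rates and encoding overheads):

```python
# Effective PCIe bandwidth per lane count vs. the 10GbE line rate.
# PCIe 1.x/2.x use 8b/10b encoding; 3.0 and later use 128b/130b.
GENERATIONS = {  # generation: (GT/s per lane, encoding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

def effective_gbps(gen: str, lanes: int) -> float:
    rate, efficiency = GENERATIONS[gen]
    return rate * efficiency * lanes

for gen, lanes in [("2.0", 8), ("3.0", 2), ("3.0", 8), ("4.0", 1), ("4.0", 2)]:
    print(f"PCIe {gen} x{lanes}: ~{effective_gbps(gen, lanes):.1f} Gb/s")
# PCIe 2.0 x8: ~32.0 Gb/s
# PCIe 3.0 x2: ~15.8 Gb/s  (already > 10 Gb/s plus protocol overhead)
# PCIe 3.0 x8: ~63.0 Gb/s
# PCIe 4.0 x1: ~15.8 Gb/s
# PCIe 4.0 x2: ~31.5 Gb/s  (covers a dual-port SFP+ card)
```

So on raw numbers a single 10GbE port does fit in PCIe 4.0 x1 or 3.0 x2; the extra lanes are headroom for the register/descriptor traffic mentioned above and for the multi-port variants built on the same PCB.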
Your explanation makes perfect sense, especially if the card is designed for up to 25GbE or even 100GbE use and the hardware is then later gimped / value priced down to 10GbE, 25GbE, etc. Thank you!