CPU: Atom C3758 vs Xeon D-1541

Currently my primary TrueNAS Scale system is built around the Supermicro A2SDi-8C-HLN4F motherboard, which has an Atom C3758 CPU. It works great!

I have another system (not TrueNAS) that is about to be upgraded; that system uses the Supermicro X10SDV-TLN4F motherboard, which has a Xeon D-1541 CPU. It also works great (only upgrading for fun, not really necessity).

I’m thinking about using that Xeon D-1541 board for the TrueNAS system. Despite being older, it appears to be substantially more performant than the Atom C3758.

Do I need the performance? No, not currently - but I’m looking at future-proofing. Specifically, the TrueNAS system has an SSD-based mirror and 10G connectivity to the other system, which is a virtualization/container host. All the VMs and containers use the TrueNAS SSD pool for storage.

Besides age, the only other downgrade I can see with the D-1541 board is that it has fewer SATA ports. But six for storage plus NVMe for the OS is all I need. (I have three mirrors currently; I’ll likely increase drive sizes in the future, but I don’t want to add more drives.)

Anything else I should consider or I might be overlooking?



I would expect the Xeon D-1541 platform to have higher idle power consumption, since it uses big Broadwell cores compared to the small Goldmont cores in the C3758.


I have that same motherboard and it’s been amazing, first running straight Linux and, for the last year or so, TrueNAS SCALE. It may be a little more power-hungry than the Atom, but it’s still only 35 watts, so pretty frugal.

SATA-port-wise, I just dropped an LSI 9300-16i HBA in mine and it works great. Given you have dual 10GbE and dual GbE on board, there’s no other use for the PCIe slot anyway, unless you need a GPU.

I’ve been keeping an eye out for better options, but to be honest about the only better option I’ve come across is the higher-core-count variants of the same board; the fact that they are still selling at RRP or higher suggests many people agree.

One thing to note: the BIOS on that board can be a little awkward as far as setting the correct boot device. It just does some odd things (it feels like you have to stand on one leg whilst rubbing your tummy to get it to boot to the right device during install and first boot), but once sorted it’s rock solid moving forward. It’s also worth updating the BIOS and firmware to the most recent versions; that’s helped mine.


Love the IPMI too; not sure if the A2SDi-8C-HLN4F has it as well, but it’s a godsend :slight_smile:

The A2SDi has an ASPEED 2500 BMC. I actually retired an X10SDV-6C-TLN4F for an A2SDi-H-TF in one of my NAS builds because of the upgraded IPMI on the Atom board…

That is a legitimate concern. I’m guessing that the idle power consumption between the two boards is similar. Clearly the power draw ceiling of the Xeon board is much higher. The other system upgrade will take some time, but when the X10SDV (d-1541) is finally freed up, I’ll try to take some measurements.

I’m actually using old 5000-series Solarflare 10GbE NICs in both systems. The Atom board doesn’t have native 10GbE. On the Xeon board, at least under Proxmox 7.x, I can’t get the native 10GbE to actually link up at 10G (it links and passes traffic, but one side thinks it’s at 1G and the other at 10G, and actual throughput won’t go above 1G). Plus the Solarflare card has SFP+, which I prefer to RJ45 for 10G.
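For anyone wanting to sanity-check that kind of mismatched negotiation on their own link, here’s a minimal sketch comparing the speed each end reports. The sample `ethtool`-style strings are stand-ins for real output (on a live box you’d capture `ethtool <iface>` via `subprocess` instead):

```python
import re

def parse_speed(ethtool_output: str) -> str:
    """Pull the negotiated speed (e.g. '10000Mb/s') out of ethtool-style text."""
    match = re.search(r"Speed:\s*(\S+)", ethtool_output)
    return match.group(1) if match else "unknown"

# Hypothetical output from each end of the link (assumed, for illustration)
side_a = "Settings for eth0:\n\tSpeed: 10000Mb/s\n\tDuplex: Full\n"
side_b = "Settings for eth0:\n\tSpeed: 1000Mb/s\n\tDuplex: Full\n"

if parse_speed(side_a) != parse_speed(side_b):
    print("link speed mismatch:", parse_speed(side_a), "vs", parse_speed(side_b))
```

If both ends agree at 10000Mb/s but throughput still tops out near 1G, the problem is more likely elsewhere (cabling, transceiver, or driver).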

And speaking of power consumption, I’m using old Solarflare cards because they have very low power draw (5W or less per the spec sheet); they’re also readily available dirt cheap on eBay. I monitor total power draw on my UPS for my “stack” (switch, servers, firewall, etc.). The long-term average power draw actually went down when I stopped using the on-board network ports of the Xeon-D board and used the Solarflare instead.

So you’re actually going in the opposite direction of what I’m considering. Per Supermicro’s website, both boards have ASPEED AST2400 BMC: X10SDV-TLN4F, A2SDi-8C-HLN4F. In your case, what specific IPMI upgrades does the Atom board have over the Xeon board?

Yeah, both A1S* and A2S* boards use the ASPEED AST2400 with the exact same firmware as the X10* boards. I just double-checked.

My bad on specs. But the Atom board happily shares its IPMI over a data Ethernet cable, which I couldn’t get with the X10SDV. For a backup NAS in a remote room, it makes cabling a little simpler.

Ah yes, the -TF versions are missing the I210 NICs that IPMI can ~~invade with its disgusting tentacles~~ ~~hook its claws into~~ share with the host. That’s very annoying; I once spent an hour messing around before realizing that this was going on. Both the -F and -TLN4F versions are fine, though.


I have the X10SDV-TLN4F board. I really like it :slight_smile:

You could add a quad M.2 carrier or an HBA in the x16 slot (which can be bifurcated to 4x/4x/4x/4x).
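To put rough numbers on what that bifurcated slot buys you (link-rate arithmetic only; real drives and protocol overhead will land lower):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
pcie3_lane_mb_s = 8_000 / 8 * (128 / 130)   # ~985 MB/s per lane

# Each M.2 drive on a bifurcated x16 carrier gets an x4 link
m2_x4_mb_s = 4 * pcie3_lane_mb_s

# Raw 10GbE line rate, for comparison
gbe10_mb_s = 10_000 / 8

print(f"one x4 M.2: ~{m2_x4_mb_s:.0f} MB/s vs 10GbE: ~{gbe10_mb_s:.0f} MB/s")
```

So a single NVMe drive in that carrier already has roughly triple the raw bandwidth of the on-board 10GbE link.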

You may want to check out my Node 304 build log linked from my signature.

I’ve never used Proxmox, but I’ve had no issues with the onboard 10GbE in either Linux or TrueNAS CORE/SCALE. I appreciate that SFP+ is preferred to RJ45, but I needed the extra SATA ports :slight_smile:

Reviving this thread to (1) re-focus the original question, and (2) share some data that may be interesting.

The question I really meant to ask from the start is: is a C3758-based TrueNAS Scale system, with SSDs and 10GbE networking, likely to be bottlenecked by the CPU when used exclusively for storage?

In my case, the TrueNAS server is used strictly for storage; the only service I have running on it is Nextcloud. Consider this hypothetical: all at once, someone is syncing hundreds of GB of photos from their phone via Nextcloud, another person is doing video editing (their working directory is hosted on the TrueNAS system), and two others are independently streaming two different 4K Blu-ray rips (no transcoding). While this scenario is almost entirely I/O-bound, the CPU will be active - but will it be a bottleneck?
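For a rough sense of scale, here’s a back-of-the-envelope sketch of that scenario. The per-workload rates are my own assumptions, not measurements:

```python
# Raw 10GbE line rate in MB/s
GBE10_MBPS = 10_000 / 8  # ~1250 MB/s

# Assumed sustained rates for each concurrent workload (MB/s)
workload_mb_s = {
    "nextcloud_photo_sync": 120,   # roughly a saturated ~1Gbit phone/Wi-Fi uplink
    "video_editing_scratch": 300,  # bursty reads/writes from one editor
    "4k_bluray_stream_1": 12,      # ~100 Mbit/s UHD remux, no transcoding
    "4k_bluray_stream_2": 12,
}

total = sum(workload_mb_s.values())
print(f"aggregate demand: {total} MB/s of {GBE10_MBPS:.0f} MB/s "
      f"({100 * total / GBE10_MBPS:.0f}% of the 10GbE link)")
```

Even with generous assumptions, the combined load sits well under half of what the 10GbE link can carry, which suggests the network and disks, not an 8-core CPU, set the ceiling.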

That question isn’t directly answered, but this ServeTheHome review seems to suggest the C3758 is more than capable: iXsystems TrueNAS Mini X+ ZFS NAS Review.

One of the things mentioned above was the worry that the Xeon D-1541 has higher power consumption than the C3758. Of course the power draw ceiling (i.e. TDP) is higher, but I expected idle draw to be similar. I was wrong! I put a Kill-a-Watt meter on my A2SDi-8C-HLN4F (C3758) TrueNAS system in-situ for a couple of days to get the average power draw: about 52 watts.

Then I replaced the main board with the X10SDV-TLN4F (D-1541) - I also increased the memory from 2x8GB UDIMM to 2x16GB UDIMM. The rest of the system was unchanged. Average power draw over 24 hours went up to about 67 Watts.

That was surprising. Then I thought some of that extra draw might be due to the on-board NICs, which I’m not actually using (opting for the Solarflare PCIe NIC), so I disabled them last night. (And for anyone who stumbles on this thread and wants to do the same: you can’t disable the Ethernet controllers in the BIOS; you have to do it via physical jumpers on the board itself.) There are only about 12 hours on the Kill-a-Watt so far, but the draw is down to about 60 watts.

So it indeed looks like the A2SDi-8C-HLN4F has a lower power draw than the X10SDV-TLN4F, though the doubled RAM likely accounts for some of the added draw. Now I’m running some power consumption tests on the A2SDi-8C-HLN4F alone - stock TrueNAS install, only a single SATA SSD for OS. I’ll disable the 4x on-board i350 NICs and also try different amounts of memory, including UDIMM vs RDIMM.

Ultimately, it’s looking like I’ll move back to the A2SDi-8C-HLN4F, as long as I’m confident the CPU itself is unlikely to be a bottleneck.


My gut feeling is that 10G networking and/or disk I/O will be the bottleneck in your test scenario, not the CPU.


It will depend on how the data is shared, what is being done to it (compression, encryption, …). The main CPU limitation is likely to be when dealing with single connections that aren’t multithreaded (e.g. SMB).

That said, even a C2750 could saturate 10GbE over SMB, with multiple users, so it’s unlikely that a C3000 CPU, especially a C3758, will represent a meaningful bottleneck.


The D-1541 has far more PCIe lanes at its disposal than the Atom board. If future-proofing is a concern, that is the board to use. The watt delta will be minimal under similar working conditions, and the D-1541 board can run far more threads (8C/16T vs. the Atom’s 8C/8T).

Assuming you can run Turbo mode w/o throttling, the D1541 will be marginally faster at 2.7GHz vs. 2.2GHz for the Atom. Between core count and core speed, that might make a difference re: SMB, but only if you have a number of users and the network, RAM, and the pool are not in the way.
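Just on clocks, that gap works out to roughly 23%, before the Broadwell vs. Goldmont IPC difference (which further favors the D-1541) is counted. A trivial sketch:

```python
# Clock-speed-only comparison; ignores IPC differences between the
# D-1541's Broadwell cores and the C3758's Goldmont cores.
d1541_turbo_ghz = 2.7   # D-1541 max turbo
c3758_ghz = 2.2         # C3758 fixed clock (no turbo)

advantage_pct = 100 * (d1541_turbo_ghz / c3758_ghz - 1)
print(f"clock advantage: {advantage_pct:.0f}%")
```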

I am very happy with my motherboard, even if the -2C- version likely would have served me better with the benefit of hindsight. That version costs $500 less and has a higher core speed, which is better for my use case, since all the other add-ons I thought of running here (ZoneMinder, VMs) turned out to be unusable.


Agree to a point. If fast storage is part of the use case, then the far higher number of PCIe lanes typically available with D-Series motherboards are super helpful. Ditto on-board 10GbE, HBA, etc. Atom boards are usually good for Mini-ITX and little else, D-series boards come into their own at Flex-ATX and up. A D-series board in a Mini-ITX is a waste of a perfectly good CPU since all those PCIe lanes will not get used.

I have yet to get even close to saturating a 10GbE, but that’s the result of a single-VDEV pool populated with HDDs. Even so, I have managed to saturate a single core in my CPU at times when the sVDEV returns far more data than the CPU / SMB process can handle.


So that’s an 8W difference… and 16GB of extra RAM.

It’s been a while since I’ve tested it, but I recall being surprised at how much extra power a stick of RAM drew…

For science it would be interesting to reduce the memory of the Xeon D system to match the Atom one :wink:

I think it’ll boot with just one stick.

EDIT: Quoting myself:

> DDR4 DIMMs use 6W per 16GB

(Wow, that thread was a blast from the past, and guess what: the predicted build-out is actually where it’s at now, 8 years later ;))
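Putting the thread’s measured averages next to that 6W-per-16GB rule of thumb (a rough sketch; the wattages are the figures reported above):

```python
# Measured long-term averages from earlier in the thread
atom_idle_w = 52        # A2SDi-8C-HLN4F with 2x8GB
xeon_idle_w = 67        # X10SDV-TLN4F with 2x16GB, on-board NICs enabled
xeon_nics_off_w = 60    # same board, on-board NICs jumpered off

# Rule of thumb: ~6W per 16GB of DDR4
extra_ram_gb = 32 - 16
ram_delta_w = 6 * extra_ram_gb / 16

nic_delta_w = xeon_idle_w - xeon_nics_off_w
unexplained_w = xeon_nics_off_w - atom_idle_w - ram_delta_w
print(f"NICs: ~{nic_delta_w}W, extra RAM: ~{ram_delta_w:.0f}W, "
      f"platform itself: ~{unexplained_w:.0f}W")
```

On those numbers, nearly all of the 15W gap is the NICs and the doubled RAM; the two platforms themselves would idle within a couple of watts of each other.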


I don’t see how the draw can be that different once all the non-CPU deltas have been accounted for. Generationally speaking, the two processors are not that far apart and the OEM in question is the same. It’s not like trying to compare a MacBook Pro running an M3 vs. a MacBook Pro running an i7 and then wondering why the 4-year older system cannot handle decoding a video stream as long or as well as the M3-equipped MacBook Pro.

I had no doubt that the on-board SFP+ ports on my motherboard consume power, just as the HBA does. Hence, I use both. I expect the biggest power draw in my system to be the HDDs, followed by the motherboard and then the SSDs.


If fast (= SSD) storage is part of the equation, go Xeon-D with a SAS HBA / bifurcating risers / a PLX card as appropriate.
SATA ports on A2SDi boards are not all equal: Only the individual ports reach > 500 MB/s while the ports from SFF-8643 connectors are limited to half of that—enough for HDDs, limiting for SSDs. Of course, even with this limitation, a pool of 10-12 SATA SSDs on an A2SDi-H board is a fair match for the on-board 10 GbE NIC.
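A quick sketch of why even the halved port speed still matches the NIC (the port speeds are as described above; the drive count and ideal linear scaling are my assumptions):

```python
# A2SDi SATA port speeds as described above
direct_port_mb_s = 500   # individual SATA ports: > 500 MB/s each
shared_port_mb_s = 250   # SFF-8643-fed ports: roughly half

# Hypothetical wide SSD pool, assuming ideal read scaling across drives
drives = 10
pool_read_mb_s = drives * shared_port_mb_s

gbe10_mb_s = 10_000 / 8  # raw 10GbE line rate
print(f"{drives} drives on shared ports: ~{pool_read_mb_s} MB/s "
      f"vs 10GbE: ~{gbe10_mb_s:.0f} MB/s")
```

Even at 250 MB/s per drive, ten SSDs in aggregate offer about double the raw 10GbE line rate, so the halved ports only hurt per-drive workloads, not pool-wide throughput over the network.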


Exactly, it is these kinds of compromises that turned me off the C3xxx series of motherboards - the designers do a lot of magic to open as many ports as possible but some of them have to share PCIe lanes with a card slot, etc. Ultimately, a C3xxx motherboard can be a good fit for a small NAS (sub-10 drives?) via an on-board HBA but anything beyond that really calls for a CPU with more PCIe lanes available in the first place.

What I found so hilarious between the C3xxx series and, let’s say, the D-1508 version of my motherboard is that the cost difference was practically nil. The only benefit was the smaller form factor, Mini-ITX vs. Flex-ATX. By any other metric (other than perhaps a watt or two at idle), the D-series motherboard is more performant and a better choice - IF you need to address more drives, want to host VMs, etc.