When do dual CPU systems make sense?

I’m basically trying to rice out the machine I’m building and figure out whether a dual CPU system makes sense for me. I know for a fact I’m going to have 11 HDDs, and I still need to figure out how many cache drives I need. I’m pretty sure I’m going with a 384 GB G.Skill kit, probably two kits if I go dual CPU. I know you’re not supposed to mix kits, but if I dedicate each kit to one CPU, is that fine?

I’ll be picking up the various cache drives after benchmarking. I’m mainly trying to make full system restores as fast as possible. I believe I’ll cap out at 20 Gbps on my network. Currently I’m backing up five machines to this machine, with plans to scale up to 100 machines over time. I’ll also be using this as a media server.

The only thing I’d definitely use the second CPU for is distcc compiles. A lot of people say it’s not necessary, but it can still bring compile times down considerably. I think at around 300 cores, major projects like Chromium build near instantly.

Is there any benefit to having two CPUs for a primarily storage-focused machine? What types of workloads would benefit from that? Would it speed up access if one machine is reading while another is writing? What about dividing the storage between both CPUs, might that give any performance benefit?

You should start with the basics. Understand ZFS, pool layouts, performance tradeoffs, special VDEVs, pool free space, etc. Make appropriate choices that match your goals for your system.

BASICS

iX Systems pool layout whitepaper

Special VDEV (sVDEV) Planning, Sizing, and Considerations

The only real benefit is likely to be I/O: each CPU has a certain number of PCIe lanes, and every other device is going to consume some of those lanes. Your 100 GbE NIC (for example) is going to take a few. So is your SAS HBA. So are your NVMe SSDs. A second CPU doubles the number of available lanes.

A second CPU also doubles your available RAM, but that’s unlikely to be an issue for any system that supports two (or more) CPUs.

Put this way: hardly ever…
What kind of I/O are you putting in that would exceed the 50-80 PCIe lanes you can get from a single Scalable/EPYC?

This is not a storage task, is it?
Then it may make sense to oversize the storage server if you’re also doing heavy compute on it… but it could also make sense to separate storage from compute.

I suspect that communication between CPUs would quickly become a bottleneck. And if each CPU has its own storage, its own NIC (and its own subnet…), what’s the benefit of one twin-socket NAS over two single-socket NASes?

If I have the cores on my network, I’d assume I should use them. It might actually cost about the same, or more, to build a second computer just for distcc compiles.

Wouldn’t distcc require installing it on the TrueNAS host OS?

Dual CPU systems, besides potentially giving you more CPU cores, give you the flexibility of adding far more RAM and PCIe lanes. Generally speaking, more RAM is always better with ZFS: more data is hot and ready in RAM (the ZFS ARC).

Generally speaking, more PCIe lanes give you the ability to connect more storage devices (I’m looking at you, NVMe!)
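
If you want a rough sense of how much of that RAM the ARC is actually using, a quick check on a FreeBSD-based TrueNAS CORE box looks something like this (these are the stock OpenZFS kstat names; treat it as a sketch, not gospel):

```sh
# Current ARC size and hit/miss counters via the OpenZFS kstats.
sysctl -n kstat.zfs.misc.arcstats.size     # bytes of RAM the ARC holds now
sysctl -n kstat.zfs.misc.arcstats.hits     # cumulative reads served from ARC
sysctl -n kstat.zfs.misc.arcstats.misses   # cumulative reads that went to disk
```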

I think I’d have to install a daemon on the server, but distcc supports FreeBSD. At worst I just connect to the command line and run pkg. But yes, I’d need the daemon on every machine on my network, and the client on one.
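
For what it’s worth, a minimal sketch of that setup, assuming FreeBSD helpers and made-up host names and subnet (check the distcc man pages for your version):

```sh
# On each helper box: install distcc and run the daemon, accepting
# jobs only from the LAN (the subnet is a placeholder).
pkg install distcc
distccd --daemon --allow 192.168.1.0/24 --jobs 16

# On the client: list helpers as host/job-limit, then build through distcc.
export DISTCC_HOSTS="nas/16 buildbox/8 localhost/4"
make -j28 CC="distcc cc" CXX="distcc c++"
```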

Alright, so if I want instant restoration of backups, I’d want to add some NVMe cache eventually. I’m guessing that before it matters I’d need to upgrade my network infrastructure; 20 Gbps is only about 2.5 GB/s, which a single fast NVMe drive can already outrun.

It just sounds like having two CPUs gives me options in the future, without having to worry about building a new machine.

I mean, you can build an all-NVMe pool in a modern dual-socket system. Server parts, even single-socket boards, already have more PCIe lanes than their desktop counterparts. There are lower-end servers where this is not true, but I’m painting with a broad brush here.

You should probably focus on right-sizing and not min-maxing.

Can you elaborate on the distcc thing? TrueNAS is a storage appliance.

If I have cores on the machine, I’d like to use the idle resources. I might not actually use distcc, but one of its competitors. Maybe icecream, since it has load balancing; icecream might work better if TrueNAS is running a backup at the moment, but distcc might be faster if the hardware is idle.

There’s also FASTBuild, but I’m not sure it actually adds any build improvements if distcc/icecream is already using ccache with a well-written build system.
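
For what it’s worth, ccache and distcc do compose: ccache checks its cache first and only hands misses off to distcc. A minimal sketch, with a hypothetical host list:

```sh
# Have ccache forward cache misses to distcc instead of compiling locally.
export CCACHE_PREFIX=distcc
export DISTCC_HOSTS="nas/16 localhost/4"   # placeholder hosts and job limits
make -j20 CC="ccache cc" CXX="ccache c++"
```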

I already have 72 cores on my network. Adding the dual-socket system should bring the count up to between 120 and 136.

This type of use case would be better suited to a virtual machine or Docker container inside TrueNAS SCALE, not inside the host operating system of TrueNAS CORE.

I can use bhyve, and Desico is most likely forking TrueNAS CORE. There are frequent contributors talking about it.

Which Desico? I googled and couldn’t find anything.

The zVault project never got going; XigmaNAS is the original PHP code from 20 years ago, still updated, but not a “fork of CORE”.

I mean Deciso. I believe the zVault guys are still working on it; they just don’t have a reason to release anything as long as iXsystems is supporting CORE.

They added an issue tracker 3 months ago. They’re probably doing a bunch of business-related stuff and building a bunch of systems to sell.

I assume the OPNsense folks…

Considering iXsystems is offering support for CORE for seven more years because of corporate contracts, there is no reason for any fork to be in a rush to launch a project.

It really should not be too hard for people related to Deciso to take over CORE towards the end of those seven years. I’m sure both projects have overlapping code, and Deciso supports ZFS on their firewalls now. Imagine having a firewall with 300 TB of storage.

Depends on what the fork wants. If they want to follow FreeBSD releases for better hardware support, or want to offer something that leans into bhyve/jails on top of storage, or want to follow new features in ZFS, not just bug fixes… that all could be reason to fork MoreSoonerish™.

I expect CORE will get security fixes and critical (storage-related) bug fixes, and that’s it. No features, no dependency upgrades, OS or otherwise.

If you are installing TrueNAS on Proxmox and running other virtual machines, you could dedicate one CPU to TrueNAS and the second to the other OS and your distcc compiling, making it a dual-purpose machine.
Other than that, I can’t think of a use case where a storage server would need that much CPU available. I guess it would also end up depending on how many PCIe lanes you need.
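
A minimal sketch of that split on Proxmox, assuming VM 100 is TrueNAS, VM 101 is the build box, and 32 cores per socket (the --affinity flag needs a reasonably recent Proxmox; VM IDs and core ranges are placeholders):

```sh
# Give the TrueNAS VM one full socket and pin it there; the second
# socket stays free for the build VM.
qm set 100 --sockets 1 --cores 32 --numa 1
qm set 100 --affinity 0-31     # pin TrueNAS to socket 0's cores
qm set 101 --affinity 32-63    # pin the build VM to socket 1's cores
```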

I’m building a 4-way 10 TB cache pool too: a 4-way SLOG, most likely, and probably a 4-way special vdev for metadata. I think I’ll have an L2ARC as well. If I missed any other disk types, I’ll probably add them too, depending on the benchmark results. So I might need the lanes.
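
For reference, those vdev classes attach to a pool roughly like this (the pool name and FreeBSD device names are placeholders, and the mirrors are shown as pairs rather than 4-way for brevity):

```sh
# Sketch of adding the support vdevs mentioned above to a pool "tank".
zpool add tank log mirror nvd0 nvd1        # SLOG: sync-write intent log
zpool add tank special mirror nvd2 nvd3    # special vdev: metadata / small blocks
zpool add tank cache nvd4                  # L2ARC: second-level read cache
```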

I think I’ll spend a lot of money on this server. But building different servers for different purposes would probably cost even more at the end of the day, after resolving the write and read bottlenecks on each.

The H14DSH can run with one or two CPUs. However, my RAM configuration of 12 sticks might only work with one CPU. Supermicro probably needs a BIOS update, as the manual only lists 6, 18, and 24 sticks being supported on two chips.
