Comments on a few builds: TrueNAS with XCP-ng

Welcome to the new forums :smiley: Hoping for some feedback / roasting if need be

Replacing some old and, in some cases, patched-together servers at my office. Also starting a migration path from Hyper-V with local storage → XCP-ng and TrueNAS.

For backups of virtual machines:
Supermicro 4U 36 Bay Server Storage X10DRI-T4
2x Intel Xeon E5-2690 V3 - 14 Cores
768GB LRDIMM (24x 32GB PC4L-2400T)
AOC-S3008L-8LE HBA Controller
To start, I'll have 12x 10TB drives in 2-way mirrors (will add spares later)

For TrueNAS VM storage:
Dell PowerEdge R740xd 24-Bay 2.5" 2U Rackmount Server
2x Intel Xeon Gold 6136 3.0GHz 12 Core 24.75MB 10.4GT/s 150W 1st Gen Processor
1024GB [16x 64GB] DDR4 PC4-2666V ECC RDIMM
HBA in IT Mode (I'll need to replace the PERC H740P)
24x Dell 1.92TB SAS SSD 2.5" 12Gbps RI Solid State Drives
Dell Dual Port 10GBASE-T + Dual Port 1GBASE-T rNDC | Intel X540 I350
2x Dell PE 1600W 100-240V 80+ Platinum AC Power Supplies
11x 2-way mirror vdevs + 2 spares

For XCP-ng (2x):
Dell PowerEdge R650 10-Bay 2.5" 1U Rackmount Server
2x Intel Xeon Gold 6330 2.0GHz 28 Core 42MB 11.2GT/s 205W 3rd Gen Processor
512GB [8x 64GB] DDR4 PC4-3200AA ECC RDIMM
Dell PERC HBA355i 12G SAS HBA Front Controller
2x Dell 800GB SAS SSD 2.5" 12Gbps WI Solid State Drives
Dell Dual Port 1GBASE-T LOM
Dell Dual Port 10GBASE-T Gen3.0 OCP 3.0 Mezzanine Adapter | Intel X710-T2L

Thinking I will need to increase to 25G networking between the hypervisors and the storage?
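The rough math behind that question, as a back-of-envelope sketch (the per-SSD throughput figure is an assumption, not a measured number):

```shell
# Payload ceiling of a 10GbE link vs. what the pool can stream on reads.
link_MBps=1250                              # 10 Gbit/s / 8 bits per byte
per_ssd_MBps=1000                           # assumed ~1 GB/s sequential per 12Gbps SAS SSD
pool_read_MBps=$((11 * 2 * per_ssd_MBps))   # 11 mirror vdevs, both halves readable
echo "pool ~${pool_read_MBps} MB/s vs link ${link_MBps} MB/s"
```

Even with conservative per-drive numbers, the pool outruns a single 10GbE link many times over on sequential reads, which is why 25G (or multiple bonded 10G links) keeps coming up.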

Any other glaring mistakes or obvious bottlenecks?


Where are your SLOG drives?
Besides that I don’t see any obvious mistakes, but I’m also not a hypervisor user.

Fiber would also help with latency; just be zealous in your research… some vendors don’t like to mix cages and transceivers: you know, proprietary stuff brings in more money… but you use Dell, so you’ll know.

No UPS?

I have multiple APCs with network management cards and will use NUT to shut down gracefully. Thank you for asking.
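For anyone curious, the NUT side is just a couple of config fragments; a minimal sketch, assuming the APC NMC is reachable over SNMP (UPS name, address, and password are placeholders):

```ini
# /etc/nut/ups.conf -- one entry per APC with a network management card
[apc-rack1]
    driver = snmp-ups
    port = 192.0.2.10        # NMC address (placeholder)
    mibs = apcc

# /etc/nut/upsmon.conf -- shut this host down when the battery runs low
MONITOR apc-rack1@localhost 1 upsmon <password> primary
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

TrueNAS has a built-in NUT service in the UI, so on the storage box this mostly reduces to filling in the same fields there.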

I’m not a Dell expert, but we can get good deals on off-lease gear here in Thailand. I can always direct-connect the TrueNAS to my main hypervisor if need be…

My SLOG plan is to install an AOC-SHG3-4M2P inside the 740xd with 4x Optane; I can use 2 NVMe striped for my SLOG.
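For reference, adding the SLOG later is a one-liner; a sketch assuming a pool named `tank` and placeholder device names. Worth noting that a mirrored SLOG is usually recommended over a striped one, since losing a log device during a crash can cost the in-flight sync writes:

```shell
# Sketch only -- pool name and device paths are placeholders.

# Striped SLOG (the plan above): both Optanes as independent log devices
zpool add tank log nvme0n1 nvme1n1

# Mirrored SLOG (often recommended -- survives losing one device)
zpool add tank log mirror nvme0n1 nvme1n1
```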

Got you re: the hypervisor, and there are dataset settings like sync to consider. Hopefully someone with more hypervisor experience chimes in.
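On the sync point: VM storage over NFS arrives largely as sync writes, which is exactly where the SLOG earns its keep. A quick sketch of the relevant knob (dataset name is hypothetical):

```shell
# Sketch -- dataset name is a placeholder.
zfs get sync tank/vmstore          # standard: honor the client's sync requests
zfs set sync=always tank/vmstore   # force all writes through the ZIL/SLOG (safest)
# sync=disabled is fast but risks VM corruption on power loss -- avoid for VM datasets
```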

Thank you for taking the time; I appreciate your feedback.


@a.dresner

I’m very interested to see how you get on with this. I’m currently on ESXi, but I’m seriously considering XCP-ng to replace it.

It was time to do some server upgrades and Lawrence Systems has got me very interested.

I do wish XCP-ng had support for Veeam without having to install the agent in each VM. The built-in backup is not application-aware, but in speaking to the guys at Vates, they seem pretty confident that they can onboard me and make it all work out with their own backup.

Bottom line is I’ll get it up and going, put a few VMs on it, and test and test and test… if it’s not cutting the bacon, I’ll roll back to Hyper-V, which has worked well for me over the last decade.


I’m being told that I cannot add the AOC-SHG3-4M2P to the Dell; it won’t work. Must be a Dell firmware lock to sell more of something… I can add a cage in the flex bay and add up to 12 NVMe drives there.

Yup, that’s Dell.

How’s everything going with this one?

Have you got anything set up and running yet?

How have you found Vates to be?

Project is alive; my hardware person was out of town, but he should be back soon and then things can pick up. Unfortunately I have so much travel planned that I’ll try to get the hardware in and start in August.

I will also be doing some speed testing between Proxmox and XCP-ng. I have Proxmox running on 4 boxes; it’s very fast but can break very easily.

Then there’s the backup situation: backing up Exchange, SQL… lots of testing, probably through the whole second half of this year.

Thanks for the update :slight_smile:
I briefly considered Proxmox, but I’m tending more towards XCP-ng as a replacement for VMware.
I was settled on using TrueNAS CORE as the datastore, but it sounds like SCALE might be the way to go now.

I am 90% of the way there myself. XCP-ng is stable, but I question the performance. I’m having a real tough time getting 10Gb networking to work, where with Proxmox that’s straightforward. I’ll be able to test better when my new hardware arrives.
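When the new hardware lands, it may be worth isolating the network from the storage stack first; a common first step (hostnames are placeholders) is a raw iperf3 run between the hypervisor and the NAS:

```shell
# On the TrueNAS box:
iperf3 -s

# On the XCP-ng host -- a healthy 10GbE link should land near line rate
iperf3 -c truenas.lan -P 4 -t 30
```

If iperf3 is slow too, the problem is NICs/switching/MTU rather than XCP-ng's storage path, which narrows the hunt considerably.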

There are two areas of Proxmox that haunt me (device names change when adding new hardware, and shutting down VMs can leave you with stuck processes). However, the interface on Proxmox is way better; all the pop-out menus in Orchestra are a headache. And flipping between VMs, you have to go back to the VM menu and then into that VM, plus you cannot pop out your console. Finding settings in Orchestra takes a lot of digging, where with Proxmox it’s all laid out flat and logical. Going from VM to VM is super fast, pop out a console anytime, etc. I think Vates plans on improving their interface; I really hope so.


I made some tweaks to my 2 potential builds; one goes with NVMe instead of SAS SSD. Would appreciate any feedback.

I’m thinking the faster memory and NVMe option. The individual NVMe drives cost less and provide better performance. Just wondering if that Dell PE PCI-E 12x NVMe Drive Expander Card will be compatible with ZFS and TrueNAS. Note from my rep: “You can only use the OS to control the drives. PERC Controllers are not supported in this configuration.” Which sounds good.

24x SAS/SATA Option:

Dell PowerEdge R740xd 24-Bay 2.5" SAS/SATA 2U Rackmount Server
1024GB [16x 64GB] DDR4 PC4-2666V ECC RDIMM
2x Intel Xeon Gold 6246 3.3GHz 12 Core 24.75MB 10.4GT/s 165W 2nd Gen Processor
Dell PERC HBA330 12G SAS HBA Mini Mono Controller
24x Dell 1.92TB SAS SSD 2.5" 12Gbps RI Solid State Drives [46.08TB Raw]
Dell Dual Port 10GBASE-T + Dual Port 1GBASE-T rNDC | Intel X550 I350
PCIe Slot: Dell Dual Port 25GB SFP28 PCI-E CNA | Intel XXV710-DA2
PCIe Slot: Dell Dual M.2 6G PCI-E BOSS-S1 Controller + 2x Dell PE 120GB SATA SSD M.2 6Gbps RI Solid State Drives
2x Dell PE 1600W 100-240V 80+ Platinum AC Power Supplies
Dell iDRAC9 Remote Access Enterprise License
Dell PE 2U RapidRails II Sliding Rail Kit
Dell PE 2U Front Bezel
2x Power Cords

Purchase Price - $10,080 + Shipping | 2 Year Parts Replacement Warranty

24x NVMe Option:

Dell PowerEdge R740xd 24-Bay NVMe 2.5" 2U Rackmount Server
1024GB [16x 64GB] DDR4 PC4-2666V ECC RDIMM
2x Intel Xeon Gold 6246 3.3GHz 12 Core 24.75MB 10.4GT/s 165W 2nd Gen Processor
16x Dell 3.2TB NVMe SSD 2.5" Gen3 MU Solid State Drive [51.2TB Raw]
Dell Dual Port 10GBASE-T + Dual Port 1GBASE-T rNDC | Intel X550 I350
PCIe Slot: Dell Dual Port 25GB SFP28 PCI-E CNA | Intel XXV710-DA2
PCIe Slot: Dell Dual M.2 6G PCI-E BOSS-S1 Controller + 2x Dell PE 120GB SATA SSD M.2 6Gbps RI Solid State Drives
PCIe Slot: Dell PE PCI-E 12x NVMe Drive Expander Card
PCIe Slot: Dell PE PCI-E 12x NVMe Drive Expander Card
2x Dell PE 1600W 100-240V 80+ Platinum AC Power Supplies
Dell iDRAC9 Remote Access Enterprise License
Dell PE 2U RapidRails II Sliding Rail Kit
Dell PE 2U Front Bezel
2x Power Cords

Purchase Price - $11,780 + Shipping | 2 Year Parts Replacement Warranty