I’m looking for help selecting some used servers. I’m leaning towards supermicro servers with X10 motherboards. I’m looking at 1U, 2U, and possibly 3U or 4U servers.
The 2U and 3U/4U boxes will be running TrueNAS. We’re also used to running TrueNAS as the only VM on a hypervisor, using SATA pass-through to connect the drives directly to the TrueNAS instance.
We’ve been using ESXi for this, and would love to migrate that to Proxmox.
Proxmox seems to be much more supportive of older hardware. ESXi 7 and later, much less so.
There will also be a Plex instance somewhere, so “transcoding”.
Thinking hard about 1U servers with 10x 2.5" hot-swap bays: 8 bays to a RAID card (for 3 mirror pairs and 1 or 2 warm spares). The other 2 bays might be swell for (mirrored?) boot drives.
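That 8-bay layout maps cleanly onto a ZFS pool of three mirror vdevs plus spares. A sketch, with placeholder pool name and device paths (on TrueNAS you’d normally build this in the UI rather than at the CLI):

```shell
# Hypothetical layout for the 8 bays behind the HBA:
# three mirror pairs plus two warm spares.
PAIRS=3
SPARES=2
DATA_DISKS=$((PAIRS * 2))
TOTAL_BAYS=$((DATA_DISKS + SPARES))
echo "using $TOTAL_BAYS of 8 bays"

if command -v zpool >/dev/null 2>&1; then
  # Device names are placeholders - use /dev/disk/by-id paths in practice.
  zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf \
    spare  /dev/sdg /dev/sdh
else
  echo "zpool not found - run on the TrueNAS/ZFS box"
fi
```

Usable capacity is roughly three drives’ worth (one per mirror), and any mirror can lose a disk while a spare resilvers in.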
Wondering about 2U servers:
12x3.5" or 24x3.5" NAS box, and/or
24x2.5" NAS/Plex box
It’s not clear if we have the rack space for 3U or 4U servers, but if there are compelling reasons to use these, we will if we can.
I have no idea what HBA or RAID cards would be appropriate for the hypervisor and NAS boxes.
Also interested in these boxes supporting 10G networking.
The above is at least a partial list of what I think we’re looking for.
I’d love to hear from anybody who is open to helping with this.
If TrueNAS is the only VM, then I would strongly suggest running it on bare metal, especially considering Proxmox’s recent behaviour of mounting TrueNAS pools and causing pool corruption.
Stay away from any RAID card that cannot be flashed into a true HBA. An “HBA” or “JBOD” mode on a RAID card is not the same thing, and can cause data loss.
Instead, get a simple HBA flashed to IT mode and a SAS/SATA expander to connect the disks. An LSI 9300-8i plus expander is known to work. Or get a server with an expander backplane.
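For a 9300-series card, you can confirm it’s actually running IT firmware with Broadcom’s `sas3flash` utility. A quick check, assuming controller index 0 (the tool comes from Broadcom’s support site, not distro repos):

```shell
# Verify the SAS3 HBA is in IT mode before trusting it with a ZFS pool.
if command -v sas3flash >/dev/null 2>&1; then
  sas3flash -listall       # one line per controller, showing firmware type
  sas3flash -c 0 -list     # "Firmware Product ID" should end in "(IT)"
else
  echo "sas3flash not installed - fetch it from Broadcom's support site"
fi
```

If the product ID shows “(IR)” instead, the card is still running RAID-capable firmware and needs to be cross-flashed before use.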
The wizard engineer who originally designed and deployed this had the choice to deploy TrueNAS as either a VM or on bare-metal. This was on hardware that had enough SATA ports on the motherboard to use pass-through.
I have used and deployed both IT-flashed HBAs and expanders before, and that’s fine with me.
So right now, I’m looking to spec gear that will all fit in a rack, where the deployment can be replicated in multiple data centers.
Then we need to “make it go”.
If you’re familiar with my situation, I’m both massively oversubscribed and a SPOF. I must not be the one to drive this. So I’m looking for help.
I have found a vendor who has sufficient quantity of affordable used gear that they could be a viable path, but there are enough variables here that we gotta be sure we’re getting a 100% workable solution. This includes things like making sure the spec’d gear doesn’t eat more power than we have in the rack.
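The power question is worth a back-of-envelope pass before committing to a parts list. A sketch with placeholder wattages (use the vendor’s measured draw, or a meter reading, for the real boxes, and your rack’s actual circuit rating):

```shell
# Rough rack power sanity check. All numbers below are placeholders.
CIRCUIT_W=$((120 * 30 * 8 / 10))   # 120V/30A circuit derated to 80% = 2880W
TOTAL_W=0
for draw_w in 350 500 500 700; do  # guesses: 1U hypervisor, two 2U NAS, 24-bay box
  TOTAL_W=$((TOTAL_W + draw_w))
done
echo "estimated draw: ${TOTAL_W}W of ${CIRCUIT_W}W available"
if [ "$TOTAL_W" -gt "$CIRCUIT_W" ]; then
  echo "over budget - shed load or add a circuit"
fi
```

Worth sizing against worst-case draw (all drives spinning up at once), not idle numbers, since spin-up is when a marginal circuit trips.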
If the help I can get is Person A helps with the initial hardware and then I have to find Person B to handle the ongoing TLC, that’s tolerable. If Person B is happy with what Person A did, awesome. If not, that’s just more time and $ to get things to a place where there is long-term stability.