TrueNAS Scale Build

I have an old PC running Windows 10 that I repurposed as a NAS a few months ago by installing TrueNAS Scale. So far, my experience has been positive. However, since the machine is nearly 10 years old, I am planning to build a new server. Over the past few weeks, I have gathered the following parts:

  • CPU: Core i5-13500
  • Motherboard: GIGABYTE B760 DS3H AC DDR4
  • RAM: TeamGroup 2x16 GB, non-ECC
  • CPU Cooler: Cooler Master Hyper 212
  • PSU: EVGA G7 650W 80 Plus Gold
  • Bulk Storage: 3x 8TB WD HDD 7200RPM in RAIDZ1

This server will be used primarily for media hosting and as a file server for a side business my wife and I run, where we need to share files. Additionally, I want to experiment with VMs using the built-in hypervisor.

I am uncertain about the best configuration for boot storage and VMs. I have a 256GB SSD available that could serve as the boot drive, but I am also considering purchasing a small NVMe drive. My motherboard supports 4 SATA connections and 2 NVMe drives. Given that a $30 NVMe drive is not an issue, which of the following options would be best for my needs?

  1. Get a small NVMe for the boot drive and use the 256GB SSD for VMs.
  2. Use the 256GB SSD for boot and get an NVMe drive for VMs.
  3. Use the 256GB SSD for boot and host the VMs on the bulk storage drives.

I would go with option 2.

In my mind the boot device doesn’t have to be NVMe-speed fast. Sure, load/boot times might benefit from it, but once the system has booted, most services are held in RAM anyway. Running VMs will, for the most part, be what benefits most from the NVMe drive’s faster access times.
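For illustration, option 2 could end up looking something like the sketch below from the shell. The pool names and device paths are hypothetical, and in practice the TrueNAS SCALE web UI does all of this for you; this is just to show the layout:

```shell
# Boot pool: created on the 256GB SATA SSD by the TrueNAS installer itself.

# Bulk pool: the three 8TB HDDs in RAIDZ1 (hypothetical /dev/sdX names).
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# Fast pool: the new NVMe drive, holding VM disks.
zpool create fast /dev/nvme0n1

# A zvol on the fast pool to back a test VM's virtual disk.
zfs create -V 32G -s fast/vm-debian
```

These are config fragments, not something to run blindly: the device names will differ on the actual box, and creating pools from the CLI bypasses some of the settings the UI applies.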


I agree with @Redhead out of the choices you have offered, but have the following additional comments:

  1. You will ideally also want SSD space for apps (e.g. for media sharing), not just for VMs. A single pool should be fine for both, I think.

  2. An unmirrored drive for media-sharing apps is fine: you can replicate the pool to the HDDs as a backup. An unmirrored drive for experimental VMs is also fine, but I think you should plan ahead for the VMs becoming non-experimental, at which point you should probably mirror the drive (replication of VM drives as a backup is not great).
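Replicating the app pool to the HDDs can be as simple as periodic snapshot-and-send. TrueNAS schedules this for you as a Replication Task in the UI, but under the hood it amounts to something like the following (pool and dataset names here are made up for the example):

```shell
# Initial full backup: snapshot the apps dataset and send it to the HDD pool.
zfs snapshot -r fast/apps@backup-1
zfs send -R fast/apps@backup-1 | zfs recv -F tank/backup/apps

# Later runs send only the increment between the two snapshots:
zfs snapshot -r fast/apps@backup-2
zfs send -R -i @backup-1 fast/apps@backup-2 | zfs recv -F tank/backup/apps
```

The incremental sends are cheap, which is why this works well for apps; VM zvols change a lot between snapshots, which is part of why replication is a weaker backup story for VMs.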

If you’re buying parts to build a server, and a server with some business uses at that, it is a pity not to go for actual server hardware.
The purpose of the VMs is unknown, but so far nothing appears to require the power of a last-but-one-generation CPU. And 4 SATA ports is really not much for a NAS. Since the motherboard is full ATX, I assume the case could host more than three or four drives.

If possible, try to keep at least a free SATA port for the ease of future expansion or drive replacement.

IMHO you also have to watch out for your Realtek integrated NIC.
And as @etorix says, it is better to keep a SATA port free for whatever may happen. Maybe you can buy a SATA-to-USB adapter and attach your SSD there instead of using the last SATA port, or do that later if you ever need the last SATA port.
I agree with the mirrored NVMe for apps, with an in-site replication task (I’m doing the same).
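For reference, the mirrored NVMe apps pool mentioned above would look roughly like this from the shell (device names are hypothetical; the UI pool wizard does the same thing and is the recommended route):

```shell
# Two small NVMe drives as a two-way mirror for the apps/VM pool.
zpool create apps mirror /dev/nvme0n1 /dev/nvme1n1

# Verify the layout: both devices should appear under a "mirror-0" vdev.
zpool status apps
```

With a mirror, either drive can fail without losing the pool, which is the cheap insurance you want once the VMs stop being experimental.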

Being able to run VMs is just so I can play around and start to understand Linux. I’d rather host them on a server than on my main desktop, as it is often used by my wife and kids.

Thanks for the input.

I decided to go with consumer-grade hardware because I was given the PSU and motherboard by a good friend. I know that the CPU is overkill, but I wanted the 13500 for transcoding. So far I’ve only had to fork out $200 for the CPU and RAM; I had everything else lying around.

The case I have will hold up to six 3.5" drives and two 2.5" SSDs.

I have a USB-to-SATA adapter lying around, so perhaps I will use that for the boot drive and get a couple of NVMe drives for apps.

IMO (and it is just one opinion; many others will be equally valid) you would be better off buying a second-hand, low-power laptop with a small (say 128GB) SSD and a small-ish amount of memory (say 8GB) to play with Linux on, as this will give you a physical screen and keyboard and avoid the complexities of VNC.