TrueNAS SCALE - HD setup questions

Hi,

I’m setting up a new-to-me used server and plan on running TrueNAS SCALE. The goal is to have redundant disks for the TrueNAS install (boot), the VMs, and storage. I’ve detailed the hardware below and wanted to hear if there are any foot-guns in my plan. This is a homelab server with low-to-mid utilization (nothing like an office).

  • Asrock B550M Pro4
  • AMD Ryzen 5700X
  • 64 GB ECC ram
  • (2) 2.5" 256GB SATA III 3D NAND drives for TrueNAS to live on
  • (2) 1tb nvme gen 3 drives for virtual machines to live on
  • (2) 14tb SATA enterprise drives for storage (soon to be 4)

The 14 TB storage is currently a btrfs RAID 1, which I plan on moving to ZFS (I have a spare 14 TB drive to copy data to, and it’s backed up).
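For the copy itself, a minimal sketch, assuming the old btrfs array is mounted at /mnt/old and the new pool is named tank (both are placeholder names):

    # create a dataset on the new pool to receive the data
    zfs create tank/data

    # copy everything, preserving hardlinks, ACLs, and extended attributes
    rsync -aHAX --info=progress2 /mnt/old/ /mnt/tank/data/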

I’ve never used TrueNAS before because I didn’t want to virtualize it; now that I’ve noticed it can run VMs, I plan on installing it directly on the hardware.

Where did I mess up? Any tips on initial setup? I’m going to use the beta, as I don’t feel like migrating the VMs from KVM.

Thank you!

I forgot to mention what VMs will initially run on the server:

  • Minecraft
  • CS2
  • Forgejo
  • Woodpecker CI/CD
  • Semaphore UI

Without looking at the details of the motherboard/processor in terms of ports, PCIe slots and lanes, etc., this looks pretty good.

My only advice would be to use Docker images or (in Fangtooth) the new Linux system containers (LXC) wherever you can instead of VMs, because the overhead will be much lower and you won’t need to dedicate resources to them.
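For example, here is a minimal sketch of running the Minecraft server as a container instead of a VM, assuming the widely used itzg/minecraft-server image and a hypothetical dataset path for persistent data:

    # EULA must be accepted or the server refuses to start;
    # /mnt/tank/apps/minecraft is a placeholder path
    docker run -d --name minecraft \
      -e EULA=TRUE \
      -p 25565:25565 \
      -v /mnt/tank/apps/minecraft:/data \
      --restart unless-stopped \
      itzg/minecraft-server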


Thank you, I missed that Fangtooth will support LXC containers. I will use those for almost everything, then.

On reflection, the one other thing I would add is that for future expansion you would be better off with a single 4x 14 TB RAIDZ2 than with two 2x 14 TB mirrors. Usable capacity is roughly the same (two drives’ worth of redundancy either way), but RAIDZ2 survives any two disk failures, whereas a pool of mirrors is lost if both disks of the same pair die.

To get there from your current environment, you would need to build a 3-wide RAIDZ2 (which the Fangtooth UI may or may not allow; if not, you will need to use the CLI), copy your data over from the existing 14 TB drive, and then do a RAIDZ expansion to add the 4th drive and rewrite the data to reset the parity.
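As a sketch, the expansion step would look like the following, assuming a pool named tank whose RAIDZ2 vdev is raidz2-0 (hypothetical names; check zpool status for the real ones):

    # attach a 4th disk to the existing RAIDZ2 vdev (RAIDZ expansion, OpenZFS 2.3+)
    zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEW_14TB_DISK

    # expanded stripes keep the old data:parity ratio, so rewrite the data
    # afterwards (e.g. zfs send/receive, or copy files into a fresh dataset)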

If going the RAIDZ2 route, it is better to create a degraded 4-wide vdev through the CLI than to go through expansion.
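A minimal sketch of the degraded approach, assuming a pool named tank and placeholder device names (substitute your own /dev/disk/by-id paths):

    # create a sparse file exactly the size of a real disk to stand in for
    # the missing 4th drive (sparse, so it consumes almost no space)
    truncate -s $(blockdev --getsize64 /dev/disk/by-id/ata-DISK1) /root/placeholder.img

    # build the 4-wide RAIDZ2 from three real disks plus the placeholder;
    # -f because mixing a file vdev with whole disks triggers a warning
    zpool create -f tank raidz2 \
      /dev/disk/by-id/ata-DISK1 \
      /dev/disk/by-id/ata-DISK2 \
      /dev/disk/by-id/ata-DISK3 \
      /root/placeholder.img

    # take the placeholder offline immediately so it never receives data;
    # the pool is now a degraded RAIDZ2 (RAIDZ1-level redundancy)
    zpool offline tank /root/placeholder.img
    rm /root/placeholder.img

    # after copying the data over, resilver the freed 14 TB drive into place
    zpool replace tank /root/placeholder.img /dev/disk/by-id/ata-DISK4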


I think this depends on the technical skills of the user.

Generally speaking I would say that using the UI is preferable.

But yes, once you have to go to the CLI to achieve what you want, I would agree that it is probably better to create a degraded RAIDZ2 (equivalent to RAIDZ1 in terms of initial redundancy) and then resilver the additional drive in to get the second level of redundancy, rather than to use RAIDZ expansion.


Thank you, I am very comfortable in a terminal.

I’ll need to research ZFS in more depth. I can order a 4th drive and then run a RAIDZ2 setup. I appreciate the help!
