New TrueNAS Core Setup and New Lab / Upgrading!

Hello everyone,

I am planning out a HomeLab upgrade. Presently I am running two Dell R720 servers, both on ESXi 7.0.3, both with local storage for the VMs. One is more of a backup that I send replicas to in case the main one goes down. I also have a Synology NAS where my photos, movies, PC image backups, etc. are kept, along with actual backups of my VMs (Nakivo). Windows folders are all redirected to that NAS (SMB), and I have another Synology NAS offsite that this NAS backs up to daily (OpenVPN server on pfSense). For VMs I run pfSense, Home Assistant, and a few WS2022 installs (Plex Media Server on one; WSUS on another along with PRTG for monitoring, the Ubiquiti controller software, Nextcloud, and PowerPanel BE for a large UPS that can shut down the hosts in a power outage and bring them back up). So, there is the background, current setup, and use.

My upgraded HomeLab (I say lab, but really it's prod) will consist of two Dell R740xd systems (12 x 3.5” drives and 4 x 2.5” drives). The VMs will all be the same, but I am now bringing TrueNAS CORE into the mix (which I had never used until testing and learning over the past few months).

I am looking to spec out the servers as follows:

Main Server - 24/7 always on, plugged into the UPS

-TrueNAS and pfSense installed on BOSS (2 x 1TB NVMe), with pfSense set up to boot first, then TrueNAS
-384GB of RAM, 128GB dedicated to TrueNAS

HBA330 passed directly through to TrueNAS with the following drives and vdevs (see the sketch after this list):
-4 x 2TB SSD drives in a 2 x 2 mirror for my VM datastore (iSCSI?), plus a small portion shared over SMB for the Windows Downloads folder and Syncthing
-5 x 8TB HDD in RAIDZ2 for movies, documents, photos, etc.
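
In ZFS terms, that layout would look roughly like the following (a minimal sketch with hypothetical FreeBSD device names; on TrueNAS CORE you would build the pools through the UI rather than at the shell):

  # 2 x 2-way mirrors, striped: ~4TB raw usable for the VM datastore
  zpool create ssdpool mirror da0 da1 mirror da2 da3

  # 5 x 8TB in RAIDZ2: ~24TB raw usable for media and documents
  zpool create tank raidz2 da4 da5 da6 da7 da8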

Standby Server - Also a backup device for the main TrueNAS server

-TrueNAS and pfSense installed on BOSS (2 x 1TB NVMe), with both set to auto boot (pfSense HA mode)
-192GB of RAM, 64GB dedicated to TrueNAS

HBA330 passed directly through to TrueNAS with the following drives and vdevs (replication sketch after this list):
-2 x 2TB SSD drives striped for my VM datastore (iSCSI?), plus a small portion shared over SMB for the Windows Downloads folder and Syncthing
-(TBD) x (TBD)TB HDD in RAIDZ2 for movies, documents, photos, PC image backups, etc., replicated from the main server.
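
The "replicated from the main server" part would be ZFS replication. Under the hood it amounts to something like this (a rough sketch with hypothetical snapshot, pool, and host names; in practice TrueNAS schedules this for you as a periodic snapshot task plus a replication task):

  # on the main server: take a recursive snapshot, then send it over SSH
  zfs snapshot -r tank@daily-2024-01-01
  zfs send -R tank@daily-2024-01-01 | ssh standby zfs recv -F backuptank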

The difference between the current setup and the upgrade: I am thinking of having both servers on 24x7 with pfSense in HA mode, so I can update things as needed and still have routing (TrueNAS and the VMs will be off in these cases).

So what I am looking for here is some feedback on this setup, and also your experiences with NFS vs iSCSI datastores and anything I might be overlooking. I work with VMware daily as a job, but mostly my role is keeping it updated, spinning up VMs, etc. I'm pretty well versed in IT, but I have not used ZFS before, so I did a lot of reading.

I should mention also that I have a Ubiquiti Enterprise switch and will be connecting each server to it (10 GbE), and each server will also have a direct connection to the other (10 GbE). I'm debating getting a Mellanox ConnectX-3 for a 40 GbE direct connection.

Thanks for taking the time to read through all of this! I look forward to your comments, questions and suggestions!

Welcome to the family! A few resources you want to read before committing:

And if this is not enough for you, the following resource has an extended list.


Assuming you have read the linked resources, you will have noticed that:
  1. You want to make sure your HBA is flashed in IT mode (you can check by running sas2flash -list, or sas3flash -list for a SAS3 controller like the HBA330)
  2. If I remember correctly, CORE does not like Mellanox cards.

And I guess you will end up considering a SLOG, hence… read the following!

Cheers!

Thanks. I’ve read the majority of these, which is what brought me to where I am today and to the feedback and suggestions I’m seeking on my original post. :blush:

Which could explain why your original post looks like a well-designed plan.

The 40 GbE NIC (actually 4 x 10 GbE) may not be useful in a home lab with a few clients. You’d be better off with a single 25 GbE link, though you will not saturate the link with a single spinning RAIDZ2 vdev or a pair of SATA/SAS SSDs anyway.

You could benefit from 1TB of L2ARC in your main system, likely persistent and metadata-only.
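
For reference, at the ZFS level that suggestion boils down to something like the following (a minimal sketch with hypothetical pool/device names; TrueNAS exposes all of it in the UI, and persistent L2ARC is a tunable that recent OpenZFS versions enable by default):

  # add the NVMe device as an L2ARC (cache) vdev
  zpool add tank cache nvd0

  # cache only metadata, not file data
  zfs set secondarycache=metadata tank

  # persistence across reboots (FreeBSD sysctl in OpenZFS 2.x)
  sysctl vfs.zfs.l2arc.rebuild_enabled=1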

Thanks!

I may just stick with 10 GbE links between the hosts and also to the switches for now.

I’ve been very back and forth on going with TrueNAS. All of the above linked articles were very daunting at first. Add on top of that understanding ZFS (which I still do not entirely, e.g., L2ARC, SLOG), lol.

Right now I have a 4-bay NAS in a RAID 10 for my media, and the VMs run on local datastores. I’m quite nervous I’m going to fire this all up and it’s going to be crawling. I’ve read some posts about poor speeds over the last few months. For home use, it’s not as though I’m looking for blazing speed. Haha

Even for the rust drives, what is your personal opinion on 2 mirrored vdevs vs 1 RAIDZ2? Thank you!

Regarding what? Performance? Mirrors for block storage, VMs, and IOPS-intensive tasks; RAIDZx for streamed writes and storage of large files.
Resiliency? Depends on the number of drives and the layout… mission-critical data is usually kept in 3- or 4-way mirrors.
Space efficiency? RAIDZx wins all the way.

I do intend to use mirrors for my VMs. They’re not doing a lot most of the time, which is why I was going to do a 2x2 mirror. I debated 2x3, but since I have a second standby server, which is already somewhat overkill, I didn’t want to go even more overkill. I went with the assumption that a 2x2 mirror would deliver similar performance to my current SSD RAID 10 in the existing server. The difference will be whether I present the storage as iSCSI or NFS; that I would love feedback on. I intend to keep my current setup running and test this entire setup before I go prod with it, just to see the differences first hand and which I prefer.

I host a pretty big Plex library that we use daily, and a few others do also, which is why I’m curious about RAIDZ2 vs mirrors. Presently these files are on the NAS RAID 10 and there is no issue. Having read that with a RAIDZ2 you get the IOPS of one drive, I’m mildly concerned it could become an issue if we’re watching something on Plex and a backup starts to run to those drives at the same time.

Rule of thumb: If you don’t understand a feature, you probably don’t need it.

With only 4 drives, space efficiency is the same. Mirrors have better IOPS and are flexible; raidz2 is more resilient. For home use, I’d go raidz2; put VMs on a dedicated SSD pool.
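
To make the 4-drive point concrete (raw capacity, before ZFS overhead; the 6-drive case is hypothetical, just to show where the two layouts diverge):

  4 x 8TB:  2 x 2-way mirrors -> 2 * 8TB = 16TB usable (50%)
            4-wide RAIDZ2     -> (4-2) * 8TB = 16TB usable (50%)
  6 x 8TB:  3 x 2-way mirrors -> 3 * 8TB = 24TB usable (50%)
            6-wide RAIDZ2     -> (6-2) * 8TB = 32TB usable (~67%)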

A new feature coming down the pike in Electric Eel will tilt the calculus even more towards RAIDZ2.

(Can I mix any more metaphors?)

RAIDZ expansion.

You can easily (in the future) expand a 4-wide RAIDZ2 to 5-wide or 6-wide, increasing storage efficiency.

You can’t increase the storage efficiency when using mirrors, and it’s hard to convert to RAIDZ2 after the fact.

IIRC it’s more complicated than that… I remember talk about the space-to-parity ratio staying locked at the same proportions even after expanding the vdev. But that discussion was at least nine months ago on the old forum. Maybe things changed, or maybe I remember wrong.

It is more complicated, but a mirror will never get better than 50% storage efficiency.

When you add another disk to a RAIDZ2, you get another disk’s worth of space.

Existing data continues using the old stripe size.

At least I think so :slight_smile:

Existing data continues to use the old parity ratio until it is rewritten. You can let that happen naturally as you access and modify data or you can replicate and rewrite the data all at once manually to recover the space.
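
A worked example of that parity-ratio effect (raw numbers, with hypothetical 10TB drives):

  4-wide RAIDZ2, 4 x 10TB:   (4-2) * 10TB = 20TB usable (50%)
  expanded to 5-wide:        new blocks at 3 data : 2 parity (60%)
                             old blocks stay at 2 data : 2 parity (50%)
  5-wide built from scratch: (5-2) * 10TB = 30TB usable (60%)
  -> rewriting the old data (e.g. replicate out and back) closes the gap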

We have a calculator here (thanks to @yorick for putting in the initial heavy lifting): RAIDZ Extension Calculator | TrueNAS Documentation Hub

I assume this feature would be for SCALE and not CORE?

Correct.