Advice on the best setup for a new TrueNAS SCALE build

G’day all - although I’m a TrueNAS newb, I feel fairly safe rating myself highly on the power user scale. I have an extensive homelab environment and over 30 years’ experience working in technology.

For the past (way too) many years, I’ve been languishing with some really crappy home NAS devices, and making SMB and NFS work for me. But, I’ve run out of my limited storage space, and wanted better homelab functionality, so I finally put enough dollarydoos aside to bite the bullet and build a “real” NAS.

My shiny new build is sitting in a Jonsbo N3 case, running on a Core i9 with 64GB of RAM. TrueNAS is installed on a mirrored pair of 1TB NVMe M.2 SSDs, and I’ve installed 8 x 16TB WD Ultrastar DC SATA drives for primary storage. It’s currently connected via a single 1Gbps interface, with one spare port on the mobo if needed.

My existing homelab is predominantly a 3-node Proxmox cluster (HP DL360s), all connected at 1Gbps on a fully managed switch. All Proxmox nodes back up/snapshot their guests via PBS guests on alternate nodes, to local Proxmox LVM vols that are synced across the cluster.

My desired outcomes are:

  • Two parity drives (I’m assuming RAIDZ2 is the right layout here)
  • HA for many of my Proxmox workloads, including my OPNsense VM and lots of Debian CTs
  • With the exception of Proxmox OS itself, I want all cluster data (CT volumes, etc.) to live on my new NAS
  • SMB and NFS access for my desktop clients and other devices - RasPis, etc - for some file shares
  • Access from within the Proxmox CTs to the above shares, plus a few more dedicated to specific purposes (e.g. NFS shares where I store nightly Postgres dumps)

I guess my primary question is this: in order to keep things simple, would I be best off just having a single, large ZFS data pool exposed over iSCSI and running a Proxmox guest that provides SMB and NFS access, or am I missing a more optimal approach?
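For context, what I’m picturing on the Proxmox side is a ZFS-over-iSCSI storage entry in /etc/pve/storage.cfg roughly like the sketch below. The pool name, portal address and target IQN are placeholders, and I gather the stock providers (comstar/istgt/iet/LIO) may not cover TrueNAS without a community plugin, so treat this as illustrative only. As I understand it, Proxmox also needs key-based SSH access to the NAS so it can create the zvols itself.

```
# /etc/pve/storage.cfg - sketch only; pool name, address and IQN are made up
zfs: truenas-vmstore
        pool tank/proxmox                        # dataset on the NAS that holds the VM zvols
        portal 192.168.10.10                     # TrueNAS iSCSI portal IP
        target iqn.2005-10.org.freenas.ctl:proxmox
        iscsiprovider LIO                        # or a TrueNAS-specific community provider
        lio_tpg tpg1
        blocksize 4k
        sparse 1
        content images
```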

Secondary questions I have swimming around in my head are:

  • Should I go for a separate, physical storage network, or can a dedicated VLAN meet the same need?
  • Is there any practical benefit to having TrueNAS deliver the SMB and NFS shares directly?
  • Assuming yes to the above, should I create multiple datasets on the same RAIDZ2 pool - one for each use case (ZFS/iSCSI, SMB and NFS)?
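To make that last question concrete, the kind of layout I have in mind on a single RAIDZ2 pool is roughly the following (all names are made up, and I realise TrueNAS would normally create these through its UI rather than the CLI):

```
# Sketch of one pool carved up by use case - names are placeholders
zfs create tank/proxmox                          # parent dataset for VM/CT storage exposed over iSCSI
zfs create -V 500G tank/proxmox/vm-100-disk-0    # example of a zvol that would back an iSCSI extent
zfs create tank/smb                              # dataset shared over SMB to desktop clients
zfs create tank/nfs                              # dataset shared over NFS to CTs, RasPis, etc.
zfs create tank/nfs/pgdumps                      # e.g. a child dataset for nightly Postgres dumps
```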

I’ve been doing a lot of searching around but - honestly - the overwhelming majority of hits I get when searching for “proxmox ha truenas zfs” are for articles where people are running TrueNAS as a guest under Proxmox, and Proxmox holds the physical storage. Proxmox, of course, has their own recommendation, which is how I got to thinking ZFS over iSCSI is the right solution for me.

So, if anyone just has a really good article bookmarked they can point me to that discusses a setup similar to what I’m trying to achieve, I’m more than happy to do my own reading too.

Many thanks in advance for all advice, tips and suggestions.

For iSCSI the recommended layout is mirrors.
With 16 TB drives and double redundancy (which I think is the right call for such big spinners), that means 3-way mirrors. Ouch!
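To put rough numbers on that: six of the eight drives as two 3-way mirror vdevs would give about 2 x 16TB ≈ 32TB usable (with two drives left over), whereas an 8-wide RAIDZ2 gives roughly 6 x 16TB ≈ 96TB before overheads.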

Excellent article - thanks for sharing.

Honestly, my VM needs aren’t huge. Right now, I’m only just using over 50% of the 1.2TB of LVM vols on each of my DL360s, so even carving out 3 of my 16TBs as a 3-way mirror for those would be more than ample.

And, with the remaining 5 drives, I’d still get ~48TB usable for my remaining needs. Given I’m coping with only about 22TB (from 2 x 16TB raw RAID5 NASes), that probably more than doubles my space for things like Plex, etc.

But now it’s got me thinking I should be factoring the 1.2TB on each Proxmox node into my plans. If I carve out a 3-way 16TB mirror on TrueNAS for my VMs (and to give me HA across the cluster), then the local LVMs could very easily be used for snapshotting my VMs and other similar backup tasks.
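In other words, something like the following split of the 8 drives, as a back-of-the-envelope sketch (device and pool names are placeholders, and I’d be building the pools through the TrueNAS UI rather than the CLI):

```
# Rough sketch of the split I'm considering - device/pool names are placeholders
zpool create vmpool mirror sda sdb sdc          # 3-way mirror, ~16TB usable, for VM/CT block storage over iSCSI
zpool create tank raidz2 sdd sde sdf sdg sdh    # 5-wide RAIDZ2, 3 data drives = ~48TB usable, for SMB/NFS shares
```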

More thinking to do. Luckily I have a long weekend away camping to ponder it some more.

Thanks again for the share.

With such numbers, get some SSDs for your block storage in a small mirror pool, and make a raidz2 with all HDDs for your big SMB/NFS shares.
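Very roughly, something like this, with device and pool names as placeholders (you would build it in the TrueNAS UI, of course):

```
# Sketch only - device and pool names are placeholders
zpool create fastpool mirror nvme2n1 nvme3n1               # small SSD mirror for iSCSI zvols / VM disks
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh   # 8 x 16TB RAIDZ2, ~96TB usable, for SMB/NFS shares
```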

Believe it or not, that was actually part of my original plan. I have a spare M.2 6-port SATA adapter that I bought after misreading the specs on my mobo - I thought it had 4 x M.2 slots, but it only has 3.

So all 8 SATA ports (including 2 onboard) are used right now. But the mobo does have a single M.2 E-key slot, so I’m researching whether there’s a low-profile riser card I can put into that (there’s not a lot of clearance under my CPU cooler).

Better NOT to use that, as there’s most likely a port multiplier in there.
Do you have a free PCIe slot? (By the way, what’s the full spec?)

The 6-port ones are not necessarily terrible. The issues are that their physical construction can be poor, and they can flex and overheat.

D’oh! Yes, of course. I overlooked that - I have an unused PCIe 4.0 slot. Might be a bit tight with the cabling, but that shouldn’t be too hard to figure out.

So you think I’d be better off freeing up the M.2 slot, and getting an 8-port SATA PCIe card instead? That way, I could have all 8 HDDs on that adapter, and put in a couple of SSDs using the onboard ports.

Edit: nope, can’t really do that either. No room in the case for any more drives.

Get a SAS HBA card that is recommended in the documentation or forums, and attach your SATA drives to that.
