Homelab setup with Proxmox and virtualized TrueNAS help/sanity-check

Hi all, I am configuring my homelab/NAS machine and need a sanity check and some advice. This is my current hardware setup:

  • Motherboard: MSI X570S PG Riptide
  • CPU: AMD 5700X
  • SSD: 2TB NVMe in the M2_1 slot
  • RAM: 48GB 2666MHz ECC
  • HDD: 4x Seagate 10TB directly connected to the SATA controller
  • GPU: 2x RTX A4000
  • NIC: Intel x520-DA2

I want to run Proxmox as the hypervisor and TrueNAS SCALE as a VM on top of it for my NAS. My goal is to run a bunch of VMs and containers on Proxmox: Ubuntu, Windows, Ollama, Nextcloud, PhotoPrism, Plex, Radarr and the like, plus a couple of web servers behind Traefik. Not all of them would be running all the time (they’d be used for testing/learning things like cybersecurity and data engineering), and there wouldn’t be very many users for the web servers/Plex.

I wish I could separate the NAS into its own machine, but because of space, noise, and budget constraints I really can’t. I also don’t have much flexibility to change the hardware.

Since the PCIe x16/x8/x4 slots are all taken up by my GPUs, I don’t have any to spare for an HBA. I do have a couple of PCIe 4.0 x1 slots available for expansion if necessary.

Now my questions are:

  1. I’ll be passing the entire SATA controller through to the TrueNAS VM. So do I partition the NVMe and use it for installing both Proxmox and TrueNAS?

  2. How much RAM should I allocate to the VMs? Is dynamic allocation possible? (My rough guesses at the commands behind questions 2–6 are sketched right after this list.)

  3. Should I use NFS or SMB for accessing data on the TrueNAS VM from my Proxmox VMs/containers? I’ve read that NFS uses synchronous writes and would therefore be slower; would SMB be better in that case? Do I need a separate SSD for SLOG if I use NFS? What speeds could I get, in theory, either way?

  4. I’m planning to use RAIDZ2 for my pool. If I want to expand my storage in the future, would it be possible to add another vdev with just two more HDDs in a mirror? I don’t plan to expand beyond that for now, as I don’t have such high data needs. Even 20TB is way too much for me (famous last words).

  5. At some point I plan to add two more SSDs in mirror mode to act as a “special” vdev for storing metadata, which in theory should make my pool faster. But since I’m out of ports, is it a good idea to use a PCIe x1 to M.2 adapter like the GLOTRENDS PA09-X1?

  6. If I plan to do some video editing directly from the TrueNAS share on my MacBook Pro, what’s a good way to increase read speeds? This isn’t something I’d be doing frequently, so it’s not a high priority. Apart from adding RAM, would reserving some space on the NVMe for L2ARC help?

  7. How can I back up the Proxmox installation and settings, in case the NVMe fails or something? Is it possible to back it up inside the TrueNAS VM? And how would I recover the TrueNAS VM/Proxmox VE in that case?

  8. Adding to the previous question: can I back up the other VMs/containers running on Proxmox VE inside the TrueNAS VM?
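To make the sanity check easier, here are my rough guesses at what questions 2–6 translate to on the command line (pool and device names are placeholders; please correct me where I’m wrong):

```
# Q2: Proxmox memory ballooning per VM (16G max, can shrink to 8G)
qm set 101 --memory 16384 --balloon 8192
# (I gather ballooning is usually disabled for a TrueNAS VM, since the ZFS ARC pins its memory)

# Q3: NFS write behaviour follows the dataset's sync property
zfs get sync tank/media
zfs set sync=standard tank/media   # the default: honour client sync requests

# Q4: add a two-disk mirror vdev alongside the existing raidz2
zpool add tank mirror /dev/sdX /dev/sdY

# Q5: mirrored special vdev for metadata
# (as far as I can tell it can never be removed again from a pool with raidz vdevs)
zpool add tank special mirror /dev/nvmeXn1 /dev/nvmeYn1

# Q6: L2ARC can be a plain partition and is safe to lose
zpool add tank cache /dev/nvme0n1p4
```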

Any other general advice/tips would be greatly appreciated. I know it’s a lot, and I’d be very grateful for any input.

P.S.: I’m an ML engineer with plenty of terminal experience, so I’m not worried about getting my hands dirty, but I’m still a noob at networking/virtualisation. I’d like to (try to) set everything up as optimally as I can from the beginning, so as not to have too many headaches down the road.

Generally, this is not recommended; there are quite a few discussions about it on this forum, usually started by people asking for help because they ran into problems (i.e. losing their data).

I’d check whether TrueNAS EE (Electric Eel) has its own versions of these apps that meet your needs and would let you run TrueNAS on bare metal.


I’ve read in many places that TrueNAS isn’t good as a hypervisor. Does that still hold true for the latest RC (Electric Eel)? What would I lose by using TrueNAS as the hypervisor instead of Proxmox? I’m also considering just Proxmox with ZFS, serving the shares from Proxmox as well.

That won’t work. Or rather, the chances that it will work are very, very small.

Onboard SATA/SCSI adapters/controllers don’t like being passed through to a VM.
You will find plenty of threads about this on the Proxmox forums (one of them is mine).

I wanted to do the same with my Supermicro MBD-H11SSL-I-B.
After trying every single suggestion on the Proxmox forums I eventually got it to work, but that success was short-lived: the controller’s ID changed every couple of reboots, which meant that after a reboot no SATA ports might be passed through to the VM at all.

If you want to pass HDDs through to a TrueNAS VM, use a PCI-E HBA.
If you want to pass M.2 SSDs through to a TrueNAS VM, use a PCI-E card that comes with a PLX chip (do NOT try the cheap bifurcation route, or you will find yourself in a world of pain some time after you’ve deployed everything).

I had to learn that the hard way. :-/
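For what it’s worth, once you have a proper HBA the passthrough itself is simple; a minimal sketch (the PCI address is an example, yours will differ, and IOMMU must be enabled first):

```
# find the HBA's PCI address
lspci -nn | grep -i -e LSI -e SAS

# hand the whole device to VM 100
qm set 100 -hostpci0 0000:03:00.0

# prerequisite: IOMMU enabled on the kernel command line, e.g. on AMD:
#   amd_iommu=on iommu=pt
```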


> Onboard SATA/SCSI adapters/controllers don’t like being passed through to a VM. You will find plenty of threads about this on the Proxmox forums (one of them is mine).
>
> I wanted to do the same with my Supermicro MBD-H11SSL-I-B. After trying every single suggestion on the Proxmox forums I eventually got it to work, but that success was short-lived: the controller’s ID changed every couple of reboots, which meant that after a reboot no SATA ports might be passed through to the VM at all.

Well, that settles it: I won’t be using TrueNAS, and a Proxmox-only setup is the way to go. Just curious though, I’ve seen many recommendations for passing through the SATA controller and many people who seem to have had success with it. Also, what setup did you end up with?

> If you want to pass M.2 SSDs through to a TrueNAS VM, use a PCI-E card that comes with a PLX chip (do NOT try the cheap bifurcation route, or you will find yourself in a world of pain some time after you’ve deployed everything).

I’m out of PCIe slots at this point, but do you know of, or have experience with, PCIe 4.0/3.0 x1-to-M.2 adapters? Do they work and stay stable?

Well, which version? There have been many changes in this regard.
TrueNAS CORE, based on BSD with jails, has long been the standard; then came the newer TrueNAS SCALE, based on Linux and Kubernetes; and now there is Electric Eel with Docker-based apps.

But you are probably right: if your focus is on apps and VMs, Proxmox will likely be the right choice. If you mainly need storage via a NAS and run a few (or more) apps, then TrueNAS will be the better choice.

I run 4 VMs on SCALE, one of them Windows. It runs flawlessly; I have no need to run anything else.

H11SSL, 128GB ECC, AMD EPYC 7302, 10GbE SFP+ PCI-E NIC.
Connected to the H11SSL SATA ports:

  • 2x 250GB SATA SSDs → mirror for the Proxmox boot pool
  • 4x 1TB SATA SSDs → RAIDZ1 pool for the Proxmox VMs and LXC containers

TrueNAS 24.10 RC2 runs as a VM:

  • its boot disk is a 120GB virtual disk on Proxmox
  • an LSI SAS3008 HBA passed through to TrueNAS, with 5x 16TB HDDs (RAIDZ1 in TrueNAS)
  • an NVIDIA GeForce GTX 1050 passed through to TrueNAS (for HW transcoding in Plex)
  • a KALEA-INFORMATIQUE 4x M.2 PCI-E card passed through to TrueNAS, currently with 2x 1TB SSDs in a simple mirror pool for the Docker containers I run on TrueNAS (config excerpt below)
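In the Proxmox VM config those passthroughs are just hostpci lines; roughly what mine looks like (the addresses are from my machine and purely illustrative):

```
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
# HBA, GPU and M.2 card as whole devices; pcie=1 requires the q35 machine type
hostpci0: 0000:01:00.0
hostpci1: 0000:02:00.0,pcie=1
hostpci2: 0000:41:00.0
```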

TrueNAS is the datastore of my home (SMB); it also hosts:

  • Portainer (to manage Docker containers; the Apps feature does not fulfill my needs yet)
  • Plex server with HW transcoding
  • 2 OneDrive instances to auto-sync directly to/from my NAS storage
  • Syncthing (some backups to an old Unraid system)
  • Watchtower (to update my other Docker containers)

On Proxmox I have a couple more VMs and LXC containers:

  • Lancache (for game downloads/updates)
  • Logging (Grafana)
  • a windows VM
  • etc.

The reason I prefer that separation is that Proxmox is really good at quickly spinning up a VM, backing up, restoring, etc.
I can always switch to a different NAS without impacting any of my other services; on this machine I just migrated from Unraid (which also ran as a VM) to TrueNAS over the last few days. :)
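That also covers your questions 7 and 8: backing up a VM from Proxmox to an NFS share exported by the TrueNAS VM is a one-liner (the storage name truenas-nfs is hypothetical; it’s whatever you named the NFS storage in Proxmox), and the host configuration can simply be tarred somewhere safe:

```
# snapshot-mode backup of VM 101 to an NFS-backed storage
vzdump 101 --storage truenas-nfs --mode snapshot --compress zstd

# minimal backup of the Proxmox host configuration
tar czf /mnt/backup/pve-host.tar.gz /etc/pve /etc/network/interfaces
```

Just keep the chicken-and-egg problem in mind: if the backups live inside the TrueNAS VM, you need enough of Proxmox working to boot that VM before you can restore anything.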

Your problem isn’t only PCI-E slots, it is also PCI-E lanes.
I am not sure you can even run the setup you described: consumer CPUs have a very limited number of PCI-E lanes, so certain functions of the mainboard only work in specific configurations.

For example (I have this with one of my motherboards): installing an M.2 NVMe in “slot2.1” costs me SATA ports 5+6, and installing 2 GPUs means both run only in x8 mode while the 3rd PCI-E slot is disabled entirely.

The mainboard manual explains the limitations.

This is why I also had to move to an EPYC CPU and a Supermicro mainboard, which offer a lot more PCI-E lanes and slots.

You can get good deals on used parts on eBay (I got my last 2 servers that way). :)


PCIe lanes are PCIe lanes, whatever form factor they come in. So, yes, these adapters should work and be stable, with the obvious limitation of a single lane for devices which could use the bandwidth of four.
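If you want to verify what such an adapter actually negotiated, lspci shows it; a quick check (the PCI address is a placeholder for wherever the SSD shows up):

```
# compare the link capability with what was actually negotiated
lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'
# e.g. "LnkSta: Speed 8GT/s, Width x1" means PCIe 3.0 over one lane, a ~1 GB/s ceiling
```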

That would work, but you should rather use the new raidz expansion feature to add more 10TB drives to your 4-wide raidz2 (sketched below), or replace all drives with larger ones, which may well make financial sense a few years from now.
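For reference, raidz expansion (OpenZFS 2.3+, shipped with TrueNAS 24.10) attaches one new disk at a time to the existing raidz vdev; a sketch with placeholder names:

```
# grow the existing 4-wide raidz2 vdev by one disk
zpool attach tank raidz2-0 /dev/sdE

# expansion progress shows up here
zpool status tank
```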

Generally, your initial plan looked very lopsided, and potentially dangerous.
Consider whether you could have a small server for Proxmox, with mostly SSDs, and a bigger and noisier server (but not necessarily very big and noisy) for TrueNAS. Four drives now, and potentially six later, would fit in a shoebox-sized Fractal Design Node 304 case.


Very big shoes! Closer to two boot boxes really.

TrueNAS makes an adequate hypervisor if your needs are modest. The benefit is that running TrueNAS on bare metal is fast and reliable, and gives the apps, containers, sandboxes, and VMs access to the storage at maximum speed.

And TrueNAS has the best and easiest-to-use NAS/ZFS capability, bar none.