Questions about virtualized TrueNAS

Hello all, just have a few questions regarding running TrueNAS virtualized under Proxmox with HBA passthrough.

I recently started a discussion about the Aoostar WTR Max, asking if it would be a good box to run TrueNAS on. After discussing it with etorix, I started to wonder where the costs were being cut to achieve such a low price. Then I started thinking about just using my existing Proxmox server with a few upgrades to add ECC memory and an HBA to pass through to TrueNAS. A couple of questions for the community regarding this approach, though.

  1. What is the current feeling on virtualized TrueNAS under Proxmox? I’ve seen numerous topics on this setup, and there always seems to be someone recommending against it due to silent ZFS corruption. Is this a valid concern or mostly just a myth? This will be for homelab use, so nothing I do will be mission critical, but I’d prefer not to end up with a corrupted ZFS pool.

  2. Is it possible or advisable to use a virtual disk as the log or cache VDEV, or should you always pass through a physical device? I’m mostly interested in a log VDEV for async NFS shares, but I’m curious about cache VDEVs as well.

Any thoughts on this setup are appreciated!

You will get mixed reactions to this question. One of the major issues is “discipline”: ensuring that all your VMs on Proxmox which may connect to TrueNAS have been powered down “before” powering down the TrueNAS VM. You can see how this would be critical. I have been virtualizing TrueNAS on ESXi for over a decade now and I don’t think it is difficult to manage the VMs properly; however, you do need to do this correctly.
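For what it’s worth, Proxmox can enforce part of that ordering automatically at host boot and shutdown through its startup/shutdown order settings. A minimal sketch (the VM IDs below are hypothetical, and this only covers host-initiated start/stop, not someone manually stopping the TrueNAS VM while clients are running):

```
# On the Proxmox host: give TrueNAS (VM 100 here) the lowest startup
# order so it boots first; shutdown runs in reverse order, so it is
# also stopped last. 'up=60' waits 60s before starting the next VM,
# giving the shares time to come online.
qm set 100 --startup order=1,up=60

# Any VM that mounts TrueNAS shares gets a higher order, so it starts
# after TrueNAS and is shut down before it.
qm set 101 --startup order=2
```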

When dealing with TrueNAS as a VM, you should never use a virtualized disk “EXCEPT” as the boot drive. Pass through all other drives. Also, do you need these extra drives? Most home users would see little, if any, benefit from them.

I only use virtual drives when I’m testing TrueNAS, typically a new version release. I will not place any critical data on the virtual drives, and I test the crap out of it. I’m testing the new features, generally nothing drive related, as a virtual drive is not the same as a real drive.

Hopefully all of that made some sense.


Search this forum and you’ll find multiple reports of pool corruption under Proxmox, resulting from not blacklisting the controller.
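In practice that means making sure the Proxmox host itself never loads a driver for the HBA. A minimal sketch, assuming an LSI HBA served by the mpt3sas driver (check yours with `lspci -k`; the vendor:device ID below is an example for an LSI SAS3008, not necessarily your card):

```
# /etc/modprobe.d/vfio.conf on the Proxmox host
blacklist mpt3sas

# Alternatively (or additionally), bind the card to vfio-pci by ID so
# nothing else can claim it:
options vfio-pci ids=1000:0097

# Apply and reboot:
#   update-initramfs -u -k all
```

If the host ever imports or writes to the pool while the TrueNAS VM also has it, that is exactly the corruption scenario those reports describe.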

Always use controller passthrough for ZFS (in production). More so if performance is the point, and even more so if the point is data integrity (SLOG).
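And if you do end up adding one, the log or cache device should be a passed-through physical disk referenced by its stable ID from inside the VM, never a virtual disk. A rough sketch (‘tank’ and the device names are placeholders; on TrueNAS you would normally do this through the web UI rather than the shell):

```
# Attach a SLOG and an L2ARC to an existing pool, by-id so the device
# mapping survives reboots:
zpool add tank log /dev/disk/by-id/nvme-ExampleSSD_serial1
zpool add tank cache /dev/disk/by-id/ata-ExampleSSD_serial2
```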

As already pointed out by @joeschmuck, the real question is whether you need a SLOG and/or L2ARC to begin with.

If you do mean “ASYNC”, a SLOG is of no use at all.
If you meant “SYNC” (no ‘A’), the question is whether these shares do require synchronous writes. Data shares can be set to asynchronous.
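You can see which behaviour a dataset uses via the `sync` property; a short sketch (‘tank/media’ is a placeholder dataset):

```
# sync=standard  honours whatever the client asks for (NFS clients
#                often request sync writes)
# sync=always    forces every write through the ZIL; a SLOG helps here
# sync=disabled  treats all writes as async; a SLOG is never used
zfs get sync tank/media
zfs set sync=disabled tank/media
```

Keep in mind that `sync=disabled` trades away the last few seconds of writes on a crash, which is the same trade-off the SLOG question turns on.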


I have two TrueNAS systems virtualized on two different Proxmox hosts using HBA passthrough. I use one TrueNAS as a backup; it’s the smaller of the two with less memory allocated, but it has the same amount of storage as my main TrueNAS. I’ve got 32GB allocated to my main TrueNAS and it has a 7TB RAIDZ1 volume.

I use SMB and NFS to share files, and I have a WebDAV server to make my volume available over the Internet for cloud storage, as I’m able to mount the WebDAV share as a volume on my Mac devices while out and about. The WebDAV share has excellent R/W performance over a WAN; it is authenticated, uses TLS, and runs through Cloudflare tunnels.

Anyway, I’ve never had any ZFS corruption using a virtualized TrueNAS; I think that’s all urban legend rumors. TrueNAS runs virtualized quite well with HBA passthrough, and I get a lot of bang for my buck using those two hosts as Proxmox servers. In addition to TrueNAS, I have a virtualized UNIX desktop VM using GPU passthrough, docker containers for various things, etc…

I only use virtualized disks for the TrueNAS boot-pool, and I have them mirrored. My RAIDZ1 pool is made up entirely of SSDs, so I decided not to use a SLOG; I’m not convinced I need one, but I have room on the HBA for more SSDs.
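For reference, the HBA passthrough both of these setups rely on boils down to a couple of commands on the Proxmox host. A sketch assuming IOMMU is already enabled (e.g. `intel_iommu=on` on the kernel command line); the PCI address and VM ID are examples:

```
# Find the HBA's PCI address and see which driver currently claims it:
lspci -k | grep -iA3 'sas'

# Pass the whole controller through to VM 100:
qm set 100 --hostpci0 0000:01:00.0
```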