TrueNAS SCALE with Proxmox

I'm configuring TrueNAS SCALE (TN) 24.x with Proxmox (PM) 8.2.2. PM seems to offer many storage options here, so I'm trying to find my way to what is recommended, as well as a sensible storage configuration.

  1. I'm not sure which storage option to use with TN iSCSI.

  2. It seems I could configure storage on the TN side so that each zvol houses one VM. This seems to get very cluttered fast on the PM side. Alternatively, one large zvol could be shared with LVM on PM between multiple VMs. This seems less cluttered and is recommended in various forums. (Both layouts are sketched below.)
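To make the two options concrete, here is roughly what each looks like at the zvol level on TrueNAS (pool and volume names are made up; in practice TrueNAS would create these through its UI or API):

```bash
# Option 1: one zvol per VM, each exported as its own iSCSI extent
# (pool/volume names are hypothetical)
zfs create -V 64G fast/vm-100-disk-0
zfs create -V 64G fast/vm-101-disk-0

# Option 2: one large zvol exported as a single iSCSI extent,
# carved into per-VM volumes by LVM on the Proxmox side
zfs create -V 2T fast/pve-lun0
```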

However, I see the PM guide states:

On block level storage, the underlying storage layer provides block devices (similar to actual disks) which are used for disk images. Functionality like snapshots are provided by the storage layer itself. Examples: ZFS, Ceph RBD, thin LVM, …

So it seems that with a large zvol I give up granular control over snapshots at the VM level?
Just to add to my confusion, each VM on the PM side has a snapshot option.
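For context, that per-VM snapshot option maps to `qm snapshot` on the CLI, and it only works when the underlying storage type supports snapshots (the VM ID and snapshot name below are just examples):

```bash
# Take a snapshot of VM 100 before an update
qm snapshot 100 pre-update

# Roll back if the update goes wrong
qm rollback 100 pre-update

# List a VM's snapshots
qm listsnapshot 100
```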

What might be a sound approach here and how are snapshots best managed?
Thanks

I pass an HBA through to the TrueNAS VM and let TrueNAS handle the disks.

So TrueNAS has direct access to the HBA and the disks as if it was a baremetal install.

Only for the TrueNAS boot disk do I use a small Proxmox disk, because nothing important is stored there and no snapshots, etc. are needed (make sure you always export the TrueNAS config, though).
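For anyone setting this up, passing the HBA through looks roughly like this on the Proxmox host (the PCI address and VM ID are examples; IOMMU needs to be enabled in the BIOS and kernel first):

```bash
# Find the HBA's PCI address on the Proxmox host
lspci | grep -i -e lsi -e sas -e hba

# Pass the whole device through to VM 100 (address is an example)
qm set 100 -hostpci0 0000:01:00.0
```

After restarting the VM, TrueNAS should see the attached disks directly, e.g. via `lsblk`.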

Perhaps I’m misunderstanding something here . . .
My configuration might be like this:
Pool 1: 4 x 1 TB NVMe (striped mirror) on TrueNAS, as an iSCSI zvol for VMs on Proxmox
Pool 2: 8 x 4 TB spinning disks (striped mirror) on TrueNAS, shared over iSCSI as one zvol or more (? - this is the question) for Proxmox
Pool 3: 4 x 4 TB spinning disks (raidz), datastore on the NAS
Pool 4: 4 x 4 TB spinning disks (raidz), datastore …
The HBA on the TrueNAS side (passthrough / IT mode) connects to a JBOD.
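For reference, Pools 1 and 2 as striped mirrors would translate to something like this at the zpool level (pool and device names are placeholders; TrueNAS normally builds this from the UI):

```bash
# Pool 1: 4 x 1 TB NVMe as a striped mirror (two 2-way mirror vdevs)
zpool create fast mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1

# Pool 2: 8 x 4 TB spinning disks as a striped mirror (four 2-way mirror vdevs)
zpool create bulk mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
```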

What I'm really unsure about is whether to configure one zvol for each VM, or to have one large zvol and allow PM to divide it into volumes for each VM.
If I have one large pool, will PM be aware of snapshots?

I think it’s more a lack of clarity on what you’re trying to do. Chris (apparently) understood that you were wanting to run TrueNAS as a VM on Proxmox. From your last post, it sounds like these are two separate systems, and you want to use your NAS as storage for your PVE node/cluster. Is that correct?


I wasn’t sure either what your question was.
It seems that the whole issue is ZFS terminology. Zvols, like datasets, are logical storage; you can create as many as you want in a pool.
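For example (pool and names made up), both take one command and share the pool's free space:

```bash
zfs create tank/backups                 # a dataset (filesystem)
zfs create -V 100G tank/vm-100-disk-0   # a zvol (block device under /dev/zvol/)
zfs list -t volume                      # list all zvols in the pool
```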

Given the performance difference, these are going to be two pools: one for the VMs, which need the most performance, and one for the rest.

Unless you have a compelling reason to physically separate datastores, this could be two vdevs in the same pool, or rather one single 8-wide raidz2 vdev (and pool).
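As a sketch (disk names are placeholders), a single 8-wide raidz2 vdev would be:

```bash
# One pool, one 8-wide raidz2 vdev: survives any two disk failures
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh
```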

Yes, that is right. Sorry for the lack of clarity.
My previous setup was TrueNAS (data storage) and ESXi (VMs). The two were connected with direct links to share one pool over iSCSI from the TN side. I managed snapshots on the ESXi side (per VM).

Under my new setup, Proxmox is taking over the ESXi role to host VMs. My problem is handling snapshots. I'm not very sophisticated on this – I generally just use snapshots as a means of rollback as needed on a VM during updates/changes.

It seems I could follow two strategies here:

  1. One zvol per VM, and so on. Somewhat messy as VM numbers increase. It seems I can handle snapshots on the TN side without PM being aware?
  2. One larger zvol housing 1 to n VMs (using LVM on Proxmox). I'm not clear whether I can handle snapshots on the PM side (is that the correct strategy?). Nor am I clear whether, if I follow this strategy and snapshot on the TN side, PM will be aware, and so create problems. If I snapshot on the TN side, then all VMs housed in the zvol will be affected. (See the sketch after this list.)
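To make strategy 2 concrete, the iSCSI + LVM combination looks roughly like this on the Proxmox side (portal, IQN, and names are placeholders). The catch is that plain LVM storage in Proxmox does not support snapshots, which is exactly the trade-off being asked about:

```bash
# Add the TrueNAS iSCSI target to Proxmox (portal/IQN are examples)
pvesm add iscsi tn-iscsi --portal 192.168.10.2 \
    --target iqn.2005-10.org.freenas.ctl:pve --content none

# Initialise LVM on the LUN once it shows up as a block device
# (replace /dev/sdX with the actual iSCSI disk)
pvcreate /dev/sdX
vgcreate vg-pve /dev/sdX

# Register the volume group as (snapshot-less) LVM storage
pvesm add lvm tn-lvm --vgname vg-pve --content images --shared 1
```

For strategy 1, Proxmox also has a "ZFS over iSCSI" storage type that creates one zvol per VM disk and manages snapshots itself, but as far as I know its built-in providers (comstar, istgt, iet, LIO) don't include TrueNAS, so that route needs extra legwork.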

Sorry - naive questions, I'm sure. The main goal here is to handle snapshots in a simple way that won't create problems for the VMs.

Thanks