TrueNAS SCALE on Proxmox - boot-pool is accessible by the host

Hi there,
I successfully installed TrueNAS SCALE in a Proxmox VM (PVE 8.2.7, kernel 6.8.12-3). My VM storage is located on a ZFS pool I created with Proxmox.

Now the issue is that every ZFS pool I create with TrueNAS on this or any other ZFS-backed virtual disk managed by the host is also accessible by the host: zpool import lists all of those pools, including the boot-pool of my TrueNAS VM.

I already deactivated zfs-import-scan.service as proposed in this thread. Nevertheless, the boot-pool is still being seen - and it is importable, not flagged as “in use by another system”.
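
For reference, disabling the scan service looks like this on the PVE host, and (as far as I understand it) the cachefile-based import is the other path worth checking - “rpool” below is just the default Proxmox pool name, adjust as needed:

# stop the service that imports pools found by scanning block devices at boot
systemctl disable --now zfs-import-scan.service

# pools listed in /etc/zfs/zpool.cache are still imported by zfs-import-cache.service,
# so no TrueNAS pool should ever end up in that cachefile
systemctl status zfs-import-cache.service
zpool get cachefile rpool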

Fortunately, there haven’t been any conflicts so far. I’d like to make sure, though, that no unintentional import of my boot-pool ever happens.

Does anybody have a clue why these zpools are so easily accessible to the Proxmox host, in contrast to “normal” virtual disks? I hope there is no conceptual weakness at stake here, as I have read several other reports of Proxmox accidentally importing ZFS pools… or maybe this is just the way it works and I am being over-cautious.

Please let me know :slight_smile:
Thanks!

You MUST isolate the device that controls the disks so that Proxmox cannot see it at all.

You clearly haven’t done that


To expand on this: ZFS is not a clustered filesystem. Accessing the same pool from two hosts simultaneously can result in corrupted pools and data loss.


Hi NugentS,
sorry, I should have mentioned that the storage pool lives on an isolated device: an HBA with PCIe passthrough and driver blacklisting. This all works fine, and neither the disks nor the pools created on them with TrueNAS can be accessed by the PVE host.
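
(Just as a sketch of what the isolation looks like on the PVE host - the PCI ID, PCI address and driver below are examples for an LSI SAS2308 HBA and will differ for other cards, and the file name is arbitrary:)

# /etc/modprobe.d/hba-passthrough.conf
# claim the HBA with vfio-pci before the native driver can bind it
options vfio-pci ids=1000:0087
softdep mpt3sas pre: vfio-pci
# or simply keep the host driver from loading at all
blacklist mpt3sas

# afterwards: rebuild the initramfs and hand the device to the VM
update-initramfs -u
qm set <vmid> -hostpci0 0000:01:00.0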

The boot-pool of the TrueNAS VM, though, resides on the common VM storage, which is ZFS-based and was created in the PVE host GUI. When a VM is created with a disk on this storage, Proxmox creates a zvol for it, which shows up on the host like this:

# lsblk
zd608                           230:608  0     5G  0 disk 
└─zd608p1                       230:609  0     5G  0 part 
zd624                           230:624  0  16.5G  0 disk 
zd640                           230:640  0    75G  0 disk 
├─zd640p1                       230:641  0   100M  0 part 
├─zd640p2                       230:642  0    16M  0 part 
├─zd640p3                       230:643  0  71.8G  0 part 
└─zd640p4                       230:644  0   3.1G  0 part 
zd656                           230:656  0    75G  0 disk 
├─zd656p1                       230:657  0   487M  0 part 
├─zd656p2                       230:658  0     1K  0 part 
└─zd656p5   

Now, if I create a zpool with TrueNAS on one of those disks, it can be seen by the host as well. Most of them are flagged as “in use by another system”, but the boot-pool isn’t …
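
(One extra safety net I’m considering, purely as a sketch: as far as I know the “in use by another system” warning is based on the hostid stored in the pool, and the multihost property makes an import from a foreign host fail outright while the pool is active. I’m not sure how well TrueNAS tolerates changing this on its boot-pool, so treat this as an assumption:)

# inside the TrueNAS VM shell
zgenhostid                        # only needed if /etc/hostid does not exist yet
zpool set multihost=on boot-pool  # MMP: refuses import while another host has the pool active
zpool get multihost boot-pool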


You need to isolate the boot-pool as well - otherwise you risk corrupting it too.

It’s perhaps slightly less important, as a rebuild is quick and simple (as long as you have a copy of the config).

How would you manage to do that with my current setup?

The only storage I can properly isolate is the HBA - but then I can’t use it for VM storage, of course …

sed s/can/will/g

@dustmaster The only way around this would be to use a different type of storage for your boot-pool device such as a QCOW2 on LVM or other non-ZFS solution. Proxmox speaks ZFS, TrueNAS speaks ZFS - if they can see each other’s storage, it can result in problems.
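
Roughly, and only as a sketch - storage name, mount point and VMID below are placeholders, and this assumes a non-ZFS filesystem is already mounted at /mnt/vmstore:

# register a directory storage on the non-ZFS filesystem
pvesm add dir vmstore --path /mnt/vmstore --content images

# allocate a qcow2 disk there and attach it to the TrueNAS VM (VMID 100)
pvesm alloc vmstore 100 vm-100-disk-1.qcow2 32G --format qcow2
qm set 100 --scsi1 vmstore:100/vm-100-disk-1.qcow2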

@HoneyBadger Unfortunately, that’s possibly the only safe way to go…

Indeed, it seems to come down to the fact that VM disks on a ZFS host storage can only be stored in raw format and are therefore accessible … Proxmox just creates a new zvol on the host pool for each virtual disk and fills it 1:1 with the guest’s content, so the host kernel also sees the guest’s partition table and any ZFS labels written inside.

I just find it a bit weird that use cases with ZFS pools created inside VMs seem to be that rare. I had the impression that many people use ZFS as the storage option on the host, so this should be a rather common issue …

I have given up on Proxmox with ZFS: no snapshot or replication management from the UI, and then exactly those idiosyncrasies you observe …

I run Proxmox with XFS for VM images that must be local - like TrueNAS - and use the built-in VM backup tools for scheduled backups to ZFS on TrueNAS. Additional ZFS-based storage is also on TrueNAS.

Just like I had done with ESXi and VMFS.

Much easier to handle and more transparent for me.


@pmh Interesting opinion … but what exactly do you mean by “no snapshot or replication management from the UI”? I am using snapshots and the normal PBS backup routine. No replication yet, but that is also visible in the UI …

Just to get this right - you run one of the TrueNAS instances on the PVE host (with a passed-through HBA?) and the other on a remote machine?

Maybe this has changed, but I could not find ZFS-based backup in the UI, while the general support for XFS and LVM is much better.

Yes, one TrueNAS boots from a virtual disk image while its storage pool is two NVMe SSDs in passthrough. The network interfaces are also passed-through PCIe devices. The board has four, so there is one LAGG with VLANs for Proxmox and one for TrueNAS.

Alright … I checked it out: I created some disks on an ext4 volume and created pools on them. One is raw, the other qcow2.

Neither is visible to the host or to zpool import. So this seems to be the way to go, which is a bit of a pain because I’ll have to get two new drives for a RAID1 just for that … while flash memory prices are sky-high.
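
(The check on the host side is simple - since the qcow2/raw files live on a plain filesystem, they are not exposed as block devices on the host, so there is nothing for the import scan to find:)

# on the PVE host
zpool import        # the test pools inside those disk files do not show up
lsblk               # and no additional zd* devices appear for them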

I’d recommend XFS over Ext4. Waitaminute … here:

It still is the filesystem with the best scaling behavior, the best concurrency behavior, and the most consistent commit times, which makes it the preferred filesystem for any kind of database usage. This is due to the elimination of several global locks that impair concurrent usage and performance in large filesystems, and due to the consistent use of B+-tree structures with O(log(n)) scaling behavior where before algorithms with worse scaling behavior have been used. The use of extents also allows dynamically growing I/O sizes, benefiting throughput, and together with the novel idea of delayed allocation encourages contiguous file placement.

And since, specifically on this platform, we are well aware that VM virtual disks place many of the same demands on the underlying block storage as databases do, I think Kristian’s arguments hold for VM storage, too.

So for local storage in a Linux/KVM hypervisor I have come to the conclusion to use XFS.
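
Setting that up is quick - just a sketch, device name and mount point are examples:

# create and mount an XFS filesystem for local VM images
mkfs.xfs /dev/sdb1
mkdir -p /mnt/vm-xfs
mount /dev/sdb1 /mnt/vm-xfs
echo '/dev/sdb1 /mnt/vm-xfs xfs defaults 0 2' >> /etc/fstab

# make it available to Proxmox as a directory storage
pvesm add dir local-xfs --path /mnt/vm-xfs --content images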

HTH,
Patrick


Part 3 of what is now a trilogy in five articles. I like that…

Reading notes
#2. 1984. Motorola 68k, SUN SPARC, SGI MIPS. I’ve used these three, and still own two. Am I that old? Yep…

Really nice reading tip, appreciate that. Not exactly light reading, though :grin:

With all respect for your choice, I might stay with ext4, though. Since I don’t have any special performance demands for this boot-pool, I might just go for convenience …