SOLVED: Can't add USB-connected HD's ZFS dataset or pool as share for SMB or NFS

I have a ZFS pool, kyoudaiHD, that is attached to my TrueNAS via direct USB passthrough. It shows up as expected. However, any attempt to add a dataset or the pool as an SMB or NFS share fails with the error at the bottom.

Both the GUI and the middleware (midclt) list the device correctly. I can see the device under Storage, and the pool and its datasets appear under Datasets in the GUI. The pool is mounted at /mnt/giant; plain ls commands show its contents, and zfs and zpool return the expected results.
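
For reference, here is roughly what those checks look like from the shell (output abbreviated; the mountpoint value shown is the /mnt/giant from my setup):

    # Pool is imported and healthy
    zpool status kyoudaiHD

    # Where is the pool mounted?
    zfs get mountpoint kyoudaiHD
    # NAME       PROPERTY    VALUE       SOURCE
    # kyoudaiHD  mountpoint  /mnt/giant  local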

KyoudaiHD is a pool on a USB-based 4-HDD bay configured as a ZFS mirror (RAID1) vdev. It works on a variety of machines, and the TrueNAS SCALE 24.10.1 guest (running on Proxmox 8.3.2 on a Dell PowerEdge 640) can see it.

I passed through the USB ATA-to-ATAPI bridge and the Mass Storage Function from the Proxmox host to the TrueNAS guest. I don’t think the passthrough itself is the issue, since everything else works, but maybe there is something about the way a USB-based ZFS pool gets passed through that prevents TrueNAS from accepting it as a mount point? Or maybe the issue is simply that it is USB-based, not that it is passed through?
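
For completeness, the passthrough was the standard Proxmox USB device passthrough, along these lines (the VM ID and vendor:product IDs below are placeholders for my actual values):

    # On the Proxmox host: find the enclosure's USB vendor:product IDs
    lsusb

    # Attach the device to the TrueNAS guest (VM 100 and 1234:5678 are placeholders)
    qm set 100 -usb0 host=1234:5678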

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 211, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1529, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1460, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 230, in create
    return await self.middleware._call(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1460, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 261, in nf
    rv = await func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/nfs.py", line 516, in do_create
    await self.validate(data, "sharingnfs_create", verrors)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/nfs.py", line 638, in validate
    verrors.check()
  File "/usr/lib/python3/dist-packages/middlewared/service_exception.py", line 72, in check
    raise self
middlewared.service_exception.ValidationErrors: [EINVAL] sharingnfs_create.path: The path must reside within a pool mount point


Not maybe, it 100% is the problem. Your build and setup are a blueprint for how NOT to set up TrueNAS. About the only thing worse would be if the Proxmox host were some cheap aging laptop that you were trying to repurpose. I hope you don’t have any valuable data on this setup, as it is destined for failure, as you are witnessing.

If you insist on running TrueNAS in a VM, then get a proper HBA to hook those drives to and get them out of that USB enclosure. Also make sure that you have properly passed the HBA controller through to the TrueNAS VM and that nothing else has access to it.
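
If you go the HBA route, PCI passthrough on the Proxmox side looks something like this (the VM ID and PCI address are placeholders; IOMMU must also be enabled in the BIOS and on the kernel command line):

    # On the Proxmox host: find the HBA's PCI address
    lspci | grep -i sas

    # Pass the entire controller through to the TrueNAS guest (VM 100 here)
    qm set 100 -hostpci0 0000:03:00.0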

2 Likes

Two things stand out to me: you say your pool is called kyoudaiHD, and then you say it’s mounted at /mnt/giant, a path that isn’t /mnt/kyoudaiHD.

That may be a problem, as TrueNAS expects things to be done in a certain way. Using the GUI will get you there; you appear to have done some finagling in the shell to achieve whatever you have here.

And then there’s the USB enclosure and Proxmox bits I’m not even going to bother going into…

1 Like

Changing the pool’s mountpoint to /mnt/kyoudaiHD solved the problem. As neofusion pointed out, TrueNAS expects the mount point under /mnt to match the pool name, and if it doesn’t, it throws the above error.
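
For anyone who hits the same error, the change itself is a one-liner from the TrueNAS shell (this is the direct ZFS route; substitute your own pool name):

    # Point the pool's mount point at /mnt/<poolname>
    zfs set mountpoint=/mnt/kyoudaiHD kyoudaiHD

    # Verify
    zfs get mountpoint kyoudaiHD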

This is a home lab test environment for me. I have some docs and pics on the share, but I also have a cloud backup and another physical HD backup of them.

I appreciate the warnings about the risk of a USB-connected ZFS drive. My initial googling on the topic of Proxmox and ZFS didn’t turn up anything; I have since read more on the subject.

When I get the chance to upgrade, I will look at doing so.

The posts about not using TrueNAS as a guest in Proxmox frankly caught me by surprise. When I googled TrueNAS and Proxmox, everything I found was positive.

I’ll see how I fare with my setup and adjust accordingly. This is a learning experience: a functional box is a real goal, but if I learn a few things on the way, great, and if I make a few mistakes, so be it.

Anyway, thanks for solving this issue.

Obitori

1 Like

I think you’re misreading the replies; none of them says not to use TrueNAS as a guest OS under Proxmox. But when running TrueNAS as a VM under any hypervisor, there are some specific requirements, most notably:

  • Make sure TrueNAS has direct access to its storage devices; this means passing through the storage controller to the guest OS, and
  • Make sure nothing else has access to those devices. This is the one that’s specifically relevant to Proxmox, as ESXi and Xen/XCP-ng aren’t ZFS-aware. But Proxmox is, and it can try to do things with your storage pool if you don’t make it ignore your storage controller; a quick sanity check follows this list.
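
A rough way to verify that second point on the Proxmox host (commands are illustrative, not a complete checklist):

    # The data pool should NOT have a zfspool entry in Proxmox's storage config...
    grep -B1 -A3 zfspool /etc/pve/storage.cfg

    # ...and should NOT show up as imported on the host while the TrueNAS VM owns it
    zpool list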

AIUI, if you take those two precautions, TrueNAS is quite safe under Proxmox. ESXi is still a more mature platform, though considerably less desirable now that a free license is no longer available.

3 Likes

You are right. I misread. Thanks for clarifying and the additional advice.