Issues with mounting shares from Truenas Core to Proxmox server

I want to share pools from my TrueNAS Core server to my new Proxmox server so I can run Plex on Proxmox. My share directory structure is this:

.
└── Pool/
    └── Media/
        ├── Documentaries
        ├── Television
        └── Movies

I have tried both NFS and SMB shares using both the fstab method and the Proxmox web interface.

I can successfully mount the Media share. I can cd into it and list the subdirectories shown above. But if I try to ls inside any of the subdirectories, the shell hangs and eventually times out…or just hangs forever. This happens with both NFS and SMB.
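For reference, my fstab entries look roughly like this (the IP is a placeholder for my TrueNAS box, and the SMB credentials file just holds the username/password):

    # NFS attempt
    192.168.1.10:/mnt/Pool/Media  /mnt/pve/Media  nfs  defaults  0  0
    # SMB attempt
    //192.168.1.10/Media  /mnt/pve/media  cifs  credentials=/root/.smbcredentials  0  0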

I am at the limits of my knowledge and I am unable to move forward. I’m hoping someone could help me debug this issue or at least point me to where I might get some help.

Thanks

  • TrueNAS Core 13.0-U6.2
  • w/ Supermicro AOC-STG-I2T
  • Proxmox Virtual Environment 8.3.1
  • w/ Intel Ethernet Converged X710-DA2 Network Adapter
  • MikroTik CRS305-1G-4S+IN Network Switch

I think I may have some permissions issues that I can’t quite figure out how to resolve.

I can access these directories on Proxmox:

james@manwe:/mnt/pve$ ls -l
total 179
d---rwx--- 55 root wheel   56 Dec 24 10:59 Docs
drwxr-xr-x  2 root root  4096 Dec 23 22:36 media
drwxr-xr-x  2 root root  4096 Dec 22 16:02 Media
drwxr-xr-x  2 root root  4096 Dec 22 14:10 Storage
drwxrwxr-x  5 root wheel    5 Dec 24 11:02 Transmission

But these directories I cannot access:

EDIT: I can cd into “_saved”, for example, but I cannot ls the contents of that directory.

james@manwe:/mnt/pve/Transmission$ ls -l
total 91
d---rwx--- 8 root wheel 20 Dec 24 04:35 _in-progress
d---rwx--- 7 root wheel  9 Dec  7 22:04 _saved
d---rwx--- 3 root wheel  3 Dec 24 11:02 template

When using NFS, you can’t cross file system boundaries in the share.

This is a complicated way to say that each dataset needs to be shared and mounted individually.

I don’t quite understand what you mean by that.

For clarity, here are my datasets. Notice that Media has its own child datasets.

And here are my NFS share configs.

The only part of your advice that I might be violating…if I understand you correctly…is that I am not sharing the Media dataset’s children (Documentaries, Movies, Music, Television) separately.

But I also have issues accessing content within Storage and Transmission.

Yes.

You need to share those separately. Accessing the share of the parent file system does not give you access to child file systems.

Worse, you need to mount them separately on the client. This is standard NFS behavior.
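In fstab terms, that means one entry per dataset, something like this (server IP and paths are placeholders):

    # parent and each child dataset exported on TrueNAS and mounted separately here
    192.168.1.10:/mnt/Pool/Media                /mnt/pve/Media          nfs  defaults  0  0
    192.168.1.10:/mnt/Pool/Media/Documentaries  /mnt/pve/Documentaries  nfs  defaults  0  0
    192.168.1.10:/mnt/Pool/Media/Television     /mnt/pve/Television     nfs  defaults  0  0
    192.168.1.10:/mnt/Pool/Media/Movies         /mnt/pve/Movies         nfs  defaults  0  0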


So I’m not sure that is my issue. For example, both the Transmission and Storage directories are their own datasets and have their own NFS shares, but I am having issues accessing data on either of them from Proxmox. My terminal hangs if I try to even cd into any of the mounts; simply running ls on the /mnt directory will cause the terminal to hang. I suspect this is a Proxmox issue, but I haven’t tested that yet. I have a Raspberry Pi I need to dig out to try mounting on.
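Before I dig it out, these are the checks I plan to run from the Proxmox host (standard tools; <truenas-ip> is a placeholder):

    showmount -e <truenas-ip>   # list what the server is actually exporting
    rpcinfo -p <truenas-ip>     # confirm mountd/nfsd are answering at all
    nfsstat -m                  # show the options each current NFS mount negotiated
    dmesg | tail                # look for “nfs: server ... not responding” messages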

I don’t have a diagnosis for the unexpected behaviour you’re seeing, but I wonder if you could simplify your setup a bit and potentially solve the issue. Perhaps something like this. On TrueNAS: make a single media dataset with no children, give full control of it to a new user/group called, let’s say, “plxserver”, and make it available only to your Plex VM/LXC’s IP (I’m assuming they’re all on a local network and the firewall allows it). Set the Mapall user/group to “plxserver”.

On the Proxmox side: just mount your NFS share on the LXC/Linux VM manually. If it works, add it to /etc/fstab, noting the NFS version. Then you can make separate directories for your different media types.
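A minimal version of that manual test might look like this (the IP, export path, and NFS version are placeholders to adjust):

    # try the mount by hand first
    mount -t nfs -o vers=4 192.168.1.10:/mnt/Pool_001/media /mnt/media
    # if it behaves, the matching fstab line would be:
    # 192.168.1.10:/mnt/Pool_001/media  /mnt/media  nfs  vers=4  0  0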

When I first read this I was like “no way” because moving all my data scares the shit out of me. But after doing more digging and experimenting, it does seem like having those child datasets is messing with the permissions. So I’m exploring how to safely migrate my media.

Gee, I wonder if this is a prime example of using that newish copy method that creates, in essence, a hard link, but works across datasets. Thus, the data itself does not move; just new directory entries are made pointing to the data.

Now I have to go look up that feature… well, here it is:

       block_cloning
               GUID                  com.fudosecurity:block_cloning
               READ-ONLY COMPATIBLE  yes

               When this feature is enabled ZFS will use block cloning for operations
               like copy_file_range(2). Block cloning allows to create multiple
               references to a single block. It is much faster than copying the data
               (as the actual data is neither read nor written) and takes no additional
               space. Blocks can be cloned across datasets under some conditions (like
               equal recordsize, the same master encryption key, etc.). ZFS tries its
               best to clone across datasets including encrypted ones. This is limited
               for various (nontrivial) reasons depending on the OS and/or ZFS
               internals.

               This feature becomes active when first block is cloned. When the last
               cloned block is freed, it goes back to the enabled state.

As for how to use it, there is supposed to be a “cp” option to use it; I just don’t know it well. This looks like it:

       --reflink[=WHEN]
              control clone/CoW copies. See below
...
       When --reflink[=always] is specified, perform a lightweight copy, where the
       data blocks are copied only when modified. If this is not possible the copy
       fails, or if --reflink=auto is specified, fall back to a standard copy. Use
       --reflink=never to ensure a standard copy is performed.

That looks easy. Don’t ask me for further help; perhaps someone else can come up with a procedure.

Nothing special is needed with recent versions of cp (“Coreutils”), whether on FreeBSD or Linux. Just use cp without any special flags. (For that matter, --reflink will fail on ZFS, as it is meant to be used with filesystems that support it[1], such as XFS and Btrfs.)
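In other words, something as plain as this on the TrueNAS host should do it (paths are hypothetical):

    # block cloning happens transparently when the pool supports it
    cp /mnt/Pool_001/Media/Movies/file.mkv /mnt/Pool_001/Media/movies/file.mkv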

Windows File Explorer, as well as modern Linux[2] distro file managers (Dolphin, Nautilus), support this by default over SMB, whenever you issue a copy operation on the network share.


  1. ZFS doesn’t need “reflink” support, since it uses block-cloning to achieve the same (and technically “better”) result. ↩︎

  2. This assumes you have the SMB shares mounted via the kernel cifs module, which is the recommended method. ↩︎

I don’t really want a copy though. I need to move files…a lot of files. So I don’t have enough space to do a copy from the source.

So for example:

└── Pool_001/
    └── Media (dataset)/
        ├── Documentaries (dataset)
        ├── Television (dataset)
        └── Movies (dataset)

Should become:

└── Pool_001/
    └── Media (dataset)/
        ├── documentaries (directory)
        ├── television (directory)
        └── movies (directory)

Block-cloning doesn’t use up any extra space.

If you “copy” a 100-GiB file, and block-cloning is used, then no additional space will be consumed on the pool.

You’ll have “pointers” to the same actual blocks of data that comprise the original file and also the new file. Once finished, you can safely delete the old file. The new file will still be pointing to the same blocks of data.
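As a sketch, migrating one child dataset could then look like this (names are hypothetical; verify the copy before destroying anything):

    # on the TrueNAS host, with the block_cloning feature active on the pool
    mkdir /mnt/Pool_001/Media/movies
    cp -a /mnt/Pool_001/Media/Movies/. /mnt/Pool_001/Media/movies/
    # check the new copy, then remove the now-redundant child dataset
    zfs destroy Pool_001/Media/Movies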

EDIT: There is one caveat. You cannot do “cross-dataset” block-cloning if your datasets are encrypted.

Never mind. This version of ZFS does not support block-cloning.

I joined this thread late.

Do you have a link to the documentation that explains how to use block cloning? I can’t find anything on it in the docs. But maybe I’m looking in the wrong place.

Nards

Is there a reason you cannot (or do not want to) upgrade to Core 13.3-U1?

No reason. I just don’t update often. All I need is a stable base to stream files. But that has changed since I am making changes. So updating will give me access to block cloning?

Not immediately. You’ll have to “upgrade” your pool in order to make that feature available.

:warning: Do understand that a “pool upgrade” will prevent you from importing the pool into an older system. This means that if you want to revert back to Core 13.0-U6.2, you will be unable to import the “upgraded” pool.
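From a shell, the upgrade itself is short (pool name is a placeholder; the GUI should offer an equivalent “Upgrade” action on the pool):

    zpool upgrade            # list pools with features not yet enabled
    zpool upgrade Pool_001   # enable all supported features (one-way; see warning above)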


There are also two new bugs (regressions) in Core 13.3-U1:

  1. The boot-pool will scrub every day (no matter what you set)
  2. Your storage pools will ignore the “threshold days” setting[1], and thus will scrub every week (instead of once a month) by default

These two bugs will supposedly be fixed in -U2.


Once you upgrade your pool, block-cloning should now be available, and it will work “out of the box” without any special tools needed.

:information_source: If you’re using dataset encryption, then block-cloning is not supported across datasets. (Only within the same dataset.)
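To confirm, you can check the feature’s state; it reads “enabled” once available and flips to “active” after the first clone (pool name is a placeholder):

    zpool get feature@block_cloning Pool_001
    # newer OpenZFS also reports how much space cloning is saving:
    zpool list -o name,bcloneused,bclonesaved Pool_001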


  1. I completely disabled my Scrub Tasks, and will only manually scrub my storage pools until 13.3-U2 is released. ↩︎

How do I even upgrade to 13.3? I don’t see that option in the Dashboard. Only the latest version of 13.0 and the Scale upgrades.

Also, will upgrading to 13.3 break my jails?

My jails are all on FreeBSD 13.4, while TrueNAS Core 13.3 is based on FreeBSD 13.3. I’ve had no issues and am easily able to keep the packages in my jails updated.

Unless, that is, you’re using “Plugins”?


You have to download the “Manual Update” file.