I want to share pools from my Truenas Core server to my new Proxmox server so I can run Plex on the Proxmox. My share dir structure is this
.
└── Pool/
    └── Media/
        ├── Documentaries
        ├── Television
        └── Movies
I have tried both NFS and SMB shares using both the fstab method and the Proxmox web interface.
I can successfully mount the Media share and can ls it to see the subdirectories shown above. But if I try to ls into any of the subdirectories, the shell hangs and eventually times out, or just hangs forever. This happens with both NFS and SMB.
I am at the limits of my knowledge and I am unable to move forward. I’m hoping someone could help me debug this issue or at least point me to where I might get some help.
The only part of your response that I feel I might be violating, if I understand you correctly, is that I am not sharing the Media dataset's children (Documentaries, Movies, Music, Television) separately.
But I also have issues accessing content within Storage and Transmission.
So I’m not sure that is my issue. For example, the Transmission and Storage directories are each their own dataset with their own NFS share, but I am having issues accessing data on either of them from Proxmox. My terminal hangs not only when I try to cd into any of the mounts; simply running ls on the /mnt directory will hang it too. I suspect this is a Proxmox issue, but I haven’t tested that yet. I have a Raspberry Pi I need to dig out to try mounting on.
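For what it's worth, when a mount hangs like this it can help to probe the server from the Proxmox host before touching the mountpoint itself. A rough sketch, where the IP address and mountpoint are placeholders for your setup:

```shell
# Does the server actually export what we think it does?
showmount -e 192.168.1.10

# Are the portmapper/mountd/nfs services reachable?
rpcinfo -p 192.168.1.10

# Wrap ls in a timeout so a stale mount can't hang the shell forever:
timeout 5 ls /mnt/media || echo "mount is hung or unreachable"
```

If `showmount` and `rpcinfo` respond but ls still hangs, the problem is usually on the client side (wrong NFS version, firewall dropping the data ports, or a stale mount that needs `umount -f -l`).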
I don’t have a diagnosis for the unexpected behaviour you’re seeing, but I wonder if you could simplify your setup a bit and potentially solve the issue. Perhaps something like this, on TrueNAS: make a single media dataset with no children; give full control over it to a new user/group called, let’s say, “plxserver”; make it available only to your Plex VM/LXC’s IP (I’m assuming they’re all on a local network and the firewall allows it); and set the Mapall user/group to “plxserver”.
On the Proxmox side: just mount your NFS share on the LXC/Linux VM manually. If it works, add it to /etc/fstab, noting the NFS version. Then you can make separate directories for your different media types.
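Concretely, the manual mount and a matching fstab line might look something like this. The IP address, export path, and mountpoint are placeholders; substitute your own, and pin whichever NFS version actually works for you:

```shell
# Manual test mount first:
mkdir -p /mnt/media
mount -t nfs -o vers=4 192.168.1.10:/mnt/Pool/media /mnt/media

# Check which NFS version and options were actually negotiated:
nfsstat -m

# If it works, persist it in /etc/fstab with the same version pinned:
# 192.168.1.10:/mnt/Pool/media  /mnt/media  nfs  vers=4,hard  0  0
```

Pinning the version in fstab matters because an automatic version negotiation that picks a different version than your manual test can reintroduce exactly the kind of hang described above.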
When I first read this I was like “no way” because moving all my data scares the shit out of me. But after doing more digging and experimenting, it does seem like having those child datasets is messing with the permissions. So I’m exploring how to safely migrate my media.
Gee, I wonder if this is a prime example for that newish copy method that creates, in essence, a hard link but works across datasets. Thus the data itself does not move; new directory entries are simply made pointing to the data.
Now I have to go look up that feature… well, here it is:
block_cloning
    GUID                    com.fudosecurity:block_cloning
    READ-ONLY COMPATIBLE    yes

    When this feature is enabled ZFS will use block cloning for operations like copy_file_range(2). Block cloning allows to create multiple references to a single block. It is much faster than copying the data (as the actual data is neither read nor written) and takes no additional space. Blocks can be cloned across datasets under some conditions (like equal recordsize, the same master encryption key, etc.). ZFS tries its best to clone across datasets including encrypted ones. This is limited for various (nontrivial) reasons depending on the OS and/or ZFS internals.

    This feature becomes active when first block is cloned. When the last cloned block is freed, it goes back to the enabled state.
As for how to use it, there is supposed to be a cp option for it; I just don’t know it well. This looks like it:
--reflink[=WHEN]
    control clone/CoW copies. See below
...
When --reflink[=always] is specified, perform a lightweight copy, where the data blocks are copied only when modified. If this is not possible the copy fails, or if --reflink=auto is specified, fall back to a standard copy. Use --reflink=never to ensure a standard copy is performed.
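In practice that option is safe to try, because --reflink=auto degrades gracefully: it attempts a lightweight clone and silently falls back to a normal copy on filesystems that don't support cloning. A small sketch, using throwaway files in /tmp as stand-ins for real media on a dataset:

```shell
# Create a small test file standing in for a media file:
dd if=/dev/urandom of=/tmp/movie.mkv bs=1M count=4 2>/dev/null

# Attempt a clone/CoW copy; fall back to a standard copy if the
# filesystem doesn't support it:
cp --reflink=auto /tmp/movie.mkv /tmp/movie-copy.mkv

# The copy is byte-identical either way:
cmp -s /tmp/movie.mkv /tmp/movie-copy.mkv && echo "copies match"
```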
That looks easy. Don’t ask me for further help; perhaps someone else can come up with a procedure.
Nothing special is needed with recent versions of cp (“Coreutils”), whether on FreeBSD or Linux. Just use cp without any special flags. (For that matter, --reflink will fail on ZFS, as it is meant to be used with filesystems that support it[1], such as XFS and Btrfs.)
Windows File Explorer, as well as modern Linux[2] distro file managers (Dolphin, Nautilus), support this by default over SMB, whenever you issue a copy operation on the network share.
ZFS doesn’t need “reflink” support, since it uses block-cloning to achieve the same (and technically “better”) result. ↩︎
This assumes you have the SMB shares mounted via the kernel cifs module, which is the recommended method. ↩︎
If you “copy” a 100-GiB file, and block-cloning is used, then no additional space will be consumed on the pool.
You’ll have “pointers” to the same actual blocks of data that comprise the original file and also the new file. Once finished, you can safely delete the old file. The new file will still be pointing to the same blocks of data.
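If you want to verify that a copy was actually cloned rather than duplicated, OpenZFS 2.2+ exposes pool-level block-cloning statistics. The pool name "tank" below is a placeholder:

```shell
# Space referenced by cloned blocks, space saved, and the ratio:
zpool get bcloneused,bclonesaved,bcloneratio tank
```

If bclonesaved stays at zero after your copy, the data was physically duplicated and you should double-check that the feature is active and the copy crossed no unsupported boundary (such as encrypted datasets).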
EDIT: There is one caveat. You cannot do “cross-dataset” block-cloning if your datasets are encrypted.
Do you have a link to the documentation that explains how to use block cloning? I can’t find anything on it in the docs. But maybe I’m looking in the wrong place.
No reason. I just don’t update often. All I need is a stable base to stream files. But that has changed since I am making changes. So updating will give me access to block cloning?
Not immediately. You’ll have to “upgrade” your pool in order to make that feature available.
Do understand that a “pool upgrade” will prevent you from importing the pool into an older system. This means that if you want to revert back to Core 13.0-U6.2, you will be unable to import the “upgraded” pool.
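For reference, checking and performing the upgrade from a shell looks roughly like this (pool name "tank" is a placeholder; the same thing can be done from the TrueNAS web UI):

```shell
# zpool status prints a note if feature upgrades are available:
zpool status tank

# Check the specific feature flag before and after:
zpool get feature@block_cloning tank

# Enable all features supported by the current ZFS version.
# WARNING: irreversible -- older systems can no longer import the pool.
zpool upgrade tank
```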
There are also two new bugs (regressions) in Core 13.3-U1:
- The boot-pool will scrub every day (no matter what you set).
- Your storage pools will ignore the “threshold days” setting[1], and thus will scrub every week (instead of once a month) by default.
These two bugs will supposedly be fixed in -U2.
Once you upgrade your pool, block-cloning should now be available, and it will work “out of the box” without any special tools needed.
If you’re using dataset encryption, then block-cloning is not supported across datasets. (Only within the same dataset.)
I completely disabled my Scrub Tasks, and will only manually scrub my storage pools until 13.3-U2 is released. ↩︎
My jails are all on FreeBSD 13.4, while TrueNAS Core 13.3 is based on FreeBSD 13.3. I’ve had no issues and am easily able to keep the packages in my jails updated.