Oh, THAT’s why I didn’t update. I just didn’t want to bother. I’ll have to have a think on this. Moving 70TiB of data will take so damn long even with rsync; upgrading is probably the smart thing to do.
Gee, I missed that the original poster’s TrueNAS version did not support block cloning. That set off a firestorm of responses… oh well, it seems like a reasonable approach anyway after upgrading.
In theory, you can enable just the required feature:
zpool set feature@block_cloning=enabled Pool_01
During cloning, the feature will be “active”.
Once all the files have been “cloned” and the source directory entries removed, it may be possible to revert: the feature should read “enabled” again, in which case you may be able to set it back to “disabled”, thus preserving the ability to boot older TrueNAS versions.
I’m not saying for certain that will work. But it’s harmless to try: if it does not work, no data is lost; you just can’t boot older TrueNAS versions.
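If you want to see where things stand at any point, you can query the feature directly (using the pool name from this thread):
zpool get feature@block_cloning Pool_01
The VALUE column will read “disabled”, “enabled”, or “active”.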
I’d also use different (temporary) names for the folders. You don’t want child datasets to overlap their names with existing folders.
I’m not following, so you mean this feature might be available on my current Core version?
Whether you do a full pool “upgrade” or only enable a specific pool feature, the version of ZFS still needs to support it.
No.
If you choose this block cloning method, you will need to upgrade to a TrueNAS / ZFS version that supports block cloning.
What I am saying, is it may be possible after the data is moved, to go back to the version you are using now. (Note the word “POSSIBLE”… not CERTAINTY.)
Oh, I don’t think I’ll want to revert. I don’t have anything against going to 13.3. I just had no reason to upgrade until now.
OK, so I successfully upgraded. @Arwen & @winnielinnie, would you mind giving me a bit of guidance on how to enable and perform block cloning? I don’t need you to hold my hand, but a bit of guidance would help.
Disclaimer: This will not work if you are using encryption.
To enable it, do one of the following.
To only enable this one feature:
zpool set feature@block_cloning=enabled Pool_01
Alternatively, to upgrade the pool, which will enable all features:
zpool upgrade Pool_01
If you have no plans of going back, you might as well just upgrade the pool.
Performing it is simply a matter of using the cp command on the server, which will (should) automatically invoke block-cloning.[1] To use cp in “archive mode”, which acts recursively and preserves permissions and timestamps, use the -a flag.
Be careful not to overlap dataset names with folder names when you do this. It would be safer to create the new folders with “_new” at the end of the name; then, after the “copying” is finished, you can safely destroy the datasets[2] and rename the folders back to their desired names.
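For example, assuming the default mountpoint layout under /mnt/Pool_01 (adjust the paths to match your system), handling one dataset would look roughly like:
mkdir /mnt/Pool_01/Media/Movies_new
cp -a /mnt/Pool_01/Media/Movies/ /mnt/Pool_01/Media/Movies_new/
Note the trailing slash on the source: with FreeBSD’s cp, that means “copy the contents of the directory rather than the directory itself” (check cp(1) if in doubt).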
I would try a “test” first, with a very large file, to make sure that block-cloning is being used. After “copying” a very large file, you can check with this command:
zpool list -o name,bcloneused,bclonesaved,bcloneratio Pool_01
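For instance (the file name here is just a placeholder; any multi-gigabyte file already on the pool will do):
cp -a /mnt/Pool_01/Media/Movies/SomeLargeFile.mkv /mnt/Pool_01/Media/clone_test.mkv
If bcloneused and bclonesaved jump by roughly the size of that file when you re-run the command above, block-cloning kicked in; if they stay at zero, you got a normal copy.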
[2] I highly recommend you create a pool checkpoint right before this step: if you accidentally delete datasets that you realize you didn’t mean to, or something goes “wrong”, you won’t otherwise be able to rewind the pool back to its state before the deletions.
Apart from anything else, the “wheel” group does not exist in Proxmox. You need to make use of the “advanced” NFS share options in TN to get the maproot user and maproot group correct.
E.G.
root@pvevm2:~# getent group wheel
root@pvevm2:~# nfsstat -m
/mnt/pve/HPG8SCRATCH from 192.168.0.99:/mnt/Tpool/scratch
Flags: rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.111,local_lock=none,addr=192.168.0.99
root@pvevm2:~# ls -l /mnt/pve/HPG8SCRATCH
total 6508702
drwxr-xr-x 2 root root 7 Dec 7 10:59 dump
drwxr-xr-x 2 root root 2 Nov 3 2023 images
drwxr-xr-x 2 root root 2 Sep 20 2023 private
-rw------- 1 root root 21478375424 Dec 4 14:00 vol.qcow2
root@pvevm2:~#
TrueNAS CORE export:
TrueNASB# cat /etc/exports
V4: / -sec=sys
/mnt/Tpool/scratch -maproot="root":"wheel" -sec=sys -network 192.168.0.0/24
TrueNASB
It’s clear you’re not trying to access any NFS shares on PVE by adding them to “Storage”. Supposing you get the mount correct in PVE, are you intending to add this to /etc/fstab or create systemd mounts on PVE? If so, what next? How will Plex run on PVE, and how will it access these mounted TN shares?
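For reference, if the /etc/fstab route is where this is going, a minimal sketch of a line on PVE (the server address and export path are taken from the nfsstat output above; the /mnt/media mountpoint and the mount options are just placeholders):
192.168.0.99:/mnt/Tpool/scratch  /mnt/media  nfs4  defaults,hard  0  0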
I am learning the process. But my first step is to successfully mount my relevant Truenas drives to proxmox so that I can faithfully access them from that server. Then I can sort out the “how am I going to use this with Plex”.
Well, I guess it’s a choice of running Plex in either an LXC or a VM on PVE if you decide not to run Plex on your TN server. For an LXC you’d bind-mount storage into the container; with a VM you’d use NFS mounts within the VM itself.
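As a concrete illustration of the bind-mount route (container ID 101 and the in-container path /mnt/media are placeholders; the host path is the mount shown earlier):
pct set 101 -mp0 /mnt/pve/HPG8SCRATCH,mp=/mnt/media
For the VM route you’d simply mount the export inside the guest, e.g. mount -t nfs 192.168.0.99:/mnt/Tpool/scratch /mnt/media.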
It’s either file storage (NFS) or block storage (iSCSI) between TN and PVE over your 10GbE network. If you use iSCSI on TN, that turns into LVM storage on PVE, which is used the same way as local LVM storage. IIRC, iSCSI from TN CORE to PVE is temperamental, but SCALE is not.
In the case of an NFS share from TN to PVE, you’ll have to decide whether giving a single dataset to PVE and letting the Plex app create all the necessary directories/files on that single share is a better route to take.
My plan is to replicate my jails (Plex, Transmission, Sonarr) as containers in Proxmox. I’m assuming that’s what an LXC is? Sorry, but I’m not 100% on all the acronyms. I kind of assumed that these mounts in PVE would not work directly with those containers. But right now I am just trying to verify that I can get a solid mount of my TrueNAS drives. Once I am confident of that I can move forward.
I also do plan on having a VM as my main development environment. I would like the same drives to be mounted in that as well so I can do dev work with them.
OK so, are you sure cp works as you stated? It seems to act like a normal Linux copy command: it takes about the same amount of time and still leaves behind the original file. After digging into this I came across the zfs clone command, but that just clones the dataset to another location. I want to take my current child datasets under Media (Movies, Documentaries, Television) and make them regular directories under the Media dataset.
You’re now on Core 13.3 with OpenZFS version 2.2.4?
Check the ZFS version:
zfs --version
Check the pool feature:
zpool get feature@block_cloning Pool_01
As it should. Even though block-cloning is invoked (in the back end) when you issue cp -a, it will not remove the original file. (It better not! It’s cp after all, and not mv, which is destructive.)
The reason for using cp is to safely copy everything over to the new locations without consuming any extra space (since you said you don’t have enough spare room), because it will use block-cloning. Then later you can destroy the no-longer-needed datasets.
Don’t be fooled by the presence of the original files or the dashboard’s space usage reporting.
To see the true pool usage, use the command-line:
zpool list Pool_01
To see your savings from block-cloning:
zpool list -o name,bclonesaved,bcloneused Pool_01
Don’t forget this important step:
zfs-2.2.4-1
zfs-kmod-v2024052400-zfs_e4631d089
NAME PROPERTY VALUE SOURCE
Pool_01 feature@block_cloning enabled local
OK so the fact that I can see and interact with my test file in both locations is an indication it didn’t work properly?
How do I accomplish this? I’m not very familiar with this level of management on my Truenas server. For now I’m doing all this on temp datasets that I created so I become familiar with the process.
Enabled but not active… after you did a successful cp, but before you deleted the test copy?
It’s not indicative either way. ZFS blocks (and whether they are shared between multiple files) are not known or understood by anything outside of low-level ZFS.
One way to infer is to check with this a few seconds or so after issuing a cp on a large test file:
zpool list -o name,bclonesaved,bcloneused Pool_01
Before you undergo a risky operation:
zpool checkpoint Pool_01
After you’re done and have no need to rewind to a checkpoint:
zpool checkpoint -d Pool_01
I go into more detail in this thread.
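For completeness, rewinding to a checkpoint is done at import time, so it means briefly exporting the pool and importing it again (a sketch; on TrueNAS you’d normally do the export/import through the GUI, and rewinding discards everything written after the checkpoint):
zpool export Pool_01
zpool import --rewind-to-checkpoint Pool_01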
Honestly I don’t recall. I ran the zpool command earlier today and tried a test copy and it didn’t seem to go any faster. So when I finally got time this evening I did some searching and came across the zfs clone commands so I just tried those and I did get a clone. But it cloned my test Movies directory into ShareTest as its own dataset and the old file is still there. Although the dataset shows its size as empty.
I’ve been putting out fires at work all day and I need to do this with a clear head. So I’m going to blow away my test datasets and start fresh and try the cp command again tomorrow morning.
But if you could clarify one thing for me. With this block-clone process, will I be able to end up with this file structure?
From....
Pool_01
└── Media (dataset)
    ├── Documentaries (dataset)
    ├── Movies (dataset)
    └── Television (dataset)
To.....
Pool_01
└── Media (dataset)
    ├── Documentaries (directory)
    ├── Movies (directory)
    └── Television (directory)
EDIT:
I also do realize that I will need to make new directories with different names. I just used the same names for illustration purposes.
Yes.
Take your time.
CREATE POOL CHECKPOINT
1...
Pool_01
└── Media (dataset)
    ├── Documentaries (dataset)
    ├── Movies (dataset)
    └── Television (dataset)
CREATE FOLDERS WITH "_new"
2...
Pool_01
└── Media (dataset)
    ├── Documentaries (dataset)
    ├── Documentaries_new (empty folder)
    ├── Movies (dataset)
    ├── Movies_new (empty folder)
    ├── Television (dataset)
    └── Television_new (empty folder)
COPY EVERYTHING FROM EACH DATASET TO EACH "_new" FOLDER
3...
Pool_01
└── Media (dataset)
    ├── Documentaries (dataset)
    ├── Documentaries_new (folder with all data)
    ├── Movies (dataset)
    ├── Movies_new (folder with all data)
    ├── Television (dataset)
    └── Television_new (folder with all data)
CONFIRM SAFE COPY OPERATIONS, THEN DELETE OLD DATASETS
4...
Pool_01
└── Media (dataset)
    ├── Documentaries_new (folder with all data)
    ├── Movies_new (folder with all data)
    └── Television_new (folder with all data)
RENAME FOLDERS TO REMOVE "_new" AND DISCARD CHECKPOINT
Finally...
Pool_01
└── Media (dataset)
    ├── Documentaries (folder with all data)
    ├── Movies (folder with all data)
    └── Television (folder with all data)
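In terms of actual commands, the whole sequence would look roughly like this (dataset names are from the example above; the /mnt/Pool_01/... paths assume default mountpoints, and on FreeBSD’s cp a trailing slash on the source copies the directory’s contents rather than the directory itself, so adjust to your system):
zpool checkpoint Pool_01
mkdir /mnt/Pool_01/Media/Documentaries_new /mnt/Pool_01/Media/Movies_new /mnt/Pool_01/Media/Television_new
cp -a /mnt/Pool_01/Media/Documentaries/ /mnt/Pool_01/Media/Documentaries_new/
cp -a /mnt/Pool_01/Media/Movies/ /mnt/Pool_01/Media/Movies_new/
cp -a /mnt/Pool_01/Media/Television/ /mnt/Pool_01/Media/Television_new/
Then, only after verifying the copies:
zfs destroy Pool_01/Media/Documentaries
zfs destroy Pool_01/Media/Movies
zfs destroy Pool_01/Media/Television
mv /mnt/Pool_01/Media/Documentaries_new /mnt/Pool_01/Media/Documentaries
mv /mnt/Pool_01/Media/Movies_new /mnt/Pool_01/Media/Movies
mv /mnt/Pool_01/Media/Television_new /mnt/Pool_01/Media/Television
zpool checkpoint -d Pool_01
If destroying a dataset leaves an empty directory behind at its old mountpoint, remove it with rmdir before the corresponding rename.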