Moving data between datasets

Hi everyone,

I have a large amount of data that I want to move between datasets in the same pool to reorganize. In the past I just copied and pasted the data from the SMB shares in Windows, but that is far too slow.

I tried using rsync -avP /source /destination, but it creates copies that are inaccessible through my SMB share, even though inspecting the directory with ls -ld shows the same permissions as other accessible folders in that directory.
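One thing worth checking: `ls -ld` only shows the POSIX mode bits, while SMB access on TrueNAS is usually governed by ACL entries that `ls` does not display. A quick sketch of how to see the difference (run in a temp directory here; on the NAS you would point it at the real `/mnt/poolname/...` paths):

```shell
# Demo in a temp dir; real paths would be under /mnt/poolname/...
d=$(mktemp -d)
touch "$d/file"

# ls -ld shows only the mode bits; a '+' after them hints at an ACL.
ls -ld "$d/file"

# getfacl (if installed) prints the full ACL, which is what SMB actually
# enforces and what a plain rsync -a fails to carry over.
if command -v getfacl >/dev/null 2>&1; then
  getfacl "$d/file"
fi
```

If the rsync-created copies are missing ACL entries that the accessible folders have, that would explain why the mode bits look identical but SMB still denies access.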

I tried using mv, but I don’t like that it doesn’t show a progress bar, and it froze in the middle of the move, which was annoying.

The dataset I’m moving from is encrypted and unlocked, and I’m moving it to an unencrypted dataset, not sure if this has anything to do with it.

If anyone can steer me in the correct direction, I’d appreciate it. If any additional detail is needed, lmk.

Just create a replication task to move the data between the datasets: Source Location > On this System, and Destination Location > On this System. I have never had to do this myself, but I think that would be the easiest way.

I would, but I’m trying to move data from specific subfolders of the source dataset, not the entire dataset, and that doesn’t work with a replication task; it only lets you select the full dataset. It also requires a snapshot.

Rsync will be your next option then.

As I replied above: I tried rsync -avP /source /destination, but it creates copies that are inaccessible through my SMB share, even though ls -ld shows the same permissions as other accessible folders in that directory.

Midnight Commander in shell?

Well, you did say moving data between datasets in your post and title, not that you wish to copy/move a directory of files within a dataset to some other directory. You can nest multiple datasets and sync those individually.

For rsync you can try:

rsync -avh /path/to/source/folder /path/to/destination/

Note that you must use full paths starting with /mnt/poolname/ in the command above, or it will not work. Like this:
rsync -avh /mnt/poolname/path/to/source/folder /mnt/poolname/path/to/destination/folder/


* `-a`: Archive mode (preserves permissions, times, etc.).
* `-v`: Verbose (shows files being copied).
* `-z`: Compress data during transfer (useful over a network; not needed locally).
* `-h`: Human-readable numbers.
* `-P`: Shows progress and keeps partially transferred files.

cp command:

You could also use cp (copy) from the command line. SSH into the server and start a tmux session, then run your copy command. It is much faster than mv (move), since across datasets mv also has to delete the source files after copying them. The tmux session keeps the copy running in the background on the server if the SSH session gets disconnected; you can reattach with tmux attach-session -t <session-name>.
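A minimal sketch of the tmux-plus-cp approach (the session name `bigcopy` and the paths are hypothetical; the copy itself is demonstrated on temp dirs so it can run anywhere):

```shell
# Start the copy in a detached tmux session so it survives an SSH drop:
#   tmux new-session -d -s bigcopy \
#     'cp -a /mnt/poolname/source/folder /mnt/poolname/destination/'
# Reattach later to check on it:
#   tmux attach-session -t bigcopy

# The copy itself, shown here on temp dirs:
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/file"
cp -a "$src/." "$dst/"   # -a preserves permissions, ownership, timestamps
cat "$dst/file"
```

Once the copy is verified, you can remove the source yourself, which is effectively what mv does in one step.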

Why use tmux? It is good for long-running sessions that may become disconnected. That might have happened when you used the Shell in the TrueNAS web UI, which will time out or be closed.

If either your source or destination is a ZFS dataset itself (not just a folder inside one), cp or mv might not work as expected; I think a ZFS replication, or an rsync into the dataset’s directory, would be required.

Full paths including the /mnt/poolname/ prefix must be used with most tools such as rsync, or they won’t work.

Thank you so much for the detailed reply! I will test this out today when I get back home.

Rsync won’t block-clone files, so it will be significantly less efficient than a reflink copy.
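For completeness: on OpenZFS 2.2+ with block cloning enabled, a same-pool copy with GNU cp can clone blocks instead of rewriting the data, whereas rsync always streams the file contents. A sketch (shown on temp dirs; `--reflink=auto` silently falls back to an ordinary copy on filesystems without cloning support):

```shell
# --reflink=auto clones blocks when the filesystem supports it
# (e.g. OpenZFS 2.2+ block cloning within the same pool) and falls
# back to a normal copy otherwise, so it is safe to use everywhere.
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/file"
cp --reflink=auto -a "$src/file" "$dst/file"
cat "$dst/file"
```

When the clone path is taken, the "copy" completes almost instantly and consumes no extra pool space until one side is modified.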