Yep, it has been removed from SCALE. If I remember correctly it is still present in Core, but it is definitely not maintained.
It will be a long trip, but there are not many options. Maybe you can use software like FreeFileSync to stop/resume/manage the transfer and split it up.
[ 1.952998] hub 2-0:1.0: USB hub found
[ 2.288823] usb 2-1: new SuperSpeed USB device number 2 using xhci_hcd
[ 2.309996] usb 2-1: New USB device found, idVendor=0bc2, idProduct=2038, bcdDevice=18.02
[ 2.310011] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 2.310020] usb 2-1: Product: Expansion HDD
[ 2.310025] usb 2-1: Manufacturer: Seagate
[ 2.310030] usb 2-1: SerialNumber: 00000000NT17DEBP
Does anyone remember why the Import Disk “feature” was removed?
If I recall correctly, there is an ACLs / permissions issue with the copied data. Thus, a normal SMB / Samba share may not be able to access the data. Or worse, everyone would be able to access / delete the data. Basically, copying the data from inside TrueNAS does not give it the ACLs / permissions that it would normally get from a copy over a share.
Thanks for your remark. If I’m not mistaken, that problem occurs when you try to rsync the permissions. I hope it doesn’t apply in my case, as I only used the -rt flags. After reading your remark, and just to make sure, I created tar files from the restored folder and I’m now copying those instead. I hope extracting the tar files will ensure all files pass “deblockified” through the system.
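For anyone wanting to try the same thing, a minimal sketch of that tar round-trip (the directories and file names here are made-up scratch paths, not the actual datasets):

```shell
# Scratch directories standing in for the real source and destination
src=$(mktemp -d)      # hypothetical: the restored folder
dst=$(mktemp -d)      # hypothetical: the destination dataset mountpoint
archive=$(mktemp)     # the tar file that travels between them

mkdir -p "$src/photos"
echo "example" > "$src/photos/img001.txt"

# Pack the source: -c create, -f archive file, -C change into the dir first
tar -cf "$archive" -C "$src" .

# ...move the archive to the NAS by whatever means (USB, SMB, scp)...

# Unpack at the destination: the files are written as fresh objects, so
# they pick up the destination dataset's ownership and inherited ACLs
# instead of dragging the source's permissions along
tar -xf "$archive" -C "$dst"
```

The point of the detour through tar is exactly that last comment: extraction is a normal local write, so nothing rsync-specific about ownership or mode bits survives the trip.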
It’s a pity TrueNAS doesn’t provide “official” guidance on how to do this as safely as possible. The question pops up quite often here and on Reddit.
Just for others with the same issue: in the end, I ended up doing rsync over the network. I had a spare old NUC, installed Ubuntu Server on it, increased the MTU size, and let it run for 8 days.
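For reference, that setup boils down to something like the following sketch. The interface name, hostname, and paths are placeholders, so adapt them before running, and only raise the MTU if every device on the path supports jumbo frames:

```shell
# On the NUC (and on the switch/NAS side): enable jumbo frames
ip link set dev eth0 mtu 9000        # "eth0" is a placeholder interface

# Pull the data over SSH; -r recursive, -t preserve modification times
# (the same flags as the USB copy, but now over the network)
rsync -rt --partial --info=progress2 \
    user@old-nas:/mnt/source/ /mnt/tank/dataset/
```

The --partial flag lets an interrupted run resume without re-sending files that were already completely transferred.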
I had nothing but issues with the files and folders that were rsynced over USB.
Following the official guidance is still the smartest thing to do, in the end. I should have known better.
A follow-up question. After doing rsync over the network, all the issues disappeared. I’m now faced with the task of restructuring the data and moving it to other datasets. When I looked up how to do this, people recommended tar + cp, mv, and mc from a console.
Is it safe to do this if I use shares and not /mnt/…?
Using the block cloning feature of ZFS should allow moving files across ZFS dataset boundaries in minimal time. Not sure how it will work out, but if you are talking Terror Bytes (and yes, that spelling was both on purpose and a joke, but also serious), it should be worth investigating.
We probably need a Forum Resource guide on how to tell whether the ZFS block cloning feature is both available and enabled on the TrueNAS side. Then another section in the same guide on how to figure out whether it is usable and working from the client side.
For example, TrueNAS SCALE Electric Eel 24.10.1 has ZFS Block Cloning available:
root@truenas[~]# strings /etc/version
24.10.1
root@truenas[~]# zpool get all tank | egrep "NAME|clon"
NAME  PROPERTY               VALUE     SOURCE
tank  bcloneused             0         -
tank  bclonesaved            0         -
tank  bcloneratio            1.00x     -
tank  feature@block_cloning  disabled  local
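If the feature shows up as disabled, as in my output above, it can be turned on per pool. A sketch using the pool name tank from above; note that enabling a pool feature is a one-way operation, so the pool can no longer be imported by OpenZFS versions that lack it:

```shell
# Enable the block_cloning feature on the pool (irreversible)
zpool set feature@block_cloning=enabled tank

# Verify: it should now read "enabled", and it flips to "active"
# once the first cloned blocks actually exist
zpool get feature@block_cloning tank
```

Afterwards, the bcloneused / bclonesaved counters from the earlier command are the easiest way to see whether clones are really happening.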
If I understand it correctly, the MS-Windows 11 File Explorer supports server-side block cloning during copies over shares, so if needed, a copy could be done first. Then, if it looks good, erase the source.
A simple test should be straightforward, and using what I supplied above, you can verify whether it worked.
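One such simple test, runnable from the TrueNAS shell or any Linux client with GNU coreutils: cp can explicitly request a clone of the data blocks with --reflink. The =auto mode falls back to a normal copy on filesystems without clone support, so it is safe to try anywhere; the file names below are just scratch examples:

```shell
# Scratch file to demonstrate a reflink-aware copy
srcfile=$(mktemp)
echo "block cloning test" > "$srcfile"

# Ask for a clone of the data blocks; on ZFS with block_cloning active
# this copies no data, on other filesystems it degrades to a plain copy
cp --reflink=auto "$srcfile" "$srcfile.clone"

# Either way, the copy must be byte-identical to the source
cmp "$srcfile" "$srcfile.clone" && echo "identical"
```

If the copy really was cloned, the zpool bcloneused counter from my earlier command should increase after the copy.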
If it has to pass over the network, it makes sense. If a mv or cp from the console is possible, it doesn’t. I’ve read the newbie’s guide from @Arwen, which says to use the CLI or the WebUI. I’m being careful.