Import disk no longer available? Alternative over USB 3.1?

Hello,

I hoped to import a disk into a dataset, as described in the documentation for Core 13.0 and SCALE 22.12.

Has this feature disappeared in 24.10, please? Is there a replacement that allows an ext4 disk to be mounted and rsynced over USB 3.1?

20 TB over 1 Gb/s Ethernet is painfully slow on my laptop.

Thanks.

Yep, it has been removed from SCALE. If I remember well, it is still present in Core, but it is certainly not maintained.

It will be a long trip, but there are not many options. Maybe you can use software like FreeFileSync to stop/resume/manage the transfer and split it up.

You can always use the Linux shell to mount the disk and do an rsync.

Are you sure please?

dmesg | grep -i usb

shows the hard disk with an ext4 partition:

[    1.952998] hub 2-0:1.0: USB hub found
[    2.288823] usb 2-1: new SuperSpeed USB device number 2 using xhci_hcd
[    2.309996] usb 2-1: New USB device found, idVendor=0bc2, idProduct=2038, bcdDevice=18.02
[    2.310011] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3ut
[    2.310020] usb 2-1: Product: Expansion HDD
[    2.310025] usb 2-1: Manufacturer: Seagate
[    2.310030] usb 2-1: SerialNumber: 00000000NT17DEBP

but

sudo mount -l -t ext4

returns nothing.

What does lsblk say? Your disk should be listed there.
Then: mount /dev/sdX /mountpoint
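
A minimal sketch of the whole sequence; the device name sdb1 and the mountpoint /mnt/usbdisk are assumptions, so substitute whatever lsblk actually shows:

lsblk                                       # identify the USB disk, e.g. sdb with partition sdb1
sudo mkdir -p /mnt/usbdisk                  # create a mountpoint (assumed path)
sudo mount -t ext4 /dev/sdb1 /mnt/usbdisk   # mount the ext4 partition there
mount -l -t ext4                            # the disk should now show up in this list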

Late reply because I was on “kitchen and party duty” :). The ext4 filesystem is mounted. Thank you very much for your help!

Does everyone remember why the Import Disk “feature” was removed?

If I recall correctly, there is an ACLs / permissions issue with the copied data. Thus, a normal SMB / Samba share may not be able to access the data. Or worse, everyone might be able to access / delete the data. Basically, copying the data inside TrueNAS does not give it the ACLs / permissions that it would normally get from a copy made over a share.


Thanks for your remark. If I’m not mistaken, that problem happens when you try to rsync the permissions. I hope this is not applicable in my case, as I only used the -rt flags. After reading your remark, and just to make sure, I created tar files from the restored folder and I’m now copying those tar files. I hope unpacking the tar files will make sure all files pass “deblockified” through the system.
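
For reference, the difference comes down to which flags are handed to rsync; a sketch with assumed paths:

# -rt copies recursively (-r) and preserves timestamps only (-t);
# ownership, permission bits and ACLs are left to the destination's defaults
rsync -rt /mnt/usbdisk/restored/ /mnt/tank/dataset/
# -a (archive mode) would also replicate owner, group and permission bits,
# which is where the share-access problems described above come from
# rsync -a /mnt/usbdisk/restored/ /mnt/tank/dataset/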

It’s a pity TrueNAS doesn’t provide “official” guidance on how to do this as safely as possible. The question pops up quite often here and on Reddit.

They do: do it over the network. That’s the official answer. The problem is that people don’t like that answer.


Just for others with the same issue: in the end, I ended up doing rsync over the network. I had a spare old NUC, installed Ubuntu Server on it, increased the MTU size, and let it run for 8 days.
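
In case it helps someone, the rough shape of that setup looked like this (a sketch, not my exact commands; the interface name, addresses and paths are assumptions):

sudo ip link set enp3s0 mtu 9000            # jumbo frames; the NAS side must be set to match
rsync -rt --partial --progress /srv/restored/ root@192.168.1.10:/mnt/tank/dataset/
                                            # --partial lets an interrupted transfer resume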

I had nothing but issues with the files and folders that were rsynced over USB.
Following the official guidance is still the smartest thing to do, in the end. I should have known better.


A follow-up question: after doing rsync over the network, all the issues disappeared. I’m now faced with the task of restructuring the data and moving it to other datasets. When I looked up how to do this, people recommended tar + cp, mv, and mc from a console.

Is it safe to do this if I use shares and not /mnt/…, please?

Thank you very much.

Using the block cloning feature of ZFS should allow moving files across ZFS dataset boundaries in minimal time. I’m not sure how it will work out, but if you are talking Terror Bytes (and yes, that spelling was both on purpose and a joke, but also serious), it should be worth investigating.

We probably need a Forum Resource guide on how to tell whether the ZFS Block Cloning feature is both available and enabled on the TrueNAS side, then another section in the same guide on figuring out whether it is usable and working from the client side.

For example, TrueNAS SCALE Electric Eel 24.10.1 has ZFS Block Cloning available, though not yet enabled on this pool:

root@truenas[~]# strings /etc/version 
24.10.1
root@truenas[~]# zpool get all tank | egrep "NAME|clon"
NAME  PROPERTY                 VALUE               SOURCE
tank  bcloneused               0                   -
tank  bclonesaved              0                   -
tank  bcloneratio              1.00x               -
tank  feature@block_cloning    disabled            local
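
Since the feature shows as disabled above, enabling it and running a quick reflink copy would be one way to test. A sketch only: the pool name tank comes from the output above, the dataset paths are made up, enabling a pool feature is a one-way operation, and if I recall correctly some OpenZFS 2.2 releases additionally gate this behind the zfs_bclone_enabled module parameter:

zpool set feature@block_cloning=enabled tank   # one-way: the feature cannot be disabled again
cp --reflink=auto /mnt/tank/ds1/bigfile /mnt/tank/ds2/bigfile
zpool get bcloneused,bclonesaved tank          # non-zero values mean blocks were actually cloned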

If I understand it correctly, the MS-Windows 11 File Explorer supports server-side block cloning during copies over shares, so if needed, a copy could be done first. Then, if all is good, erase the source.

A simple test should be straightforward and, using what I supplied above, you can verify whether it worked.

I don’t understand why one would tar the files before copying them if it’s just a copy between datasets.

Thanks. I have to restructure the dataset, so I’m afraid that’s not an option unless I can assign subdirectories to datasets.

If it has to pass over the network, it makes sense. If a mv or cp from the console is possible, it doesn’t. I’ve read the newbies guide from @Arwen, where he states to use the CLI or the Web UI. I’m careful. :slight_smile: