Issues with mounting shares from TrueNAS Core to a Proxmox server

OK @winnielinnie I made a new test dataset and used cp -a to copy one of my large movie files from my Movies dataset into the test dataset. According to what you’ve told me, this should have used block-cloning? It took about 10 minutes to copy the 60GB file to the new dataset, so I think I can say with confidence that I don’t have it working yet.

I’ve been trying to google the answer, but the answers I get are not working for me. The link I cited says to use cp --reflink=auto but that’s not a valid flag for me.

root@Eru[/mnt/Pool_01]# cp --reflink=auto
cp: illegal option -- -
usage: cp [-R [-H | -L | -P]] [-f | -i | -n] [-alpsvx] source_file target_file
       cp [-R [-H | -L | -P]] [-f | -i | -n] [-alpsvx] source_file ... target_directory

So I have to admit that I’m lost here. Maybe I missed a step?

Is any ZFS encryption used in your pool?

Double-check that Core 13.3-U1 is not disabling block-cloning system-wide:

sysctl vfs.zfs.bclone_enabled

A 1 means it is enabled. A 0 means it is disabled.
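For reference, here is a hedged sketch of checking (and, if needed, flipping) that tunable from a root shell on Core. The small parsing helper is my own, not part of the system; only the sysctl name comes from this thread:

```shell
# Helper (illustrative only): strip the "name: " prefix from sysctl output,
# e.g. "vfs.zfs.bclone_enabled: 1" -> "1"
bclone_value() {
    awk -F': ' '{print $2}'
}

# On the Core box itself you would run:
#   sysctl vfs.zfs.bclone_enabled | bclone_value
# and, assuming the sysctl is writable at runtime, enable it with:
#   sysctl vfs.zfs.bclone_enabled=1

# Demonstration against the output format shown in this thread:
echo "vfs.zfs.bclone_enabled: 1" | bclone_value
```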


Those flags only apply to SCALE (Linux), which uses GNU coreutils’ cp. They do not exist in FreeBSD’s cp on Core.

OK, so what are the commands for Core? Is it the zfs command that initiates the block-cloning?

What about the above two questions regarding the sysctl value and encryption?


Nope. Just a simple:

cp -av /path/to/source/folder /path/to/new/folder

(I included -v so you can see where the copy currently stands in the list of files being “copied”.)

vfs.zfs.bclone_enabled: 1

So this is what I was trying to say. I ran this exact same command and it does copy, but it does so as if it were a normal copy, i.e. it’s very slow. Which is why I asked if maybe I was missing something.

This is what I’m trying to figure out too, since there’s something on your system that seems to bypass block-cloning.

The other possibility is ZFS encryption. (Still not sure if you’re using it.)


What if you test a large file within the same dataset?

Oh sorry, if encryption isn’t part of the default settings then I’m not using it.

Almost instantaneous.

Then if you now confirm with:

zpool list -o name,bcloneused,bclonesaved Pool_01
root@Eru[~]# zpool list -o name,bcloneused,bclonesaved Pool_01
NAME     BCLONE_USED  BCLONE_SAVED
Pool_01            0             0

Almost instantaneous, for a 60-GiB file? On spinning HDDs? But apparently block-cloning was not used?

I know this was probably addressed further up in the thread. But can block-cloning be used for dataset-to-dataset file transfers?

Yes, but with some exceptions, notably encryption.

To get a better picture, if you don’t mind, can you run the command below? (You can hide/censor private information.)

zfs list -r -t filesystem -o name,encryption,recordsize,compression Pool_01
root@Eru[~]# zfs list -r -t filesystem -o name,encryption,recordsize,compression Pool_01
NAME                                                      ENCRYPTION   RECSIZE  COMPRESS
Pool_01                                                   off             128K  lz4
Pool_01/.system                                           off             128K  lz4
Pool_01/.system/configs-81963fc7279b4cf49c43e5a8cbe36cdb  off             128K  lz4
Pool_01/.system/cores                                     off             128K  lz4
Pool_01/.system/rrd-81963fc7279b4cf49c43e5a8cbe36cdb      off             128K  lz4
Pool_01/.system/samba4                                    off             128K  lz4
Pool_01/.system/services                                  off             128K  lz4
Pool_01/.system/syslog-81963fc7279b4cf49c43e5a8cbe36cdb   off             128K  lz4
Pool_01/.system/webui                                     off             128K  lz4
Pool_01/CopyTest                                          off             128K  lz4
Pool_01/CopyTest/CopyTest_Child                           off             128K  lz4
Pool_01/Media                                             off             128K  lz4
Pool_01/Media/Documentaries                               off             128K  lz4
Pool_01/Media/Movies                                      off             128K  lz4
Pool_01/Media/Television                                  off             128K  lz4
Pool_01/Storage                                           off             128K  lz4
Pool_01/Transmission                                      off             128K  lz4
Pool_01/Virtual-Machines                                  off             128K  lz4

Everything looks in order.

Can you try this test:

  1. Change into the test directory with cd /mnt/Pool_01/CopyTest
  2. Create a large 1-GiB file under /mnt/Pool_01/CopyTest/ with dd like this: dd if=/dev/urandom bs=1M count=1024 of=bigfile.dat
  3. Wait a minute, and then explicitly issue the sync command
  4. Copy it into the same folder within the same dataset with cp -va bigfile.dat bigcopy.dat
  5. Check again with zpool list -o name,bcloneused,bclonesaved Pool_01
root@Eru[~]# cd /mnt/Pool_01/CopyTest
root@Eru[/mnt/Pool_01/CopyTest]# dd if=/dev/urandom bs=1M count=1024 of=bigfile.dat
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 1.908813 secs (562518043 bytes/sec)
root@Eru[/mnt/Pool_01/CopyTest]# sync
root@Eru[/mnt/Pool_01/CopyTest]# cp -va bigfile.dat bigcopy.dat
bigfile.dat -> bigcopy.dat
root@Eru[/mnt/Pool_01/CopyTest]# zpool list -o name,bcloneused,bclonesaved Pool_01
NAME     BCLONE_USED  BCLONE_SAVED
Pool_01        1022M         1022M
root@Eru[/mnt/Pool_01/CopyTest]#

BTW, what does sync do?

It forces pending writes to be flushed to disk.
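A portable illustration of what that means (my own example, not specific to ZFS: newly written data can sit in the OS cache until something forces it to stable storage; sync(8) requests that flush):

```shell
# Write a small file through the page cache, then ask the kernel to
# commit buffered writes to stable storage with sync(8).
tmp=$(mktemp)
printf 'hello\n' > "$tmp"   # data may still be in RAM only
sync                        # now it is committed to disk
wc -c < "$tmp"              # prints: 6
rm -f "$tmp"
```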

That looks good.

Now do steps 4 and 5 again, but this time change the location of where to copy the file in step 4:
cp -va bigfile.dat CopyTest_Child/bigcopy.dat

No need to wait, sync, or create a new file this time, since bigfile.dat and all of its blocks are definitely written to disk.

Again, almost instantaneous.

root@Eru[/mnt/Pool_01/CopyTest]# cp -va bigfile.dat CopyTest_Child/bigcopy.dat
bigfile.dat -> CopyTest_Child/bigcopy.dat
root@Eru[/mnt/Pool_01/CopyTest]# zpool list -o name,bcloneused,bclonesaved Pool_01
NAME     BCLONE_USED  BCLONE_SAVED
Pool_01        1022M         1022M

That didn’t work though, since the BCLONE values did not change. (Unless you deleted the previous copy?)
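For the restarted test, one way to make the result unambiguous is to record the pool’s clone counter before and after the copy. The helper below is my own sketch; the pool name comes from this thread, and the `-Hp` flags (scripted, tab-separated output with raw byte values) are standard `zpool list` behavior:

```shell
# Extract the raw byte count from `zpool list -Hp -o name,bcloneused`
# output, which is tab-separated, e.g. "Pool_01<TAB>1071644672".
bclone_used() {
    awk '{print $2}'
}

# On the Core box you would wrap the copy like this:
#   before=$(zpool list -Hp -o name,bcloneused Pool_01 | bclone_used)
#   cp -a bigfile.dat bigcopy2.dat
#   sync
#   after=$(zpool list -Hp -o name,bcloneused Pool_01 | bclone_used)
#   echo "cloned bytes: $((after - before))"   # > 0 means cloning happened

# Demonstration with sample before/after values in that output format:
before=$(printf 'Pool_01\t0\n' | bclone_used)
after=$(printf 'Pool_01\t1071644672\n' | bclone_used)
echo "cloned bytes: $((after - before))"
```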

Let me start the process over. I did a test cp on my big media file while waiting for your response. I might have mucked up the test.