Copy speeds within Windows on an SMB share vastly slower than the same copy in shell/SSH

Hi folks. I’m using TrueNAS Scale 24.10.1 with a RAIDZ2 SSD pool that I’m using for some testing against my ‘production’ spinning-rust Core install (ahead of an anticipated migration to Scale, and to flash).

I noticed that if I copy a large amount of data in Windows, on a Scale SMB share backed by said flash pool, it takes quite a while: call it an hour for a few hundred GB.

I was curious about this, so I ssh’d into the Scale box and performed the same copy via the command line, and it took a handful of seconds. The data was immediately browsable/usable within Windows after this shell ‘cp’ command. Is this expected behavior? Is there anything I can do to improve this on the Windows side? For what it’s worth, the File Explorer window during the copy in Windows shows speeds varying from 2 MB/s to 80 MB/s.
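For reference, the shell-side test was essentially this; the dataset path and folder names below are placeholders, not my exact layout:

cd /mnt/ssdpooltest/share
time cp -r bigfolder bigfolder-copy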

Those read like two completely different tests.

Am I interpreting this correctly? The first test was to copy data from the NAS, over the network, to your Windows PC? The second test was to do a local copy of the same data, within the NAS server itself?


This sounds like “block-cloning” was invoked.

Sorry, let me clarify. Both tests were within the same pool. One test was copying a folder from within the pool to the same directory within the pool in Windows (using Windows File Explorer), not copying to a local Windows drive or anything; the other was the same test, but in the shell on TrueNAS.

So, the data was never moved or copied off the share; all copies were local to the share, just via different methods.

Does this reveal that block-cloning was used for the 100-GiB test you did while logged into SSH?

zpool get feature@block_cloning <nameofpool>

zpool list -o name,bcloneused,bcloneratio,bclonesaved <nameofpool>

Below are the results of those commands. I confess I’m not sure what block-cloning means, so I’ll need more information to interpret them.

zpool get feature@block_cloning ssdpooltest
NAME         PROPERTY               VALUE                  SOURCE
ssdpooltest  feature@block_cloning  active                 local
zpool list -o name,bcloneused,bcloneratio,bclonesaved ssdpooltest
NAME         BCLONE_USED  BCLONE_RATIO  BCLONE_SAVED
ssdpooltest        51.1G         3.00x          102G

Block-cloning[1] was definitely used.

Judging by the output, I wager that you did this “massive copy” two times as a test while logged into SSH? (It depends on whether you deleted the copies at any point.)

Is it possible to delete all the “copies” of this mega folder, wait about a minute, and then post the output of this command again?

zpool list -o name,bcloneused,bcloneratio,bclonesaved <nameofpool>

If it shows everything back at zero (or almost zero), redo the test, but only within Windows. Use a smaller file, maybe something like 512 MiB (see the sketch below for a quick way to generate one). Then see if the values reported by the above command update.
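If you need a quick throwaway file of roughly that size, something like this from the SCALE shell would do (the path is just an example):

dd if=/dev/urandom of=/mnt/<nameofpool>/testfile.bin bs=1M count=512

(/dev/urandom keeps ZFS compression from shrinking the file, though block-cloning itself doesn’t care about the contents.)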

If not, then it means Windows File Explorer, over SMB, is somehow not using block-cloning. (It is supported, and it works for me.) The next place I would look is the SMB Service and SMB Share settings on your TrueNAS server.
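One quick way to inspect those from the shell is to dump the effective Samba configuration (testparm ships alongside Samba; -s skips the interactive prompt and prints all service definitions):

testparm -s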


  1. Block-cloning is like “reflinking” for ZFS, and it can span across an entire pool, with some exceptions. It was introduced with OpenZFS 2.2. When a (supported) tool issues a “copy” command, it simply writes a pointer to the existing blocks. Unlike a “hardlink”, the result is a fully independent (and modifiable) file, which does not affect the original. ↩︎
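As a minimal local demonstration of this, assuming a cp new enough to issue reflink requests (recent SCALE releases qualify; the file names here are arbitrary):

cp --reflink=always original.bin clone.bin
zpool list -o name,bcloneused,bcloneratio,bclonesaved <nameofpool>

--reflink=always fails outright if cloning isn’t possible, which makes it a handy probe; bcloneused should tick up if the clone succeeded.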

Yes, thanks, and sorry for the delay - family things.

Incidentally, a similar phenomenon occurs with deleting files: I initially did this within Windows, and it queued up a long process to remove the copied folders. I cancelled that after 30s or so (Windows hadn’t even finished counting the number of files to delete yet, and to be fair there are quite a lot in those folders), and removed them via the command line, which took about 10s.

The command you suggested, run immediately after deleting:

zpool list -o name,bcloneused,bcloneratio,bclonesaved ssdpooltest
NAME         BCLONE_USED  BCLONE_RATIO  BCLONE_SAVED
ssdpooltest            0         1.00x             0

After copying a ~200 MB file entirely within Windows:

zpool list -o name,bcloneused,bcloneratio,bclonesaved ssdpooltest
NAME         BCLONE_USED  BCLONE_RATIO  BCLONE_SAVED
ssdpooltest         204M         2.00x          204M

So, it would appear block-cloning is in fact working from Windows, but something else is causing these transfers to be quite slow for large numbers of files. I’m wondering if this is linked more to the large quantity of files I tested initially, rather than their gross size.

Out of curiosity, I tried another group of 63 files, about 1.46 GB total, and those copied very fast (comparatively) within Windows: judging by the Windows progress bar, around 500 MB/s.

Here’s the zpool list command after starting fresh and performing that action:

zpool list -o name,bcloneused,bcloneratio,bclonesaved ssdpooltest
NAME         BCLONE_USED  BCLONE_RATIO  BCLONE_SAVED
ssdpooltest        1.39G         2.00x         1.39G

It is.

This is likely because SMB isn’t nearly as fast at handling very numerous metadata operations, compared to doing them locally.
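If you want to isolate the file-count variable, a rough synthetic test would be to generate a pile of small files and copy the folder both ways; the counts and paths below are arbitrary:

cd /mnt/<nameofpool>
mkdir manyfiles
for i in $(seq 1 10000); do dd if=/dev/zero of=manyfiles/f$i bs=4k count=1 status=none; done
time cp -r manyfiles manyfiles-shellcopy

Then repeat the same folder copy from Windows File Explorer and compare wall-clock times; the gap should grow with the file count rather than the byte count.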

Good to mention the types and number of files next time, rather than the total size. :wink:

Got it. Thanks for your help. I’ll just chalk it up to performance capability of SMB vs. the local hardware.

For what it’s worth (and in case it helps folks in the future), the ‘large’ set of data I was testing with consisted of txt, jpg, pdf, various office formats, some CAD files, and probably a bunch of others as well. It was about 30,000 files.

Over SMB? That’ll do it for sure.

Well, I’m a bit unclear about what “over SMB” means here. The copies that are the subject of this thread were all performed with the source and destination on the shared TrueNAS dataset.

The only difference was that in one case I initiated the copies through Windows (slow), and in the other through the TrueNAS shell (fast).

The metadata operations still had to be communicated for all 30,000 files over SMB. (Even though no “data” had to be transferred over the network.)

30,000 operations are a lot to facilitate over an SMB connection.
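As a rough back-of-envelope (illustrative numbers only): if each server-side file copy costs on the order of four SMB round trips (open source, open destination, copy request, close) at ~1 ms of LAN latency each, then 30,000 files × 4 × 1 ms ≈ 2 minutes of pure protocol latency, before File Explorer’s own enumeration and per-file bookkeeping are added on top. A local cp pays none of that per-file network cost.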

Got it. That makes sense. Appreciate the explanations.