Server-Side copy extremely slow using SMB

Hey all,
I'm currently in the process of setting up my TrueNAS system and testing every nook and cranny while waiting on drives, but I can't seem to figure out this weird issue:

If a folder is copied between datasets server-side via SMB, the transfer is ridiculously slow (6-11 times slower!) compared to an initial SMB copy from a client, or to a Midnight Commander or CLI "cp" copy, whether cached or uncached. If the initial SMB copy from the client has been cached, copying it again via SMB will also write extremely slowly.

I’ve got a bunch of disk usage graphs from transferring the exact same 7GB folder below:

SMB copy from client to server (first time, so the folder is uncached), 1min 40

SMB copy from client to server (second time, so the folder is cached. Confirmed by no network activity), 15mins

Midnight Commander server-side copy uncached, 2mins 40

Midnight Commander server-side copy cached, 1min 20


The data below is from copying a different collection of files (a 15.2GB folder) than the one above:

SMB copy from client to server (first time, so the folder is uncached), 3mins 5

SMB copy from client to server (second time, so the folder is cached, confirmed with no network activity), ran for 8 mins before I cancelled it

SMB server-side copy uncached, 26mins

SMB server-side copy cached, 24 mins

NAS Specs:

  • ElectricEel-24.10.1
  • i7-7700 w/ iGPU
  • 32GB DDR4 2400MHz non-ECC
  • 160GB SSD boot pool - Motherboard SATA
  • Mirrored 1TB pool (system dataset pool)
    • 12TB Exos X16 (CMR) - Motherboard SATA
    • 1TB WD Red WD10EFRX (CMR) - Motherboard SATA
  • Corsair RM650x

The SMB client is a Windows 10 gaming PC with an AMD CPU.

Misc:

  • It seems to slow down mainly when hitting tiny files (5-800KB), dropping to 5-15MB/s
  • Pool datasets are key-encrypted, so block cloning isn't available
  • 128KB record size (tried 1MB but same story)
  • SMB set up with NFSv4 perms in restricted mode
  • Dedup off
  • LZ4 compression (dataset properties confirmed as shown below)
  • All drives' SMART results are great, and scrubs come back fine too
  • 1 Gigabit motherboard NIC; the SMB client runs at full speed during iperf3, during first-time (uncached) SMB writes, and during any reads
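
For reference, those dataset properties can be confirmed from the TrueNAS shell with something like the following (the pool/dataset name is a placeholder, substitute your own):

```
# show the dataset properties mentioned above; substitute your own pool/dataset
zfs get recordsize,compression,encryption,dedup tank/share
```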

I'm currently running badblocks on another 12TB Exos (hence the crazy pool layout at the moment), so I can replace the 1TB soon and see if it's the odd mismatch causing issues, but seeing as it works fine through MC and initial SMB writes I'm not hopeful.

Any help would be greatly appreciated and I’m more than happy to try out any suggestions! This is my first foray into Linux so please be verbose :wink:

Run iperf3 tests before spending any more time on Midnight Commander.

Also, dump MC and use Windows native copy for testing


iperf3 maxes out my gigabit connection, as do regular initial SMB writes. It's only when the write is cached (and the network is out of the equation) that there's an issue.
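
In case anyone wants to reproduce the network baseline, the test was along these lines (the NAS address below is just a placeholder):

```
# on the NAS (server side)
iperf3 -s

# on the Windows client, pointing at the NAS's IP
iperf3 -c 192.168.1.100
```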

I only tried MC because I noticed the issue using the Windows native copy.

Adding in some more tests for the 7GB folder:
The CP command is unaffected by this (bug?), so it's purely an SMB issue.

CP command server-side copy uncached, 2mins 17

CP command server-side copy cached, 1min

Other attempts:

  • NFSv4 ACL mode set to passthrough made no change
  • POSIX perms made no change
  • Other SMB clients made no change (Win 10 laptop, MacOS laptop, Android phone)

You’re probably incurring the cost of opening each file, issuing the fsctl, and closing the file over the network. That also incurs a cost and by definition can’t be server-side. If you have a directory with 80K files, the client minimally needs to

  1. issue request to open file
  2. possibly issue additional requests to get some file metadata
  3. issue fsctl
  4. verify things were right
  5. close file

This all has a cost in terms of time, network packets, and server load (without looking into the particulars of your configuration). TL;DR it will never be as fast as a local copy.
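
To put rough numbers on that, here's a back-of-envelope sketch (the path is a placeholder, and the per-file request count just follows the steps listed above):

```
# count the files in the folder being copied (path is a placeholder)
find /mnt/tank/share/testfolder -type f | wc -l

# e.g. 4,000 files x ~5 requests each (open, metadata, fsctl, verify, close)
# is on the order of 20,000 client<->server round trips for the copy,
# each one paying network latency on top of the server-side work
```

Which is why the slowdown concentrates on folders full of small files rather than on single large ones.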


From another thread, it seems that SMB isn’t that great when it comes to very high counts of metadata operations.

Not even block-cloning can overcome this limitation.

Ah okay that makes some more sense, thanks for the explanations and thread link. Missed that one, my bad.

This was transferring 4,000 files so definitely a few metadata operations.
I tried a similarly sized single movie file and it worked fine, so yep, it's the individual file operations.
Also tried 1,000 image files of ~5MB each and it was slow over SMB, so I'll stick to using the File Browser container when moving lots of files.

To better my understanding of SMB, how come the many per-file operations aren't an issue when initially copying from a client to the NAS? Is it because on the initial copy the NAS only has to write metadata, whereas a server-side copy has to both read and write it, roughly doubling the SMB operations?

Is there anything that could be done to reduce the SMB overhead, or is this just in its nature?

Thanks all for your input!

Pretty much this. Said earlier in a more technical way:


I don’t believe much can be done.