When TrueNAS is running in PVE, SMB performs poorly on 10G NICs

When TrueNAS is running in PVE, SMB performs poorly over the 10G network card.

The following output is from running iperf3 against the TrueNAS 10G network card:

iperf3.exe -c 192.168.100.3
Connecting to host 192.168.100.3, port 5201
[  5] local 192.168.100.198 port 5756 connected to 192.168.100.3 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.04 GBytes  8.89 Gbits/sec
[  5]   1.00-2.00   sec  1.07 GBytes  9.20 Gbits/sec
[  5]   2.00-3.00   sec  1.05 GBytes  9.05 Gbits/sec
[  5]   3.00-4.00   sec  1.03 GBytes  8.86 Gbits/sec
[  5]   4.00-5.00   sec  1.07 GBytes  9.19 Gbits/sec
[  5]   5.00-6.00   sec  1.07 GBytes  9.22 Gbits/sec
[  5]   6.00-7.00   sec  1.08 GBytes  9.29 Gbits/sec
[  5]   7.00-8.00   sec  1.07 GBytes  9.21 Gbits/sec
[  5]   8.00-9.00   sec  1005 MBytes  8.44 Gbits/sec
[  5]   9.00-10.00  sec  1.03 GBytes  8.84 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  10.5 GBytes  9.02 Gbits/sec                  sender
[  5]   0.00-10.01  sec  10.5 GBytes  9.01 Gbits/sec                  receiver

iperf Done.

But through TrueNAS's SMB share, it can only reach about 3.5 Gbps.
It used to run at the full 10G level, then it suddenly slowed down one day.

Host information:
CPU: Intel(R) Xeon(R) CPU E5-2678 v3

PVE information:
256GiB memory
10G network card: ens6, MTU 9014, Linux Bridge
Linux 6.8.12-4-pve (2024-11-06T15:04Z)
Manager version: pve-manager/8.3.3/f157a38b211595d6
Package versions:


proxmox-ve: 8.3.0 (running kernel: 6.8.12-4-pve)
pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.12-3-pve-signed: 6.8.12-3
proxmox-kernel-6.8.12-2-pve-signed: 6.8.12-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
openvswitch-switch: 3.1.0-2+deb12u1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1

TrueNAS virtual machine information:
CPU: host, 1 socket, 24 cores
126 GiB memory
Disks are passed through to the VM as PCI devices in PVE

HDD disk array:
Data vdev 1 x RAIDZ1 | 8 disks | 14.55 TiB (newly purchased Seagate hard drives)
Metadata vdev 1 x MIRROR | 2 disks | 238.47 GiB (SSD)
log vdev 2 x DISK | 1 disk | 238.47 GiB (SSD)
Cache vdev 1 x 232.89 GiB (SSD)
Array utilization: 36.2%
ZFS health: online
Failed S.M.A.R.T. Tests: 0

TrueNAS-SCALE-ElectricEel - TrueNAS SCALE ElectricEel 24.10 [release]

TrueNAS-SCALE-24.10.2
Standard PC (Q35 + ICH9, 2009)
10G NIC information: VirtIO, MTU 9014, Multiqueue 24

Windows information:
Edition: Windows 11 Pro
Version: 24H2
Installed on: 2024/9/2
OS build: 26100.2605
Experience: Windows Feature Experience Pack 1000.26100.36.0
10G NIC: external 10G NIC via Thunderbolt 4
Model: QNA-T310G1S (QNAP)
Identified as 10.0 Gbps in Windows

I had exactly the same problem; have a read here: Very slow SMB speed vs competitors

Short answer: make sure the CPUs assigned to the VM are passed through as Host and not QEMU or any other kind.

Windows likely uses SMB signing/encryption that can be sped up massively with the proper CPU instructions.
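
For reference, the CPU type can be set from the PVE web UI or from the command line. A minimal sketch, assuming the TrueNAS VM has VMID 100 (the actual VMID isn't given in this thread); it takes effect after the VM is fully powered off and started again:

qm set 100 --cpu host

With the host type, the guest sees the real CPU flags of the E5-2678 v3, including AES-NI, which SMB signing/encryption can take advantage of.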

I mentioned that in the post:
TrueNAS virtual machine information:
CPU: host, 1 socket, 24 cores

OK, then you have to check that SMB Multichannel is enabled and that Samba correctly recognizes the RSS capability of your NIC. Run the PowerShell commands on the Windows client to make sure Multichannel is established and RSS is enabled (you can see them all in my thread).
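
For reference, these checks use built-in PowerShell cmdlets; run them on the Windows client while a transfer to the share is active:

Get-SmbMultichannelConnection        # should list the connection(s) to the TrueNAS server
Get-SmbClientNetworkInterface        # "RSS Capable" should be True for the 10G NIC
Get-NetAdapterRss                    # shows whether RSS is actually enabled on the adapter

If Get-SmbMultichannelConnection comes back empty while a copy is running, Multichannel is not being established.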

Also make sure you use jumbo frames: 9000 on the VM, 9014 on Windows.

9014 is the wrong MTU size on the Linux side, IMO. Windows counts the 14-byte Ethernet header in its jumbo frame setting, while the Linux MTU excludes it, so 9014 on Windows corresponds to 9000 on Linux.
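
Once everything is set, jumbo frames can be verified end to end with a do-not-fragment ping from the Windows client (8972 = 9000 payload minus 20 bytes of IP header and 8 bytes of ICMP header):

ping -f -l 8972 192.168.100.3

If any hop's MTU is smaller, Windows reports "Packet needs to be fragmented but DF set" instead of a normal reply.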

Since you pass a virtual NIC, Samba likely won't be able to autodetect the speed and RSS capability, so you need to force it via a CLI command with smb_options in TrueNAS.

How do I do this?

The format looks like this (cli enters the TrueNAS CLI shell):

cli
service smb update smb_options="server min protocol = SMB3_11\nserver smb encrypt = required\nserver signing = required\nclient min protocol = SMB3_11\nclient smb encrypt = required\nclient signing = required"

What you want to add is this (ignore all the commands above; they are just an example of the format):

interfaces = "10.10.10.10;capability=RSS,speed=10000000000"

Don't forget to restart smbd after you do it.
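
Putting it together for this setup, a minimal sketch, assuming 192.168.100.3 is the address of the TrueNAS 10G interface used in the iperf3 tests above (the inner quotes are escaped because the interfaces value contains spaces and a semicolon):

cli
service smb update smb_options="interfaces = \"192.168.100.3;capability=RSS,speed=10000000000\""

Then restart the SMB service and re-run Get-SmbMultichannelConnection on the Windows client to confirm the RSS capability is now reported.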

I enabled jumbo frames (9014) on the Windows network card, and set the MTU of the VM network card and the Linux bridge to 9000.
But for some reason, the SMB speed has dropped to around 1–2 Gbps.
Here is a new iperf3 test:

iperf3.exe -c 192.168.100.3
Connecting to host 192.168.100.3, port 5201
[  5] local 192.168.100.198 port 14859 connected to 192.168.100.3 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   1.00-2.00   sec  1.15 GBytes  9.84 Gbits/sec
[  5]   2.00-3.00   sec  1.15 GBytes  9.91 Gbits/sec
[  5]   3.00-4.00   sec  1.13 GBytes  9.71 Gbits/sec
[  5]   4.00-5.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   5.00-6.00   sec  1.14 GBytes  9.82 Gbits/sec
[  5]   6.00-7.00   sec  1.14 GBytes  9.79 Gbits/sec
[  5]   7.00-8.00   sec  1.14 GBytes  9.81 Gbits/sec
[  5]   8.00-9.00   sec  1.14 GBytes  9.80 Gbits/sec
[  5]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  11.4 GBytes  9.83 Gbits/sec                  sender
[  5]   0.00-10.01  sec  11.4 GBytes  9.82 Gbits/sec                  receiver

iperf Done.

After I restarted Windows, the SMB speed was around 800 MB/s. Is this normal? Has the array reached its limit?

But it will drop to 200–300 MB/s.

Did you set MTU 9000 in three places on Proxmox? Once on the actual interface, once on the bridge, and once on the virtual interface that you give to the VM? (See the sketch below.)
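
For reference, a minimal sketch of those three places, assuming the bridge is named vmbr1 and the TrueNAS VM is VMID 100 (both are assumptions; only ens6 appears in this thread):

# /etc/network/interfaces on the PVE host
auto ens6
iface ens6 inet manual
        mtu 9000

auto vmbr1
iface vmbr1 inet manual
        bridge-ports ens6
        bridge-stp off
        bridge-fd 0
        mtu 9000

# VM NIC line in /etc/pve/qemu-server/100.conf (the MAC is a placeholder);
# mtu=1 instead of mtu=9000 would tell virtio to inherit the bridge MTU
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1,queues=24,mtu=9000

Apply the host side with ifreload -a (ifupdown2 is in the package list above).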

OK, I have now set it to 9000 on ens6 and tested the SMB speed again. It is stable at 800+ MB/s. Is the current situation normal?

I get 1.15 GB/s off NVMe, or after repeat reads from spinning disks with an L2ARC cache. Spinning disks alone won't saturate 10 Gbps.

HDD disk array:
Data vdev 1 x RAIDZ1 | 8 disks | 14.55 TiB (newly purchased Seagate hard drives)
Metadata vdev 1 x MIRROR | 2 disks | 238.47 GiB (SSD)
log vdev 2 x DISK | 1 disk | 238.47 GiB (SSD)
Cache vdev 1 x 232.89 GiB (SSD)

I read that. Try to copy one big file (50 GB+) multiple times; hopefully it will end up in the cache. How fast is the cache SSD?
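
One way to take the network and SMB out of the equation is to measure the pool's read speed locally from a TrueNAS shell. A minimal sketch using fio (available in a TrueNAS SCALE shell), assuming the dataset is mounted at /mnt/tank (the pool name isn't given in this thread):

# Sequential 1M reads from a 50 GB test file; fio lays the file out on the first run
fio --name=seqread --filename=/mnt/tank/testfile --rw=read --bs=1M --size=50G --runtime=60 --time_based --group_reporting

The first pass is limited by the RAIDZ1 disks; repeated passes that hit the ARC/L2ARC show what the cache tier can deliver.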

After a period of stable 10 Gbps, the speed suddenly dropped to around 600 MB/s when copying files today. I retested using iperf3 and the results were as follows.

The Windows network card still shows a 10 Gbps link, though:

[  5] local 100.70.249.108 port 4193 connected to 192.168.100.3 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  64.6 MBytes   542 Mbits/sec
[  5]   1.00-2.00   sec  68.0 MBytes   570 Mbits/sec
[  5]   2.00-3.00   sec  67.0 MBytes   562 Mbits/sec
[  5]   3.00-4.00   sec  68.4 MBytes   574 Mbits/sec
[  5]   4.00-5.00   sec  69.8 MBytes   585 Mbits/sec
[  5]   5.00-6.00   sec  65.6 MBytes   551 Mbits/sec
[  5]   6.00-7.00   sec  64.8 MBytes   543 Mbits/sec
[  5]   7.00-8.00   sec  69.0 MBytes   579 Mbits/sec
[  5]   8.00-9.00   sec  66.6 MBytes   559 Mbits/sec
[  5]   9.00-10.00  sec  60.0 MBytes   503 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec   664 MBytes   557 Mbits/sec                  sender
[  5]   0.00-10.01  sec   660 MBytes   553 Mbits/sec                  receiver

I restarted the network adapter on Windows and it didn't help.

After I adjusted the SMB encryption setting to Negotiate on TrueNAS, the speed suddenly returned to around 8 Gbps (FastCopy shows 800–1000 MB/s).
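
For reference, whether the active session is actually signed or encrypted can be checked from the Windows 11 client; Signed and Encrypted are properties reported by the built-in Get-SmbConnection cmdlet on recent builds:

Get-SmbConnection | Select-Object ServerName,ShareName,Dialect,Signed,Encrypted

If Encrypted flips from True to False after the change, that would be consistent with the per-packet AES work having been the bottleneck on the SMB path.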

But the iperf3 test result was still very low:

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec   661 MBytes   554 Mbits/sec                  sender
[  5]   0.00-10.01  sec   659 MBytes   552 Mbits/sec                  receiver
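
Since SMB Multichannel opens several TCP connections while a default iperf3 run uses only one, a multi-stream run is worth comparing (-P is a standard iperf3 flag). Note also that the slow tests show a local address of 100.70.249.108 rather than 192.168.100.198 as before, so the single iperf3 stream may be taking a different route:

iperf3.exe -c 192.168.100.3 -P 8

If eight streams together approach 8 Gbps while a single stream stays near 550 Mbits/sec, it is the per-connection path rather than the total link bandwidth that degraded.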