SMB vs. NFS with macOS clients... FIGHT!

Hi all.

I would like to share some benchmarks with the community, to get your opinions and perhaps to encourage improvements.

The objective of the benchmark is to compare SMB and NFS performance on Dragonfish-24.04.2, with macOS clients (Sonoma 14.5, Intel).

The TrueNAS system (non-virtualized) consists of:

  • Supermicro X10SLM+-F
  • 32 GB RAM (ECC)
  • Intel Xeon Processor E3-1275L v3
  • Intel Ethernet Adapter X540-T1 (jumbo frames enabled on the server and on the Mac's 10GbE card)
  • LSI SAS 9210-8i

For storage:

  • 7x WUH721818AL, striped; empty, no data.
  • 2 datasets: nfs (Unix permissions) and smb_acl (ACL permissions)
  • Configuration (all defaults except the following):
# zfs get -r -s local all pool
NAME           PROPERTY                 VALUE                    SOURCE
pool           compression              lz4                      local
pool           atime                    off                      local
pool           aclmode                  discard                  local
pool           aclinherit               discard                  local
pool           primarycache             none                     local
pool           acltype                  posix                    local
pool/nfs       compression              off                      local
pool/nfs       snapdir                  hidden                   local
pool/nfs       aclmode                  discard                  local
pool/nfs       aclinherit               discard                  local
pool/nfs       xattr                    sa                       local
pool/nfs       copies                   1                        local
pool/nfs       acltype                  posix                    local
pool/nfs       org.freenas:description                           local
pool/nfs       org.truenas:managedby    192.168.1.10             local
pool/smb_acl   compression              off                      local
pool/smb_acl   aclmode                  restricted               local
pool/smb_acl   aclinherit               passthrough              local
pool/smb_acl   xattr                    sa                       local
pool/smb_acl   copies                   1                        local
pool/smb_acl   acltype                  nfsv4                    local
pool/smb_acl   org.freenas:description                           local
pool/smb_acl   org.truenas:managedby    192.168.1.10             local

Before the benchmarks, I disabled ARC caching for the pool and flushed anything already cached by exporting and re-importing it:

zfs set primarycache=none pool
zpool export pool
zpool import pool
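Child datasets inherit primarycache, but it's worth verifying that the cache really is bypassed. A quick sanity check (arcstat ships with OpenZFS on SCALE; the expectation is that ARC read activity stays flat during a run):

# zfs get -r primarycache pool
# arcstat 1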

The benchmark software is AmorphousDiskMark set to 1GiB and 9 passes.
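For a scriptable cross-check of AmorphousDiskMark's numbers, something like the following fio run against the mounted share should be roughly comparable (fio is installable via Homebrew on the Mac; the parameters here are my own illustrative choices, not AmorphousDiskMark's exact internals):

% fio --name=rnd_4k --directory=/Users/vicmarto/nfs --rw=randread \
      --bs=4k --size=1g --ioengine=posixaio --iodepth=8 \
      --runtime=30 --time_based

Swap --rw between read, write, randread and randwrite (and --bs between 1m and 4k) to approximate the SEQ and RND rows.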

The results:

NFS: [AmorphousDiskMark results screenshot]

SMB: [AmorphousDiskMark results screenshot]

Basically, SMB is slightly faster than NFS in the sequential (SEQ) tests, but much slower in the random (RND) tests.

For completeness, the server configuration is all default except for the following:

  • SMB: [√] Enable Apple SMB2/3 Protocol Extensions. In the share, Purpose is set to “Default share parameters”. (See the testparm check after this list.)

  • NFS: [√] Allow non-root mount.
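As I understand it, that Apple extensions checkbox enables Samba's vfs_fruit module on the server. To confirm what actually landed in the effective Samba configuration, something like this (run in the TrueNAS shell) should work:

# testparm -s 2>/dev/null | grep -i fruit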

Finally, these are the session and mount details on the macOS side:

% smbutil statshares -a

==================================================================================================
SHARE                         ATTRIBUTE TYPE                VALUE
==================================================================================================
smb_acl                       
                              SERVER_NAME                   truenas._smb._tcp.local
                              USER_ID                       501
                              SMB_NEGOTIATE                 SMBV_NEG_SMB1_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB2_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB3_ENABLED
                              SMB_VERSION                   SMB_3.1.1
                              SMB_ENCRYPT_ALGORITHMS        AES_128_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_128_GCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_GCM_ENABLED
                              SMB_CURR_ENCRYPT_ALGORITHM    OFF
                              SMB_SIGN_ALGORITHMS           AES_128_CMAC_ENABLED
                              SMB_SIGN_ALGORITHMS           AES_128_GMAC_ENABLED
                              SMB_CURR_SIGN_ALGORITHM       AES_128_GMAC
                              SMB_SHARE_TYPE                DISK
                              SIGNING_SUPPORTED             TRUE
                              EXTENDED_SECURITY_SUPPORTED   TRUE
                              UNIX_SUPPORT                  TRUE
                              LARGE_FILE_SUPPORTED          TRUE
                              OS_X_SERVER                   TRUE
                              FILE_IDS_SUPPORTED            TRUE
                              DFS_SUPPORTED                 TRUE
                              FILE_LEASING_SUPPORTED        TRUE
                              MULTI_CREDIT_SUPPORTED        TRUE
                              SESSION_RECONNECT_TIME        0:0
                              SESSION_RECONNECT_COUNT       0
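One detail that stands out above: the session is signing with AES_128_GMAC (SMB_CURR_SIGN_ALGORITHM), which costs CPU on both ends. To rule signing out as a factor in the RND numbers, macOS reads /etc/nsmb.conf; this is a test-only tweak, not a recommendation:

# /etc/nsmb.conf on the Mac (create it if it doesn't exist)
[default]
signing_required=no

Remount the share afterwards and re-check with smbutil statshares -a.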
% nfsstat -m
/Users/vicmarto/nfs from 10.0.1.150:/mnt/pool/nfs
  -- Original mount options:
     General mount flags: 0x0
     NFS parameters: vers=3,rsize=131072,wsize=131072,readahead=128,dsize=131072
     File system locations:
       /mnt/pool/nfs @ 10.0.1.150 (10.0.1.150)
  -- Current mount parameters:
     General mount flags: 0x4000000 multilabel
     NFS parameters: vers=3,tcp,port=2049,nomntudp,hard,nointr,noresvport,negnamecache,callumnt,locks,quota,rsize=131072,wsize=131072,readahead=128,dsize=131072,rdirplus,nodumbtimer,timeo=10,maxgroups=16,acregmin=5,acregmax=60,acdirmin=5,acdirmax=60,acrootdirmin=5,acrootdirmax=60,nomutejukebox,nonfc,sec=sys
     File system locations:
       /mnt/pool/nfs @ 10.0.1.150 (10.0.1.150)
     Status flags: 0x0
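If you want to experiment on the NFS side, the same parameters shown above can be overridden at mount time. For example (values are just a starting point, not tuned recommendations):

% sudo umount /Users/vicmarto/nfs
% sudo mount -t nfs -o vers=3,tcp,rsize=131072,wsize=131072,readahead=128 10.0.1.150:/mnt/pool/nfs /Users/vicmarto/nfs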

ZFS is quite difficult to benchmark, so I've probably made some procedural error somewhere. If so, I'm happy to learn! :sweat_smile:

2 Likes

Results look good… SMB is better at large files, NFS is better at small transactions.

How did you set up sync write behavior?

1 Like

Thanks for the reply.

Yes, I knew about those differences… but I didn’t think they were that big. It seems like Samba has a lot of room for improvement, or maybe it’s a limitation of the SMB protocol?

About sync writes:

# zfs get -r all pool | grep sync
pool          sync  standard  default
pool/nfs      sync  standard  default
pool/smb_acl  sync  standard  default

Should I try another sync configuration?

It depends on your use case:
sync=always is slower and more reliable;
sync=disabled is faster and less reliable.
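If you want to see how much of the write gap is sync behavior, a throwaway test could look like this (reverting afterwards; unsafe for real data):

# zfs set sync=disabled pool/smb_acl
# ...run the benchmark...
# zfs inherit sync pool/smb_acl

zfs inherit drops the local setting and returns the dataset to its default (standard).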

I personally don’t recommend disabling sync for production workloads. SMB clients can request a flush, and it’s generally not a good idea to lie to them about what has actually reached stable storage.

3 Likes

Nice! Have you tried NFSv4 on macOS?

Yes, I am of the same opinion.
However, I did some quick tests and the numbers were not very different.

Is there no way to improve SMB performance in RND? The differences with NFS are significant.

Yes, the numbers were very similar to NFSv3.

Oh, there are generally ways to improve performance, but you have to diagnose what the bottlenecks are when you’re performing the test and also determine whether the numbers you’re seeing are actually relevant to workloads you put on the server. There aren’t any secret “go fast” buttons we hide from users :slight_smile:
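In that spirit, a first diagnostic pass might be to watch the server while the benchmark runs, for instance:

# zpool iostat -v pool 1
# top -H

zpool iostat shows whether the disks are actually busy; top -H (per-thread view) shows whether a single smbd thread is pinned at 100%, since one SMB connection is handled by one smbd and can become CPU-bound on fast networks.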

Feature request?


Sure. We can name it “turbo”, and when it’s not set we can actively make the SMB server slower.

4 Likes

Or maybe “multichannel” :wink:
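Joking aside, that wink points at a real Samba option; on TrueNAS it would go in as an auxiliary parameter, though whether the macOS client actually negotiates multichannel is a separate question:

server multi channel support = yes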

Also see the “Turbo” badges Porsche puts on its electric vehicle lineup; the Taycan is but one example. Marketing can make a lot of things truly weird. Not to be confused with an electric turbocharger, which can make an ICE even more responsive / powerful / efficient / complicated.

Let’s think big: better call it “multiturbo”! :rofl:

2 Likes

With Precision Boost Overdrive

1 Like

“Multichannel”: if underutilising one connection isn’t enough,
let your data creep along even more slowly on multiple channels.