SMB vs. NFS with macOS clients... FIGHT!

Hi all.

I would like to share some benchmarks with the community, to get your opinions and perhaps to encourage improvements.

The objective of the benchmark is to compare SMB and NFS performance on Dragonfish-24.04.2, with macOS clients (Sonoma 14.5, Intel).

The TrueNAS system (non-virtualized) consists of:

  • Supermicro X10SLM+-F
  • 32 GB RAM (ECC)
  • Intel Xeon Processor E3-1275L v3
  • Intel Ethernet Adapter X540-T1 (jumbo frames enabled on both the server and the Mac's 10GbE card)
  • LSI SAS 9210-8i

For storage:

  • 7x WUH721818AL, striped: empty, no data.
  • 2 datasets: nfs (Unix permissions) and smb_acl (ACL permissions)
  • With the following configuration (all default except the following):
# zfs get -r -s local all pool
NAME           PROPERTY                 VALUE                    SOURCE
pool           compression              lz4                      local
pool           atime                    off                      local
pool           aclmode                  discard                  local
pool           aclinherit               discard                  local
pool           primarycache             none                     local
pool           acltype                  posix                    local
pool/nfs       compression              off                      local
pool/nfs       snapdir                  hidden                   local
pool/nfs       aclmode                  discard                  local
pool/nfs       aclinherit               discard                  local
pool/nfs       xattr                    sa                       local
pool/nfs       copies                   1                        local
pool/nfs       acltype                  posix                    local
pool/nfs       org.freenas:description                           local
pool/nfs       org.truenas:managedby    192.168.1.10             local
pool/smb_acl   compression              off                      local
pool/smb_acl   aclmode                  restricted               local
pool/smb_acl   aclinherit               passthrough              local
pool/smb_acl   xattr                    sa                       local
pool/smb_acl   copies                   1                        local
pool/smb_acl   acltype                  nfsv4                    local
pool/smb_acl   org.freenas:description                           local
pool/smb_acl   org.truenas:managedby    192.168.1.10             local

Before the benchmarks, I disabled ARC using:

zfs set primarycache=none pool
zpool export pool
zpool import pool
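To confirm the cache really is bypassed after the re-import, one can check the property and watch the ARC counters stay flat during a run (a sketch; the arcstats path is the standard OpenZFS-on-Linux one used by TrueNAS SCALE):

```shell
# Verify the pool-level property took effect (child datasets inherit it)
zfs get -r primarycache pool

# ARC data hit/miss counters should barely move while the benchmark runs
grep -E '^(hits|misses) ' /proc/spl/kstat/zfs/arcstats
```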

The benchmark software is AmorphousDiskMark set to 1GiB and 9 passes.
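As a server-side cross-check of the client numbers, a roughly equivalent run can be done with fio directly on the dataset (a sketch; the block sizes and queue depths are my attempt to mirror the AmorphousDiskMark SEQ/RND tests, so adjust to taste):

```shell
# Sequential read, 1 MiB blocks, mirroring the SEQ1M test
fio --name=seq-read --directory=/mnt/pool/nfs --rw=read \
    --bs=1M --size=1G --ioengine=posixaio --iodepth=8

# Random 4 KiB read, mirroring the RND4K QD1 test
fio --name=rnd-read --directory=/mnt/pool/nfs --rw=randread \
    --bs=4k --size=1G --ioengine=posixaio --iodepth=1
```

Running the same job over the mounted share vs. locally on the server helps separate protocol overhead from disk performance.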

The results:

NFS: (results screenshot omitted)

SMB: (results screenshot omitted)

Basically, SMB is slightly better in SEQ than NFS, but much worse in RND.

For completeness, the server configuration is all default except for the following:

  • SMB: [√] Enable Apple SMB2/3 Protocol Extensions. And in the share, purpose configured as Default share parameters.

  • NFS: [√] Allow non-root mount.

And, these are the mount options on macOS:

% smbutil statshares -a

==================================================================================================
SHARE                         ATTRIBUTE TYPE                VALUE
==================================================================================================
smb_acl                       
                              SERVER_NAME                   truenas._smb._tcp.local
                              USER_ID                       501
                              SMB_NEGOTIATE                 SMBV_NEG_SMB1_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB2_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB3_ENABLED
                              SMB_VERSION                   SMB_3.1.1
                              SMB_ENCRYPT_ALGORITHMS        AES_128_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_128_GCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_GCM_ENABLED
                              SMB_CURR_ENCRYPT_ALGORITHM    OFF
                              SMB_SIGN_ALGORITHMS           AES_128_CMAC_ENABLED
                              SMB_SIGN_ALGORITHMS           AES_128_GMAC_ENABLED
                              SMB_CURR_SIGN_ALGORITHM       AES_128_GMAC
                              SMB_SHARE_TYPE                DISK
                              SIGNING_SUPPORTED             TRUE
                              EXTENDED_SECURITY_SUPPORTED   TRUE
                              UNIX_SUPPORT                  TRUE
                              LARGE_FILE_SUPPORTED          TRUE
                              OS_X_SERVER                   TRUE
                              FILE_IDS_SUPPORTED            TRUE
                              DFS_SUPPORTED                 TRUE
                              FILE_LEASING_SUPPORTED        TRUE
                              MULTI_CREDIT_SUPPORTED        TRUE
                              SESSION_RECONNECT_TIME        0:0
                              SESSION_RECONNECT_COUNT       0
% nfsstat -m
/Users/vicmarto/nfs from 10.0.1.150:/mnt/pool/nfs
  -- Original mount options:
     General mount flags: 0x0
     NFS parameters: vers=3,rsize=131072,wsize=131072,readahead=128,dsize=131072
     File system locations:
       /mnt/pool/nfs @ 10.0.1.150 (10.0.1.150)
  -- Current mount parameters:
     General mount flags: 0x4000000 multilabel
     NFS parameters: vers=3,tcp,port=2049,nomntudp,hard,nointr,noresvport,negnamecache,callumnt,locks,quota,rsize=131072,wsize=131072,readahead=128,dsize=131072,rdirplus,nodumbtimer,timeo=10,maxgroups=16,acregmin=5,acregmax=60,acdirmin=5,acdirmax=60,acrootdirmin=5,acrootdirmax=60,nomutejukebox,nonfc,sec=sys
     File system locations:
       /mnt/pool/nfs @ 10.0.1.150 (10.0.1.150)
     Status flags: 0x0
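For reference, the mount that produced the parameters above can be reproduced along these lines (a sketch; the rsize/wsize values are the ones negotiated in my output, and macOS also picks up defaults from /etc/nfs.conf):

```shell
# Mount the export with explicit NFSv3 and 128 KiB transfer sizes
sudo mount -t nfs -o vers=3,tcp,rsize=131072,wsize=131072 \
    10.0.1.150:/mnt/pool/nfs /Users/vicmarto/nfs
```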

ZFS is quite difficult to benchmark, so I've probably made some procedural error somewhere. If so, I am happy to learn! :sweat_smile:

2 Likes

Results look good… SMB is better at large files, NFS is better at small transactions.

How did you set up sync write behavior?

1 Like

Thanks for the reply.

Yes, I knew about those differences… but I didn’t think they were that big. It seems like Samba has a lot of room for improvement, or maybe it’s a limitation of the SMB protocol?

About sync writes:

# zfs get -r all pool | grep sync
pool          sync  standard  default
pool/nfs      sync  standard  default
pool/smb_acl  sync  standard  default

Should I try another sync configuration?

It depends on your use case:
sync=always is slower and more reliable;
sync=disabled is faster and less reliable.
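Note that the property is per dataset, so one could experiment on a single share (a sketch; sync=disabled means the server acknowledges writes before they reach stable storage):

```shell
# Try async writes on the NFS dataset only
zfs set sync=disabled pool/nfs
zfs get sync pool/nfs

# Revert to inheriting the pool's default behavior afterwards
zfs inherit sync pool/nfs
```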

I personally don’t recommend disabling sync for production workloads. SMB clients can request to flush data and it’s generally not a good idea to lie about what has been performed.

5 Likes

Nice - have you tried NFSv4 on macOS?

Yes, I am of the same opinion.
However, I did some quick tests and the numbers were not very different.

Is there no way to improve SMB performance in RND? The differences with NFS are significant.

Yes, the numbers were very similar to NFSv3.

Oh, there are generally ways to improve performance, but you have to diagnose what the bottlenecks are when you’re performing the test and also determine whether the numbers you’re seeing are actually relevant to workloads you put on the server. There aren’t any secret “go fast” buttons we hide from users :slight_smile:

Feature request?

2 Likes

Feature request?

Sure. We can name it “turbo”, and when it’s not set we can actively make the SMB server slower.

5 Likes

Or maybe “multichannel” :wink:

Also see the use of “Turbo” badges by Porsche on their electric vehicle lineup; the Taycan is but one example. Marketing can make a lot of things truly weird. Not to be confused with an electric turbocharger, which can make an ICE even more responsive / powerful / efficient / complicated.

Let’s think big: better call it “multiturbo”! :rofl:

2 Likes

With Precision Boost Optimizer

1 Like

“Multichannel” - if underutilising one connection isn’t enough.
Let your data creep even slower on multiple channels.

As of Sequoia, macOS still doesn’t support the NFSv4.1 or 4.2 extensions. It also doesn’t support encryption without Kerberos, NIS, or AD. As far as I can tell, TrueNAS doesn’t even have built-in support for providing LDAP, NIS, or domain services via Samba, so it’s pretty insecure.

With jumbo frames on both ends and some tweaks to the Mac’s /etc/nsmb.conf, SMB gives me better performance than SFTP or rsync. Aside from the frequent permission mismatches caused by SMB and filesystem attributes, SMB currently seems like the way to go on a LAN.
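For the curious, the knobs typically touched are along these lines (illustrative placeholders only, not my exact file; both options are documented in Apple's nsmb.conf man page):

```ini
# /etc/nsmb.conf -- macOS SMB client tuning (placeholder values)
[default]
# Skip per-packet signing on a trusted LAN (noticeable throughput win)
signing_required=no
# Bitmask of allowed dialects: 1=SMB1, 2=SMB2, 4=SMB3; negotiate SMB3 only
protocol_vers_map=4
```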

1 Like

Thanks @CodeGnome.

Would you mind sharing your /etc/nsmb.conf settings, please?

We support joining AD, LDAP, and FreeIPA. It’s in the UI and documentation. Why do you think this support doesn’t exist?

There’s a difference between joining a domain and being a domain controller. We don’t build with the domain controller role in TrueNAS because a NAS appliance shouldn’t ever be a domain controller.

The keyword here was “providing.” Why shouldn’t it be able to be a domain controller if people want it to be? It can provide all kinds of other applications and services, so why would you specifically exclude AD or LDAP from the list?

I mean, I suppose one could set up an LDAP server or another Samba server as a DC inside a container or VM on TrueNAS, but that seems like needless complexity. For those who don’t run Windows networks, if you need a way to secure NFS, that means Kerberos. Since TrueNAS already comes with Samba support, what’s the argument for making the capability inaccessible via the GUI or middleware client?

If it’s a security argument, security exists on a continuum. It’s not a binary thing, so there are always mitigations available once you’ve defined the threat model.

You claimed that TrueNAS is insecure because of this, which is false. I was responding to what I perceived as unnecessary FUD being spread on forums.

In fact, upstream documentation and general best practice say not to do this (use the same server as both a file server and a DC).

  1. We do not offer the ability to run the built-in Samba server as a DC
  2. This does not have a negative security impact on TrueNAS