SMB and macOS Finder speed, again

So there have been a ton of posts on this. I want to share what I have done and ask if anyone else has any better ideas.

It is clear to me the issue is on the macOS side. For the most part everything works, but opening any Finder window with a large number of files takes on the order of minutes. This is from a Mac on a wired 2.5Gbps connection to a TrueNAS MiniXL+ that is connected at 2x10Gbps and has a SLOG, L2ARC, etc., with 8x14TB in a mirrored/striped setup. How do I know it is macOS? There is an application on iOS called FileBrowser. On my iPad, over Wi-Fi, mounting the same share over SMB and opening the same folder with their application takes a few seconds.

The issue is macOS and the Finder. Here are a few things I have done that speed it up, but it still really sucks.

#/etc/nsmb.conf
[default]
mc_on=yes
mc_prefer_wired=yes
signing_required=no
streams=yes
dir_cache_off=yes
dir_cache_async_cnt=0
dir_cache_max_cnt=0
dir_cache_max=0
dir_cache_min=0
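
Note that nsmb.conf changes only apply to new mounts, so unmount and remount the share after editing. To see what was actually negotiated for each mounted share (SMB version, signing, and so on; exact attribute names vary by macOS version):

# Show negotiated attributes for all mounted SMB shares
smbutil statshares -a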

Do not write .DS_Store files to network shares:
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true
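
To confirm the preference took effect (it should print 1 once set):

defaults read com.apple.desktopservices DSDontWriteNetworkStores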

In Finder
View > View Options > Show Icon Preview (off), Show Item Info (off)

Nothing else I have tried has made much of a difference.

Anyone have anything that is missing from this?

If you don’t need user authentication, you could go for NFS.
It is super fast when you are coming from SMB.
But NFSv3 (Connect to Server in Finder) and NFSv4 (when you manually create the mount with the right settings) will give you other issues, like a wrong ‘modified at’ date.

But roaming around and deleting files is surprisingly fast with NFS if you are coming from SMB.
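
For reference, a manual NFSv4 mount on macOS looks roughly like this; the server name, export path, and mount point below are placeholders:

# Mount an NFSv4 export with a reserved source port (often required by the server)
mkdir -p ~/nas
sudo mount -t nfs -o vers=4,resvport nas.local:/mnt/data/photos ~/nas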

Something is off… it shouldn’t take minutes. Have you configured the L2ARC to be metadata only and persistent? If so, has it been given a chance to get hot? Even on my TrueNAS CORE 13.3 machine, traversing a 1.4TB iTunes dataset with 40K+ files only took minutes.

Are you sure you have a fast, low-latency connection to the NAS, with good drivers, etc.? Have you run an iperf test to at least see what the achievable throughput is with random data?

I suspect something silly like a badly negotiated speed (10 Mbit/s?), a bad cable connection, or a driver issue.
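
For reference, a quick iperf3 check between the Mac and the NAS could look like this (the hostname is a placeholder; iperf3 is available on TrueNAS and via Homebrew on the Mac):

# On the NAS
iperf3 -s

# On the Mac: 30 seconds, 4 parallel streams, then the same in reverse
iperf3 -c nas.local -P 4 -t 30
iperf3 -c nas.local -P 4 -t 30 -R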

Yes, I used to use NFS, but I wanted to remove the complexity of running two protocols on the same shares, since the Infuse app I use on the Apple TV only supports SMB and I also have a Windows system.

When you say your iTunes dataset, are you talking about opening the application or actually browsing the files via the Finder? Your 40K file set is not all in the same directory. On this system, my TV/movie media is 8TB with 8000+ items, but in Finder it is fine because they are not all in the same folder. What I am talking about is opening a folder in Finder that has 3000+ photos in it.

The Mac and the NAS are on the same switch. All ports are configured at the correct speeds. iperf3 hits the theoretical maximum in testing with no issue. The switch ports show no errors or drops that matter.

My ARC and L2ARC hit rates are both ~98%. Metadata hits are at 100%. If I open the folder, let it fill with no other activity on the TrueNAS system, close the Finder window, and open it again, it still takes minutes. This issue is very much about how poorly the macOS Finder works with SMB. A Windows 11 machine on the same switch has no issues. Heck, I just paid $14 for the macOS version of FileBrowser Pro, and using it to mount and browse is quite fast. If I attempt to use it to browse the folder via the mount created by the macOS system (thus going through the same path the Finder uses), it is just as slow as the Finder.

Which version of TrueNAS are you running?
I assume SCALE and not CORE, which release?
How much RAM do you have?
Which version of macOS?

The macOS Finder will walk the share trying to build an index of metadata. Under CORE and AFP you needed to move the CNID database to a very fast zpool (mirrored/striped enterprise SSDs with over 51,000 IOPS) to get good Finder performance when opening a share.

I have not seen Finder opening delays since going to SCALE and SMB shares (TrueNAS Dragonfish-24.04.2.5, I have not upgraded to 24.10 yet; macOS 15.3.1). My server has dual E5-2623 v4 @ 2.60GHz and 192GiB of ECC RAM, dual (bonded) 10 Gig SFP+ connections to my switch, and either Wi-Fi or 1G ethernet to the Mac (which is an M1 with 32GiB of RAM). The zpool is 6 x MIRROR | 2 wide | 1.82 TiB with a hot spare, no SLOG or L2ARC.

That’s why I called it a dataset. It’s the entire thing: all the files, the DB, the XML, etc. Altogether 40k+ files across thousands of folders. rsync checks every file individually for changes and copies anything it marks as changed.

There is likely your problem. If the Finder settings include creating a thumbnail of each photo, I wonder if the Finder is spending minutes fetching the necessary data for each image to render it. Try traversing the folder from the command line and see if it’s faster.

If the CLI approach is super quick and the Finder approach is slow, it’s likely the rendering that’s the problem. Not sure if the rendered thumbnails get saved or not, but I doubt it. I’d change one of two things: fewer files in a given folder, or turn off thumbnails.
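
As a rough way to separate listing speed from rendering, time a plain unsorted listing of the same folder over the mount (the path here is a placeholder):

# Count entries without sorting; compare this to how long the Finder window takes
time ls -f /Volumes/data/photos | wc -l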

macOS 15.3.1, M3, 18GB RAM, 2.5Gb networking via Thunderbolt

TrueNAS ElectricEel-24.10.2
64 GB of ECC RAM (Max on the TrueNAS MiniXL+)

root@barrel[/etc/netdata]# zpool status
  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:32 with 0 errors on Fri Feb 28 03:45:33 2025
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors

  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 09:20:27 with 0 errors on Sun Mar  2 09:20:29 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        data                                      ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            d5980856-e598-42d2-8c6f-808a2950195f  ONLINE       0     0     0
            430beae9-48c8-4de7-b772-467a94392070  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            4d7a9afa-e237-4608-b434-7d54495732a1  ONLINE       0     0     0
            2d9565e0-e9fd-44eb-b0cf-dd4e6909237d  ONLINE       0     0     0
          mirror-2                                ONLINE       0     0     0
            2b22f7a3-15fd-4706-b86a-3721acc60a8d  ONLINE       0     0     0
            0234d759-c141-48ba-93ae-3aedb213c1d4  ONLINE       0     0     0
          mirror-3                                ONLINE       0     0     0
            7a0b8ef4-e5b5-4898-8f15-af8861f50b86  ONLINE       0     0     0
            709a66a0-3933-4665-9883-ed5968f85a6a  ONLINE       0     0     0
        logs
          5d3e6e42-a7cb-4c30-9d26-50b0983e9487    ONLINE       0     0     0
        cache
          8c67c3fc-2dd0-41cf-a0a2-28242784f043    ONLINE       0     0     0

The 8 disks are the 14TB spinning rust from iXsystems. The log and cache devices are their spec SSDs. All the data for the folder should fit in RAM and L2ARC with no issue.

Is the L2ARC set to be persistent (yes by default in SCALE, not so in CORE) and also set to metadata=only per the tuneables?
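
For anyone checking the same thing, these are the knobs I mean (the metadata-only part is the secondarycache dataset property; the dataset name below is a placeholder):

# Cache only metadata from this dataset in L2ARC
zfs set secondarycache=metadata data/photos
zfs get secondarycache data/photos

# Persistent L2ARC rebuild after reboot (1 = enabled, the SCALE default)
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled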

On a freshly rebooted system running SCALE.

L2ARC is NOT set to metadata only. There are 25170 files in the directory.

On Mac:

bash-3.2$ for ((i = 0 ; i < 5 ; i++)); do /usr/bin/time -a -o time.txt /bin/ls; done

bash-3.2$ cat time.txt
       35.95 real         0.16 user         2.55 sys
       33.75 real         0.16 user         2.36 sys
       37.84 real         0.16 user         2.79 sys
       37.79 real         0.16 user         2.88 sys
       37.20 real         0.15 user         2.85 sys

NAS:

arc_summary | grep -i hit
        Total hits:                                    99.6 %      30.2M
        Total I/O hits:                               < 0.1 %       3.0k
        Demand data hits:                              86.8 %     590.4k
        Demand data I/O hits:                         < 0.1 %        180
        Demand metadata hits:                          99.9 %      29.6M
        Demand metadata I/O hits:                     < 0.1 %        765
        Prefetch data hits:                            11.6 %       3.3k
        Prefetch data I/O hits:                         0.0 %          0
        Prefetch metadata hits:                        75.2 %      14.5k
        Prefetch metadata I/O hits:                    10.4 %       2.0k
        Demand hits after predictive:                  56.2 %      26.8k
        Demand I/O hits after predictive:               5.3 %       2.5k
        Demand hits after prescient:                   75.9 %         82
        Demand I/O hits after prescient:               24.1 %         26
ARC states hits of all accesses:
        Stream hits:                                   22.0 %      83.4k
        Hits ahead of stream:                           9.3 %      35.3k
        Hits behind stream:                            13.6 %      51.5k
        Hit ratio:                                     84.3 %      99.2k
        zfs_read_history_hits                                          0
root@barrel[/etc/netdata]# arc_summary | grep -i l2
        L2 header size:                                 2.9 %  269.8 MiB
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   1.3 MiB
        L2 eligible MFU evictions:                     11.3 %  144.5 KiB
        L2 eligible MRU evictions:                     88.7 %    1.1 MiB
        L2 ineligible evictions:                                 8.0 KiB
L2ARC status:                                                    HEALTHY
L2ARC size (adaptive):                                         470.8 GiB
L2ARC breakdown:                                                  117.8k
L2ARC I/O:
L2ARC evicts:
        l2arc_exclude_special                                          0
        l2arc_feed_again                                               1
        l2arc_feed_min_ms                                            200
        l2arc_feed_secs                                                1
        l2arc_headroom                                                 8
        l2arc_headroom_boost                                         200
        l2arc_meta_percent                                            33
        l2arc_mfuonly                                                  0
        l2arc_noprefetch                                               0
        l2arc_norw                                                     0
        l2arc_rebuild_blocks_min_l2size                       1073741824
        l2arc_rebuild_enabled                                          1
        l2arc_trim_ahead                                               0
        l2arc_write_boost                                       40000000
        l2arc_write_max                                         10000000

I set the system to L2ARC metadata only, rebooted, and ran the same test.

Mac:

       40.00 real         0.16 user         2.99 sys
       39.66 real         0.16 user         3.05 sys
       39.91 real         0.16 user         3.31 sys
       38.50 real         0.15 user         3.02 sys
       40.74 real         0.16 user         3.29 sys

NAS:

root@barrel[~]# arc_summary | grep -i hit
        Total hits:                                    99.5 %      21.6M
        Total I/O hits:                               < 0.1 %       2.4k
        Demand data hits:                              78.4 %     243.8k
        Demand data I/O hits:                         < 0.1 %         89
        Demand metadata hits:                          99.9 %      21.3M
        Demand metadata I/O hits:                     < 0.1 %        692
        Prefetch data hits:                             4.1 %        716
        Prefetch data I/O hits:                         0.0 %          0
        Prefetch metadata hits:                        76.2 %      12.4k
        Prefetch metadata I/O hits:                    10.1 %       1.6k
        Demand hits after predictive:                  50.5 %      17.0k
        Demand I/O hits after predictive:               6.6 %       2.2k
        Demand hits after prescient:                   75.3 %         64
        Demand I/O hits after prescient:               24.7 %         21
ARC states hits of all accesses:
        Stream hits:                                   34.0 %      51.0k
        Hits ahead of stream:                          12.8 %      19.2k
        Hits behind stream:                            15.1 %      22.6k
        Hit ratio:                                     92.5 %      77.9k
        zfs_read_history_hits                                          0
root@barrel[~]# arc_summary | grep -i l2
        L2 header size:                                16.0 %  276.5 MiB
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   1.3 MiB
        L2 eligible MFU evictions:                     11.2 %  144.0 KiB
        L2 eligible MRU evictions:                     88.8 %    1.1 MiB
        L2 ineligible evictions:                                 8.0 KiB
L2ARC status:                                                    HEALTHY
L2ARC size (adaptive):                                         473.9 GiB
L2ARC breakdown:                                                   84.3k
L2ARC I/O:
L2ARC evicts:
        l2arc_exclude_special                                          0
        l2arc_feed_again                                               1
        l2arc_feed_min_ms                                            200
        l2arc_feed_secs                                                1
        l2arc_headroom                                                 8
        l2arc_headroom_boost                                         200
        l2arc_meta_percent                                            33
        l2arc_mfuonly                                                  0
        l2arc_noprefetch                                               0
        l2arc_norw                                                     0
        l2arc_rebuild_blocks_min_l2size                       1073741824
        l2arc_rebuild_enabled                                          1
        l2arc_trim_ahead                                               0
        l2arc_write_boost                                       40000000
        l2arc_write_max                                         10000000

Trying to open a Finder window on this exact share after running these tests takes 5 minutes.

Opening the same folder via the Mac App Store FileBrowser Pro application, using its own SMB mount, works with no issues and even builds thumbnails quickly.

Then I suggest you use that approach to open those gargantuan folders.

I do not experience the issues you’re wrestling with, but I am also not trying to open photo folders with thousands of files. It may have to do with the sVDEV serving up the metadata quickly; it may also have to do with my burdening the NAS with far fewer images per folder. Maybe someone else has experienced this particular issue?

I agree this is the issue with the macOS SMB code + Finder. It is very interesting that the FileBrowser code does not share this issue. The question is whether there is something in the default macOS SMB setup that is just wrong and can be changed to match the behavior of the FileBrowser application, or whether this is really just the Finder code sucking. Based on my testing, I am leaning toward the Finder sucking.

It is how macOS uses SMB.

Looking at the Wireshark capture.

The Finder and FileBrowser SMB clients both issue:

Info Level: SMB2_FIND_ID_BOTH_DIRECTORY_INFO (37)

For every file returned, Finder then immediately issues:

Close Request (0x06)

FileBrowser, on the other hand, issues the same FIND but never issues the Close until it gets this:

NT Status: STATUS_NO_MORE_FILES (0x80000006)
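
To pull just those exchanges out of the capture, a display filter along these lines works (the capture file name is a placeholder):

# Show only SMB2 directory enumeration and close traffic (14 = QUERY_DIRECTORY/Find, 6 = Close)
tshark -r smb_trace.pcapng -Y "smb2.cmd == 14 || smb2.cmd == 6"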

I have not looked at SMB code since the OS/2 days. I will have to do some digging, but a quick and dirty pcap seems to point to Finder doing too much work. Hmm, is streams working? Need to have a look.
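
On the “is streams working” question, smbutil can report the attributes of an existing mount; the mount path below is a placeholder, and I believe recent macOS versions list stream support among the attributes:

# Dump the negotiated attributes for one mounted share
smbutil statshares -m /Volumes/data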

And looking at the logs on TrueNAS.

macOS: SMB3_11 with signing
FileBrowser: SMB3_02 without signing

I have configured signing to be OFF on macOS, but that seems not to matter. Let’s figure out how to fix that, and I bet it will just work. On TrueNAS I have it set to “Negotiate – only encrypt transport if explicitly requested by the SMB client”, so the Mac is forcing it.
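
One more thing worth trying is a per-server section in /etc/nsmb.conf, which I believe takes precedence over [default]; the section name is the NAS hostname, and I am not certain the Finder honors it:

# /etc/nsmb.conf
[barrel]
signing_required=no

On the TrueNAS side, the corresponding Samba parameter is server signing (auto/mandatory/disabled), which could be set as an auxiliary parameter to test from that end.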

I suspect this is the real issue. The macOS Finder, in addition to building thumbnails (which I think it does asynchronously), pulls metadata for each file/directory. The larger the number of items in one directory, the slower Finder will be to display the listing. A directory with about 500 items in it took about 6 seconds to open in Finder. A sub-directory with over 3,000 items opens very fast, so I suspect that Finder asynchronously walks the directory tree capturing metadata once the mount is opened in Finder.

So grouping the data in sub-directories would improve the initial listing, and then the background (asynchronous) metadata fetch would make opening sub-directories faster.
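
A rough sketch of that kind of grouping, run on the NAS itself (GNU date), moving a flat photo folder into YYYY-MM sub-directories by modification time; the path is a placeholder and this should be tested on a copy first:

# Bucket loose image files into per-month sub-directories
cd /mnt/data/photos || exit 1
for f in *.jpg *.jpeg *.png *.heic; do
  [ -e "$f" ] || continue
  d=$(date -r "$f" +%Y-%m)   # GNU date: last modification time of the file
  mkdir -p "$d"
  mv -n "$f" "$d/"
done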

I also suspect you would have a similar issue, though not of the same magnitude, if the data were all on a local drive.

As I just posted, I do not think this is an SMB issue but a limitation of macOS Finder. Have you tried opening a directory of over 10,000 items on a local drive? I expect it will be faster than SMB, but still not very fast.

Many years ago I was part of a team that tested filesystems for scaling. We had a use case for ONE VOLUME with millions of files/directories. The only filesystem we found in 2008 that scaled linearly beyond 1 million items was ZFS.

A while back I did a comparison between different clients on directory listing times alone. It was very interesting and relevant here:

https://www.truenas.com/docs/references/performance/smbfiletimes/

Have you tried this: