Hi everyone,
A bit of context first:
We are a small GIS-oriented company with 5 users. Our data consists of raster maps (tens of thousands) and vector data stored in geodatabases. Raster files are usually maps of varying size, from 5 MB to ~100 MB, each accompanied by 3-4 sidecar files under 10 KB; file types are usually .tif or .jpeg/.jpg. The geodatabases have grown to 1-2 GB each over time. We usually work with ArcGIS Pro and/or QGIS.
All of the above, roughly 8 TB of data, is stored on a TrueNAS (25.10) machine with the following specs:
Hardware:
- MOBO - Asrock WRX80 Creator R2.0
- CPU - AMD Ryzen Threadripper PRO 5955WX 16-Cores
- RAM - 128 GB (8 × 16 GB) Kingston KSM32RD8/16HDR DDR4-3200 ECC
- NIC - Intel 800 E810-2CQDA2 100 Gigabit Ethernet Card E8102CQDA2
- NIC2 - onboard dual 10 Gb (Aquantia)
- HighPoint Rocket 1508 (8× M.2, PCIe 4.0 x16 NVMe HBA) with 6 × WD Black SN850X in RAIDZ2, plus
- HighPoint Rocket 1528D with 3 × Solidigm D7-PS1030 1.6 TB used as sVDEV for the above pool
- 6 × 20 TB HDDs in RAIDZ2, used for backups
Networking:
- Mikrotik CRS510-8XS-2XQ-IN switch for the fiber network
- Ubiquiti gear: Dream Machine Pro → Switch Pro Max 16 → Switch Flex XG for the 10Gb network
- Generic Compatible 25GBASE-SR SFP28 850nm 100m DOM Duplex LC/UPC MMF Optical Transceiver Module - for the Mikrotik 25 Gb ports
- Mikrotik XQ+85MP01D Compatible 100GBASE-SR4 QSFP28 850nm 100m DOM MPO-12/UPC MMF Optical Transceiver Module
- Intel E25GSFP28SR Compatible SFP28 25GBASE-SR 850nm 100m DOM Duplex LC/UPC MMF Optical Transceiver Module
- Intel E25GSFP28SR Compatible 25GBASE-SR SFP28 850nm 100m DOM Duplex LC/UPC MMF Optical Transceiver Module
- Intel SPTMBP1PMCDF Compatible 100GBASE-SR4 QSFP28 850nm 100m DOM MPO-12/UPC MMF Optical Transceiver Module
- MTP® Jumper, MTP®-12 UPC (Female) to MTP®-12 UPC (Female), 12 Fibers, Multimode (OM4), Plenum (OFNP), 0.35dB Max, Type B, Magenta
- Fiber Patch Cable, 2 Fibers, LC UPC Duplex to LC UPC Duplex, Multimode (OM4), Riser (OFNR), 2.0mm, Tight-Buffered, Aqua
Three workstations connect to the storage server over the 25 Gb network using Intel E810-XXVDA2 adapters and the Mikrotik CRS510-8XS-2XQ-IN switch, and over the 10 Gb network through the Ubiquiti chain listed above (Dream Machine Pro → Switch Pro Max 16 → Switch Flex XG). The workstations are built on Asus ProArt X670/870 boards.
The NVMe pool was recently upgraded from an ASUS Hyper add-in card with 4 × 4 TB NVMe SSDs in RAIDZ1; the data was copied over from the old pool with rsync. On the new pool, the record size was set to 1M and the metadata small block size to 128K beforehand, so that small files and metadata would land on the sVDEV.
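For reference, those two properties map to the following ZFS commands (the dataset name here is only a guess based on the test paths later in this post; adjust to your layout). Note that both properties apply only to data written after they are set, which is why setting them before the rsync matters:

```shell
# Hypothetical dataset name -- substitute your actual pool/dataset
zfs set recordsize=1M NVMe_Workspace/TrueNAS_GIS
zfs set special_small_blocks=128K NVMe_Workspace/TrueNAS_GIS

# Verify the properties took effect
zfs get recordsize,special_small_blocks NVMe_Workspace/TrueNAS_GIS
```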
The problem I am facing, even after this upgrade, is very slow project load times (3-5 minutes), and once a project is loaded, everything remains sluggish: exporting a file to or from a geodatabase takes 5+ minutes, and clipping a raster map takes just as long. One workaround would be to split the projects into smaller ones, by data type or by region, but I would like to avoid that, since some projects really need all their data in one place.
From several tests recommended by ChatGPT, the bottleneck appears to be Samba struggling with the many small files that make up ArcGIS/QGIS projects. Large transfers between TrueNAS and the Windows machines look fine (a single ~2 GB file moves at full speed), but small-file workloads are abysmal.
A lot of things can go wrong in a setup like this, so: what should be adjusted or configured to improve project load times and handling on my TrueNAS? Is SMB even recommended for this kind of workload?
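For what it's worth, these are the Samba parameters people most often try for metadata-heavy small-file workloads (a sketch only, not a guaranteed fix; apply via the share's auxiliary parameters if your TrueNAS version exposes them, and benchmark one change at a time):

```
[gis_share]
    # Skip case-insensitive name lookups on every open.
    # Can break apps that rely on Windows case folding -- test with ArcGIS/QGIS first.
    case sensitive = true
    # Skip the DOS-attribute xattr round-trip on every file
    store dos attributes = no
    map archive = no
    map hidden = no
    map system = no
    # Allow async I/O even for small requests
    aio read size = 1
    aio write size = 1
```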
Some write/read tests:
root@truenas[~]# dd if=/dev/zero of=/mnt/NVMe_Workspace/TrueNAS_GIS/SharedProjects/testfile bs=1M count=2000
sync
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 0.237518 s, 8.8 GB/s
root@truenas[~]# dd if=/mnt/NVMe_Workspace/TrueNAS_GIS/SharedProjects/testfile of=/dev/null bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 0.0861312 s, 24.3 GB/s
root@truenas[~]#
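One caveat on the dd numbers: reading zeros through a compressed dataset mostly measures ARC and compression, not the per-file path the projects exercise. A throwaway local baseline for the small-file case (temp paths only, run on the TrueNAS shell) would look like this:

```shell
#!/usr/bin/env bash
# Create 10,000 tiny files, then time reading the whole tree back locally.
# Comparing this against the robocopy time over SMB shows how much of the
# slowness is per-file protocol round-trips rather than the pool itself.
dir=$(mktemp -d)
for i in $(seq 1 10000); do
    printf 'x' > "$dir/f$i"
done
time cat "$dir"/* > /dev/null
count=$(ls "$dir" | wc -l)
rm -rf "$dir"
echo "read $count files"
```

If the local run finishes in a couple of seconds while the same file count takes minutes over SMB, the pool is fine and the cost is per-file round-trip latency on the wire.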
iperf3 results (Windows client → TrueNAS, 4 parallel streams; receiver side shown):
C:\iperf3\iperf3.exe -c 10.15.5.100 -P 4
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.01 sec 5.43 GBytes 4.66 Gbits/sec receiver
[ 8] 0.00-10.01 sec 8.13 GBytes 6.98 Gbits/sec receiver
[ 10] 0.00-10.01 sec 6.01 GBytes 5.16 Gbits/sec receiver
[ 12] 0.00-10.01 sec 8.07 GBytes 6.93 Gbits/sec receiver
[SUM] 0.00-10.01 sec 27.6 GBytes 23.7 Gbits/sec receiver
Robocopy results copying 100K small files from the SMB share to local disk:
PS C:\Users\XXXXX> robocopy Z:\testsmall C:\localtest16 /E /NFL /NDL /MT:16
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Friday, November 7, 2025 4:57:57 PM
Source : Z:\testsmall\
Dest : C:\localtest16\
Files : *.*
Options : *.* /NDL /NFL /S /E /DCOPY:DA /COPY:DAT /MT:16 /R:1000000 /W:30
------------------------------------------------------------------------------
0%
------------------------------------------------------------------------------
Total Copied Skipped Mismatch FAILED Extras
Dirs : 1 1 0 0 0 0
Files : 100000 100000 0 0 0 0
Bytes : 781.2 k 781.2 k 0 0 0 0
Times : 0:09:00 0:01:03 0:00:00 0:00:09
Speed : 12685 Bytes/sec.
Speed : 0.725 MegaBytes/min.
Ended : Friday, November 7, 2025 4:59:10 PM
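Putting a number on "abysmal" (back-of-envelope using the wall clock, 4:57:57 → 4:59:10 ≈ 73 s, rather than robocopy's byte-based Speed line):

```shell
# Rough per-file cost of the robocopy run above:
# 100,000 files in ~73 s of wall time
awk -v files=100000 -v secs=73 \
    'BEGIN { printf "%.0f files/sec, %.2f ms/file\n", files/secs, secs*1000/files }'
# -> 1370 files/sec, 0.73 ms/file
```

At 16 threads, ~0.73 ms average per file works out to roughly 12 ms of round-trip work per file per thread, which points at latency-bound metadata traffic rather than disk throughput.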
This is how my data looks when sorted by which file types take the most storage, and also by file count.
As can be seen, there is quite a lot of fragmentation.


