High Read/Write System Freeze

Hello, I am running the following configuration:

TrueNAS-13.0-U6.2

I know I need to provide more system information; please let me know what is needed.

When I am doing a high-I/O operation, like downloading a bunch of files to the server or reading a bunch of files from it (for example, syncing to Amazon Photos through a share), the server hangs: the TrueNAS OS goes out to lunch and requires me to reboot the server before the Windows share starts responding again. Has anyone run into this?

System specs for starters, especially RAM. Read Joe’s Rules to Asking for Help.
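
If you have shell access, most of that can be pulled straight from the command line. A minimal sketch, assuming a root shell on TrueNAS CORE (dmidecode ships with CORE; camcontrol and geom are standard FreeBSD tools):

  # Firmware, board, and CPU details:
  dmidecode -t bios
  dmidecode -t system
  dmidecode -t baseboard
  dmidecode -t processor

  # Total installed RAM, in bytes:
  sysctl hw.physmem

  # Attached disks with model and serial numbers:
  camcontrol devlist
  geom disk list | grep -E 'Name|Mediasize|descr|ident'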

BIOS Information

*         Vendor: American Megatrends Inc.
*         Version: 4.6.4
*         Release Date: 03/05/2013
*         Runtime Size: 64 kB
*         ROM Size: 1 MB
*         BIOS Revision: 4.6

System Information

*         Manufacturer: BIOSTAR Group
*         Product Name: H61MGC
*         Version: 6.0

Base Board Information

*         Manufacturer: BIOSTAR Group
*         Product Name: H61MGC
*         Version: 6.0

RAM: 16 GB

Central Processor

*         Family: Core i7
*         Manufacturer: Intel
*         ID: A7 06 02 00 FF FB EB BF
*         Signature: Type 0, Family 6, Model 42, Stepping 7
*         Version: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
*         Voltage: 0.0 V
*         External Clock: 100 MHz
*         Max Speed: 3800 MHz
*         Current Speed: 1600 MHz

Storage

  • SanDisk SSD U110 32GB
  • ST6000NE000-2KR101 5.46 TiB ZADABWMK
  • ST6000VN0033-2EE110 5.46 TiB V9GJ8ZNL
  • HGST HUS726T6TALE6L4 5.46 TiB WSD4FX8G
  • ST6000NE000-2KR101 5.46 TiB
  • SPCC Solid State Disk 55.9 GiB P1702615000000001465

If I am finding the correct info for the motherboard, it shows Realtek networking and four SATA II connectors.

How are your drives connected? What is your drive layout and usage? I’m guessing the SanDisk SSD is the boot drive and the four conventional hard drives are your pool, so what is the 56 GB solid state disk for?


PSU? Pool layout? Run memtest86+. Board in question.

It won’t let me upload a screenshot of my pool config, so here is the text output:


  pool: mt-fspool
 state: ONLINE
  scan: scrub repaired 0B in 05:04:11 with 0 errors on Sun Oct 13 05:04:12 2024
config:

        NAME        STATE     READ WRITE CKSUM
        mt-fspool   ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
root@mt-nas[~]#
root@mt-nas[~]# zfs list
NAME                                                         USED  AVAIL     REFER  MOUNTPOINT
boot-pool                                                   16.8G  21.4G       24K  none
boot-pool/.system                                           80.9M  21.4G     2.96M  legacy
boot-pool/.system/configs-a617fea1951f4cf0a2768ef9d432a087  50.5M  21.4G     50.5M  legacy
boot-pool/.system/cores                                       24K  1024M       24K  legacy
boot-pool/.system/rrd-a617fea1951f4cf0a2768ef9d432a087      21.6M  21.4G     21.6M  legacy
boot-pool/.system/samba4                                    2.48M  21.4G      114K  legacy
boot-pool/.system/services                                    24K  21.4G       24K  legacy
boot-pool/.system/syslog-a617fea1951f4cf0a2768ef9d432a087   3.31M  21.4G     3.31M  legacy
boot-pool/.system/webui                                       24K  21.4G       24K  legacy
boot-pool/ROOT                                              16.7G  21.4G       24K  none
boot-pool/ROOT/13.0-RELEASE                                  205K  21.4G     1.28G  /
boot-pool/ROOT/13.0-U1                                       174K  21.4G     1.28G  /
boot-pool/ROOT/13.0-U1.1                                     170K  21.4G     1.28G  /
boot-pool/ROOT/13.0-U2                                       166K  21.4G     1.28G  /
boot-pool/ROOT/13.0-U3                                       173K  21.4G     1.29G  /
boot-pool/ROOT/13.0-U3.1                                     173K  21.4G     1.29G  /
boot-pool/ROOT/13.0-U4                                       166K  21.4G     1.29G  /
boot-pool/ROOT/13.0-U5.1                                     172K  21.4G     1.29G  /
boot-pool/ROOT/13.0-U5.3                                     179K  21.4G     1.29G  /
boot-pool/ROOT/13.0-U6                                       194K  21.4G     1.29G  /
boot-pool/ROOT/13.0-U6.1                                     178K  21.4G     1.29G  /
boot-pool/ROOT/13.0-U6.2                                    16.7G  21.4G     1.29G  /
boot-pool/ROOT/Initial-Install                                 1K  21.4G     1.20G  legacy
boot-pool/ROOT/default                                       186K  21.4G     1.20G  legacy
mt-fspool                                                   4.26T  6.18T      250K  /mnt/mt-fspool
mt-fspool/FS1                                               4.25T  6.18T     4.25T  /mnt/mt-fspool/FS1
mt-fspool/iocage                                            9.90G  6.18T     9.22M  /mnt/mt-fspool/iocage
mt-fspool/iocage/download                                   1.06G  6.18T      151K  /mnt/mt-fspool/iocage/download
mt-fspool/iocage/download/12.3-RELEASE                       403M  6.18T      403M  /mnt/mt-fspool/iocage/download/12.3-RELEASE
mt-fspool/iocage/download/13.0-RELEASE                       248M  6.18T      248M  /mnt/mt-fspool/iocage/download/13.0-RELEASE
mt-fspool/iocage/download/13.1-RELEASE                       434M  6.18T      434M  /mnt/mt-fspool/iocage/download/13.1-RELEASE
mt-fspool/iocage/images                                      140K  6.18T      140K  /mnt/mt-fspool/iocage/images
mt-fspool/iocage/jails                                      4.54G  6.18T      140K  /mnt/mt-fspool/iocage/jails
mt-fspool/iocage/jails/AVJail                               4.54G  6.18T      442K  /mnt/mt-fspool/iocage/jails/AVJail
mt-fspool/iocage/jails/AVJail/root                          4.54G  6.18T     1.70G  /mnt/mt-fspool/iocage/jails/AVJail/root
mt-fspool/iocage/log                                         407K  6.18T      145K  /mnt/mt-fspool/iocage/log
mt-fspool/iocage/releases                                   4.28G  6.18T      151K  /mnt/mt-fspool/iocage/releases
mt-fspool/iocage/releases/12.3-RELEASE                      1.70G  6.18T      140K  /mnt/mt-fspool/iocage/releases/12.3-RELEASE
mt-fspool/iocage/releases/12.3-RELEASE/root                 1.70G  6.18T     1.70G  /mnt/mt-fspool/iocage/releases/12.3-RELEASE/root
mt-fspool/iocage/releases/13.0-RELEASE                       793M  6.18T      140K  /mnt/mt-fspool/iocage/releases/13.0-RELEASE
mt-fspool/iocage/releases/13.0-RELEASE/root                  793M  6.18T      792M  /mnt/mt-fspool/iocage/releases/13.0-RELEASE/root
mt-fspool/iocage/releases/13.1-RELEASE                      1.81G  6.18T

You are correct, the SSD is the boot drive.
The traditional HDDs connected in the pool are 6 TB WD 7200 RPM drives.
The 56 GB solid state disk is for cache.

Read or write cache? You really don’t have enough RAM to use L2ARC, as it just eats into the memory available for regular ARC use. Usually at least 32 to 64 GB of RAM is the starting point before adding L2ARC, if it’s needed at all.

See the ZFS Read Cache section:
https://www.truenas.com/docs/references/zfsprimer/
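
For a rough sense of why L2ARC costs RAM: every block cached on the device needs an in-memory header, and figures of roughly 70 to 180 bytes per record are commonly cited depending on the OpenZFS version. A back-of-envelope sketch (the block size and header size here are assumptions, not measurements from your pool):

  # RAM consumed by L2ARC headers for a 32 GB cache device,
  # assuming 16 KiB average blocks and a pessimistic 180-byte header:
  L2SIZE=$((32 * 1024 * 1024 * 1024))
  BLOCK=$((16 * 1024))
  HDR=180
  echo "$((L2SIZE / BLOCK * HDR / 1024 / 1024)) MiB of ARC spent on L2ARC headers"

That works out to a few hundred MiB taken away from an ARC that is already small on a 16 GB machine.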


OK, I think you may be on to something. I have to dig back into how I initially set it up: the 56 GB solid state disk is my boot drive, and I have a separate 32 GB solid state disk that I set up as a cache drive. Since I have an older motherboard, the maximum amount of RAM I can add is 16 GB. Right now the ratio for the ARC size is set to 29:1, so 14.4 GB. I wonder whether, during high reads, it simply runs out of RAM and the system hangs. Is there a way to lower that threshold?
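
To answer the literal question first: yes, the ARC ceiling on CORE is the vfs.zfs.arc_max sysctl, and it can be lowered from the shell or via System → Tunables in the web UI. A minimal sketch, assuming a root shell on CORE 13, with 8 GiB as an example value:

  # Current ceiling and current ARC size, in bytes:
  sysctl vfs.zfs.arc_max
  sysctl kstat.zfs.misc.arcstats.size

  # Lower the ceiling to 8 GiB for this boot; add it as a sysctl
  # Tunable in the web UI to persist it across reboots:
  sysctl vfs.zfs.arc_max=8589934592

But that treats the symptom rather than the cause.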

Just remove the L2ARC device. The system will then use just RAM and the plain ARC for file caching. Start plain and simple for now. You can watch your ARC hit stats in the web UI over time and see how you fare.
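
A minimal sketch of the removal from the shell; ada0 below is a placeholder for whatever device your zpool status lists under the cache heading (the same thing can be done from the Pool Status page in the web UI):

  # Find the device listed under the "cache" heading:
  zpool status mt-fspool

  # Detach it from the pool; cache vdevs can be removed at any time
  # without risk to the pool's data:
  zpool remove mt-fspool ada0

  # ARC hit/miss counters, for watching alongside the web UI graphs:
  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses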


OK, thank you. I will try that and start running a sync process that typically crashes the machine.

Thank you very much for the advice.

Yeah, I see that now that the process is running, the ZFS cache in RAM is up to 7.9 GB, with 3.4 GB free and 4.1 GB in services.

Yeah, it’s maxing out the cache at 10.3 GB now, so I can see how this was the problem if the cache was allowed to grow to 14.4 GB.

You fiddled with the default value?

No, I just monitored the memory usage in the TrueNAS CORE UI while performing the large read tasks that would typically freeze the system. After removing the physical cache drive, the ZFS cache memory usage would climb to 10.3 GB and then max out, leaving 0.3-0.5 GB of system memory free, with the rest used by services. So I am sure that the L2ARC drive cache, set to a 14.4 GB maximum, was running the system out of memory.
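
For anyone who wants to watch this outside the UI, the same numbers can be sampled from a CORE shell. A quick sketch, using the kstat names FreeBSD exposes:

  # Sample ARC size against its ceiling every 10 seconds while the sync runs:
  while true; do
      sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
      top -b | grep -E '^(Mem|ARC):'
      sleep 10
  done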

I guess the next question is: do I restore the cache drive and cap the memory usage to 8 GB?

No. You should have at least 64 GB of RAM before using L2ARC.
