High temperatures on two SSDs

Hello, I’m running the RC2 of TrueNAS. I don’t know whether this was present before the upgrade. Anyway, check the output below:

=== /dev/sda ===
Device Model: WD Red SA500 2.5 2TB
194 Temperature_Celsius 0x0032 100 100 000 Old_age Always - 31 (Min/Max 15/54)
=== /dev/sdb ===
Device Model: WDC WDS100T1R0A-68A4W0
194 Temperature_Celsius 0x0022 073 041 --- Old_age Always - 27 (Min/Max 20/41)
=== /dev/sdc ===
Device Model: WD Red SA500 2.5 2TB
194 Temperature_Celsius 0x0032 100 100 000 Old_age Always - 29 (Min/Max 16/51)
=== /dev/sdd ===
Device Model: WD Red SA500 2.5 2TB
194 Temperature_Celsius 0x0032 100 100 000 Old_age Always - 31 (Min/Max 16/51)
=== /dev/sde ===
Device Model: WD Red SA500 2.5 2TB
194 Temperature_Celsius 0x0032 100 100 000 Old_age Always - 29 (Min/Max 17/106)
=== /dev/sdf ===
Device Model: WDC WD40EFAX-68JH4N1
194 Temperature_Celsius 0x0022 117 112 000 Old_age Always - 30
=== /dev/sdg ===
Device Model: WD Red SA500 2.5 2TB
194 Temperature_Celsius 0x0032 100 100 000 Old_age Always - 30 (Min/Max 16/57)
=== /dev/sdh ===
Device Model: WD Red SA500 2.5 2TB
194 Temperature_Celsius 0x0032 100 100 000 Old_age Always - 30 (Min/Max 16/106)
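
For anyone who wants to reproduce this per-drive summary, something like the following loop should work (a rough sketch; adjust the device list to your system):

for d in /dev/sd{a..h}; do
  echo "=== $d ==="
  smartctl -i "$d" | grep "Device Model"        # model line from the identify section
  smartctl -A "$d" | grep Temperature_Celsius   # attribute 194 with its min/max
done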

As you can see, I’m using the very same SSD model for my pool (ignore sdf, it is a backup HDD), but two of them are reporting a maximum temperature of 106 °C!

All SSDs are installed in the same holder with a 140 mm fan pointing at them, so I find it hard to believe they really reach that temperature. However, when I was indexing my library with PhotoPrism I had crashes, which I resolved by reducing the workers to 2. Now I have no stability issues (but I don’t actually stress the NAS that much).
Could it be that those two SSDs are faulty, or at least that their sensors are reporting wrong data?

Here’s the full report for one of those SSDs (note this line: “Under/Over Temperature Limit Count: 0/0”, despite the max temperature limit being set to 100 °C):

smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WD Red SA500 2.5 2TB
Serial Number: 2413B54A1808
LU WWN Device Id: 5 001b44 4a55a00ec
Firmware Version: 540400WD
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
TRIM Command: Available, deterministic, zeroed
Device is: Not in smartctl database 7.3/5528
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Fri Oct 18 15:54:52 2024 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM feature is: Disabled
Rd look-ahead is: Enabled
Write cache is: Enabled
DSN feature is: Unavailable
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Unknown
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x80) Offline data collection activity
was never started.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 0) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0002) Does not save SMART data before
entering power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 2) minutes.
Conveyance self-test routine
recommended polling time: ( 3) minutes.
SCT capabilities: (0x0035) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
5 Reallocated_Sector_Ct -O--CK 100 100 000 - 0
9 Power_On_Hours -O--CK 100 100 000 - 118
12 Power_Cycle_Count -O--CK 100 100 000 - 10
165 Unknown_Attribute -O--CK 100 100 000 - 1
166 Unknown_Attribute -O--CK 100 100 000 - 1
167 Unknown_Attribute -O--CK 100 100 000 - 63
168 Unknown_Attribute -O--CK 100 100 000 - 2
170 Unknown_Attribute -O--CK 100 100 000 - 0
171 Unknown_Attribute -O--CK 100 100 000 - 0
172 Unknown_Attribute -O--CK 100 100 000 - 0
173 Unknown_Attribute -O--CK 100 100 000 - 1
174 Unknown_Attribute -O--CK 100 100 000 - 9
187 Reported_Uncorrect -O--CK 100 100 000 - 0
188 Command_Timeout -O--CK 100 100 000 - 0
194 Temperature_Celsius -O--CK 100 100 000 - 30 (Min/Max 16/106)
199 UDMA_CRC_Error_Count -O--CK 100 100 000 - 2
230 Unknown_SSD_Attribute -O--CK 100 100 000 - 0
232 Available_Reservd_Space PO--CK 100 100 001 - 100
233 Media_Wearout_Indicator -O--CK 100 100 000 - 2136
234 Unknown_Attribute -O--CK 100 100 000 - 13
241 Total_LBAs_Written -O--CK 100 100 000 - 342
242 Total_LBAs_Read -O--CK 100 100 000 - 173
244 Unknown_Attribute -O--CK 100 100 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x03 GPL R/O 1 Ext. Comprehensive SMART error log
0x04 GPL,SL R/O 8 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x09 SL R/W 1 Selective self-test log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x24 GPL R/O 88 Current Device Internal Status Data log
0x25 GPL R/O 64 Saved Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer
SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged
SMART Error Log Version: 1
No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Completed [00% left] (0-65535)
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
SCT Version (vendor specific): 1 (0x0001)
Device State: Active (0)
Current Temperature: 30 Celsius
Power Cycle Min/Max Temperature: 21/51 Celsius
Lifetime Min/Max Temperature: 16/106 Celsius
Under/Over Temperature Limit Count: 0/0
SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/100 Celsius
Min/Max Temperature Limit: 0/100 Celsius
Temperature History Size (Index): 128 (31)
Index Estimated Time Temperature Celsius
32 2024-10-18 13:47 31 ************
33 2024-10-18 13:48 30 ***********
34 2024-10-18 13:49 31 ************
35 2024-10-18 13:50 30 ***********
36 2024-10-18 13:51 30 ***********
37 2024-10-18 13:52 31 ************
38 2024-10-18 13:53 31 ************
39 2024-10-18 13:54 30 ***********
40 2024-10-18 13:55 31 ************
41 2024-10-18 13:56 31 ************
42 2024-10-18 13:57 30 ***********
43 2024-10-18 13:58 30 ***********
44 2024-10-18 13:59 30 ***********
45 2024-10-18 14:00 31 ************
46 2024-10-18 14:01 30 ***********
47 2024-10-18 14:02 30 ***********
48 2024-10-18 14:03 31 ************
49 2024-10-18 14:04 31 ************
50 2024-10-18 14:05 30 ***********
51 2024-10-18 14:06 30 ***********
52 2024-10-18 14:07 31 ************
53 2024-10-18 14:08 30 ***********
54 2024-10-18 14:09 31 ************
55 2024-10-18 14:10 31 ************
56 2024-10-18 14:11 30 ***********
57 2024-10-18 14:12 31 ************
58 2024-10-18 14:13 30 ***********
59 2024-10-18 14:14 31 ************
60 2024-10-18 14:15 30 ***********
61 2024-10-18 14:16 31 ************
62 2024-10-18 14:17 31 ************
63 2024-10-18 14:18 30 ***********
64 2024-10-18 14:19 30 ***********
65 2024-10-18 14:20 32 *************
66 2024-10-18 14:21 34 ***************
67 2024-10-18 14:22 31 ************
68 2024-10-18 14:23 35 ****************
69 2024-10-18 14:24 31 ************
70 2024-10-18 14:25 34 ***************
71 2024-10-18 14:26 33 **************
72 2024-10-18 14:27 32 *************
73 2024-10-18 14:28 36 *****************
74 2024-10-18 14:29 32 *************
75 2024-10-18 14:30 34 ***************
76 2024-10-18 14:31 32 *************
77 2024-10-18 14:32 33 **************
78 2024-10-18 14:33 34 ***************
79 2024-10-18 14:34 35 ****************
80 2024-10-18 14:35 41 **********************
81 2024-10-18 14:36 33 **************
82 2024-10-18 14:37 41 **********************
83 2024-10-18 14:38 33 **************
84 2024-10-18 14:39 35 ****************
85 2024-10-18 14:40 33 **************
86 2024-10-18 14:41 32 *************
87 2024-10-18 14:42 32 *************
88 2024-10-18 14:43 31 ************
89 2024-10-18 14:44 30 ***********
90 2024-10-18 14:45 32 *************
91 2024-10-18 14:46 30 ***********
92 2024-10-18 14:47 31 ************
93 2024-10-18 14:48 32 *************
94 2024-10-18 14:49 30 ***********
95 2024-10-18 14:50 30 ***********
96 2024-10-18 14:51 31 ************
97 2024-10-18 14:52 32 *************
98 2024-10-18 14:53 31 ************
99 2024-10-18 14:54 31 ************
100 2024-10-18 14:55 31 ************
101 2024-10-18 14:56 32 *************
102 2024-10-18 14:57 32 *************
103 2024-10-18 14:58 31 ************
104 2024-10-18 14:59 33 **************
105 2024-10-18 15:00 31 ************
106 2024-10-18 15:01 31 ************
107 2024-10-18 15:02 31 ************
108 2024-10-18 15:03 32 *************
109 2024-10-18 15:04 30 ***********
110 2024-10-18 15:05 33 **************
111 2024-10-18 15:06 30 ***********
112 2024-10-18 15:07 30 ***********
113 2024-10-18 15:08 32 *************
114 2024-10-18 15:09 31 ************
115 2024-10-18 15:10 32 *************
116 2024-10-18 15:11 35 ****************
117 2024-10-18 15:12 32 *************
118 2024-10-18 15:13 32 *************
119 2024-10-18 15:14 31 ************
120 2024-10-18 15:15 32 *************
121 2024-10-18 15:16 31 ************
122 2024-10-18 15:17 31 ************
123 2024-10-18 15:18 31 ************
124 2024-10-18 15:19 30 ***********
125 2024-10-18 15:20 31 ************
126 2024-10-18 15:21 31 ************
127 2024-10-18 15:22 31 ************
0 2024-10-18 15:23 30 ***********
1 2024-10-18 15:24 31 ************
2 2024-10-18 15:25 30 ***********
3 2024-10-18 15:26 31 ************
4 2024-10-18 15:27 31 ************
5 2024-10-18 15:28 31 ************
6 2024-10-18 15:29 30 ***********
7 2024-10-18 15:30 31 ************
... ...( 3 skipped). ... ************
11 2024-10-18 15:34 31 ************
12 2024-10-18 15:35 30 ***********
13 2024-10-18 15:36 31 ************
14 2024-10-18 15:37 30 ***********
... ...( 3 skipped). ... ***********
18 2024-10-18 15:41 30 ***********
19 2024-10-18 15:42 31 ************
20 2024-10-18 15:43 30 ***********
21 2024-10-18 15:44 31 ************
22 2024-10-18 15:45 31 ************
23 2024-10-18 15:46 31 ************
24 2024-10-18 15:47 30 ***********
25 2024-10-18 15:48 30 ***********
26 2024-10-18 15:49 31 ************
27 2024-10-18 15:50 31 ************
28 2024-10-18 15:51 31 ************
29 2024-10-18 15:52 30 ***********
30 2024-10-18 15:53 31 ************
31 2024-10-18 15:54 31 ************
SCT Error Recovery Control command not supported
Device Statistics (GP Log 0x04)
Page Offset Size Value Flags Description
0x01 ===== = = === == General Statistics (rev 1) ==
0x01 0x008 4 10 --- Lifetime Power-On Resets
0x01 0x010 4 118 --- Power-on Hours
0x01 0x018 6 718443048 --- Logical Sectors Written
0x01 0x020 6 8115699 --- Number of Write Commands
0x01 0x028 6 364104578 --- Logical Sectors Read
0x01 0x030 6 3942405 --- Number of Read Commands
0x01 0x038 6 426795407 --- Date and Time TimeStamp
0x04 ===== = = === == General Errors Statistics (rev 1) ==
0x04 0x008 4 0 --- Number of Reported Uncorrectable Errors
0x04 0x010 4 11 --- Resets Between Cmd Acceptance and Completion
0x05 ===== = = === == Temperature Statistics (rev 1) ==
0x05 0x008 1 31 --- Current Temperature
0x05 0x010 1 31 --- Average Short Term Temperature
0x05 0x018 1 31 --- Average Long Term Temperature
0x05 0x020 1 106 --- Highest Temperature
0x05 0x028 1 16 --- Lowest Temperature
0x05 0x030 1 35 --- Highest Average Short Term Temperature
0x05 0x038 1 0 --- Lowest Average Short Term Temperature
0x05 0x040 1 35 --- Highest Average Long Term Temperature
0x05 0x048 1 0 --- Lowest Average Long Term Temperature
0x05 0x050 4 0 --- Time in Over-Temperature
0x05 0x058 1 65 --- Specified Maximum Operating Temperature
0x05 0x060 4 0 --- Time in Under-Temperature
0x05 0x068 1 0 --- Specified Minimum Operating Temperature
0x06 ===== = = === == Transport Statistics (rev 1) ==
0x06 0x008 4 1070 --- Number of Hardware Resets
0x06 0x010 4 8 --- Number of ASR Events
0x06 0x018 4 2 --- Number of Interface CRC Errors
0x07 ===== = = === == Solid State Device Statistics (rev 1) ==
0x07 0x008 1 0 --- Percentage Used Endurance Indicator
|||_ C monitored condition met
||__ D supports DSN
|___ N normalized value
Pending Defects log (GP Log 0x0c) not supported
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 0 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 6 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
0x000f 2 0 R_ERR response for host-to-device data FIS, CRC
0x0010 2 0 R_ERR response for host-to-device data FIS, non-CRC
0x0012 2 0 R_ERR response for host-to-device non-data FIS, CRC
0x0013 2 0 R_ERR response for host-to-device non-data FIS, non-CRC
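
By the way, if you only need the temperature section rather than this whole report, smartctl can dump just the SCT data; a quick sketch (substitute the actual device for /dev/sdX):

smartctl -l scttempsts /dev/sdX    # SCT status: current temp, lifetime min/max, limit counters
smartctl -l scttemphist /dev/sdX   # the drive's minute-by-minute temperature history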

Thank you all!

I’m new here, but have decades of computer experience…

SSDs are supposed to be installed in a cooling frame. Pushing current through the electronics inside an SSD always produces heat as a by-product, so the more the SSD is used, the warmer it gets.

Just having fans pointed toward SSDs isn’t good enough; they need cooling fins to dissipate all that heat. If you don’t have them in a cooling frame, then seriously, put those SSDs in a cooling frame. I haven’t seen them for a while, but Western Digital used to make a top-notch aluminum SSD frame that dissipated heat better than any other device I’ve seen so far. That’s for 2.5" SSDs. There are a bunch of creative cooling solutions available for M.2-type drives, even some with cooling fins and heat pipes…

Cooling fins and/or enclosures should keep the heat down. 100 °C is definitely too hot…

Thanks, I agree with you, but then why do only two disks behave like this?

If an SSD isn’t getting used, it’s not generating heat. If there’s heat, the drive is getting hammered (or lacks sufficient cooling)… Are they cache drives? You get the idea…
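
You could check with something like this; a quick sketch, assuming a pool named tank (substitute your own pool name):

zpool status tank        # lists the data, cache and log vdevs and the devices behind them
zpool iostat -v tank 5   # per-device I/O every 5 seconds, to see which disks do the work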

What is this “holder”?

Here is a thread I made on cooling M.2 SSDs

This is a good SSD cooling tray. I believe this one was made by Western Digital a few years ago. It’s in an actively working system now, and temps are always cool to the touch. This is a well-maintained system that’s cleaned, vacuumed and blown out with air regularly…

All the 2 TB SSDs are in a RAIDZ2 pool with NO cache; that’s why I was wondering. But I must admit I don’t know how ZFS works in detail; maybe those two disks are responsible for something in particular.
Long story short, I will use my thermal camera and check.

Thanks

It’s a cage, actually; one fan points directly at it, another pulls air away.
But the case of the SSD is plastic, so let’s see what I can do :wink:

Strange problem. What temperatures are you seeing?

If you look at the data at the top of this thread, it shows temps over 100 °C several times, including up to 106 °C! Very unhealthy for SSDs…

I wonder if you can take a look at where those hot drives sit in the stack. One or both of the hotties may be in an area with less airflow. Map it out.
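
Something like this can help with the mapping; the serial numbers in the output should match the labels on the physical drives:

lsblk -d -o NAME,MODEL,SERIAL   # one line per whole disk, with model and serial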

I’d also suggest taking a look at the stack with an IR camera (many libraries now rent them out) to see what the actual heat distribution is.

I had massive issues with the top two drives in my MiniXL, since the 8-high bay provided very little opportunity for cooling and only one fan was dedicated to pulling air through the little space between the drives. Other users have also mentioned similar issues with that hardware.

However, 106 °C sounds extreme. Hence the suggestion to review with a camera, or if that is out of reach, an IR meter like mechanics use to gauge surface temperature.

Will do exactly that in one week. Will let you know

Also, verify that the fans in question actually work, if you haven’t already done so.

Over 100 degrees is toasty. Do you have access to an IR camera or a temperature-measuring tool to double-check whether it’s a reporting issue?

Fans work fine, yes. As I mentioned, I will test with my thermal camera, but only in a week.

Touch them to verify they’re hot and not mis-reporting temps.


SSDs by themselves don’t shed heat well. They’re just a plastic and alloy shell protecting the chips and circuit board inside. The case is used for cooling, but on its own it is rarely enough. You need either a cooling enclosure or an active cooling solution. At the very least, get some heat-dissipating fins and put them on the underside of the SSD; the underside is where most of the chips are bonded to the case with cooling pads…

Alright, sorry for the delay, but I had to make sure the finding was correct.
Let’s keep it short:
After analyzing each disk, I found that the two hitting 100 °C+ were indeed the same model as the others, but with a different manufacturing date.
I unplugged both disks, ran some tests on Windows, and saw random issues like drops in speed due to high temperature (monitored with the thermal camera).
I RMA’d both units and got two replacements. Now I don’t see those temps anymore :slight_smile:

One interesting thing: the top part of the SSD shell is actually metal, but there is NO thermal pad between the controller and the shell. An odd design choice?

One last question: why do I see those temps with the smartctl command while TN never shows anything close to those values? Is it just a difference in polling rate?
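
For comparison, one could poll at the same 1-minute rate as the drive’s internal SCT log and see whether the UI just samples less often; a rough sketch, assuming the standard -A column layout shown at the top of the thread (short spikes between UI polls would only show up in SMART):

while true; do
  printf '%s ' "$(date +%H:%M:%S)"
  smartctl -A /dev/sde | awk '/Temperature_Celsius/ {print $10}'   # column 10 is the raw temperature
  sleep 60
done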


Funnily enough, there’s still something VERY wrong with this system. I just tried to move about 100 GB of files within the same dataset (literally rsync -a --no-perms DS1/Folder1/ DS1/folder2) and it crashed, again.

I’m just a bit confused.

For those still interested in this topic: I finally found the problem.
Don’t ask me why (I have an open case with ASUS), but my RAM was running overclocked. It was overclocked even during memtest, yet stable there.
I switched to manual mode, forced the native speed, and now TrueNAS doesn’t crash anymore (or at least I think so; moving, copying and indexing all work perfectly).
I reverted back to “auto”, and it crashed during a copy…

Well… I guess that was the problem then?!


I’m glad you were able to find the source of the problem, as overclocked memory (as delivered by an OEM!) wouldn’t have been on my list of suspects either. It makes me wonder whether other aspects are also overclocked. I’d carefully go over every BIOS setting to see where a manual setting makes more sense than AUTO.
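
You can also sanity-check what the memory is actually running at from inside the OS, without rebooting into the BIOS; a minimal sketch (run as root):

dmidecode -t memory | grep -i speed   # rated "Speed" vs. "Configured Memory Speed" per DIMM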