Attribute 202: Percentage Lifetime Remaining (Percentage Lifetime Used on PCIe) This attribute is exactly as its name implies. It is a measure of how much of the drive’s projected lifetime is remaining at any point in time. When the SSD is brand new, Attribute 202 will report “100”, and when its specified lifetime has been reached, it will show “0,” reporting that 0 percent of the lifetime remains.
This attribute is also presented as “Percentage Lifetime Used” on certain legacy Crucial SSDs, as well as NVMe models, and works similarly to Lifetime Remaining, only in reverse. The new SSD’s Attribute 202 will report “0”, and when its specified lifetime has been reached, it will show “100,” reporting that 100 percent of the lifetime has been used. On these models, the percentage can exceed 100 as more write operations are done, but the data retention concerns are the same.
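As a quick way to check which convention a given drive uses (the device names below are placeholders, substitute your own), something like:

smartctl -A /dev/sda | grep -i lifetime              # SATA: shows attribute 202 as Remain or Used
smartctl -a /dev/nvme0 | grep -i 'percentage used'   # NVMe: the health log reports Percentage Used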
As you can see from the serial above, this drive was manufactured in 2025/04, and SMART reports the "202 Percent_Lifetime_Remain" value as "0", which would indicate, according to their website, that the drive has no life left. However, it makes more sense that this drive is reporting "0" because it is a new drive, which it is. Is this drive using the wrong descriptor for field 202, or am I losing my mind?
Anyone use Crucial SSDs in their pools? Has anyone figured this out?
Please post the entire output of smartctl -x /dev/??? where ??? = the Device ID. Include the command you typed as well; it means I don't have to assume you ran it properly. It is easy for me to mean one thing and for you to read it and interpret it differently.
Place that output into the preformatted text </> icon above when you respond. I need to see the entire piece of data to make a proper call on this.
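If you are not sure of the Device ID, something along these lines should find it (sda below is just an example placeholder):

smartctl --scan          # lists candidate devices and their interface types
smartctl -x /dev/sda     # substitute your actual Device ID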
I just replaced my boot drives, which had sub-10% life remaining, with Crucial CT500s. The '100' values will decrease as life goes down (that is how it worked on my WD Blues). Here is my output for the brand-new Crucials that I hot-swapped in 5 minutes ago, after updating the firmware:
@FrankWard Was that output generated using -x and not -a?
I ask because it seems to be a bit short.
You also did not include the command you typed for the SMART data, which I did ask for. It's the little things that matter.
Off the cuff, here is what I see with the data provided:
ID 5 = 0
ID 181 = 45
ID 202 = 0
Items 1 and 2 tell me that no reserved blocks of memory have been used. If you were really near zero for ID 202, I would expect to see some movement on the other two values.
You have not run any SMART tests. Since it is an SSD, run smartctl -t long /dev/???
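For example (sdb is a placeholder for your Device ID):

smartctl -t long /dev/sdb       # starts the extended self-test in the background
smartctl -l selftest /dev/sdb   # view the result after the recommended polling time has passed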
If you think you bought a fake drive or you need to report the software issue, you should reach out to Crucial.
No, once again: once the value starts going down, there will be a hex value generated at the end. The actual 100 100 will change as the remaining life goes down. See the example with the WD Blue SSD.
That is true. You have the VALUE, WORST, and THRESH:
VALUE = the current normalized value
WORST = the lowest value recorded, in this specific case
THRESH = the value at or below which things are considered bad news.
All of that indicates that the RAW value should reflect the WORST value, assuming the same scale. This value could be inverted, but Percent Life Remaining should be a countdown in the RAW value as well. That is just what I have observed.
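If you want to watch just that attribute over time, a one-liner along these lines works (sdb is a placeholder; the awk fields match the columns of smartctl's attribute table):

smartctl -A /dev/sdb | awk '$1 == 202 {print "VALUE="$4, "WORST="$5, "THRESH="$6, "RAW="$8}'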
Nope, your drive is not dead, but it also should properly display a RAW value of 98, not 0 or 2. I am certain there is a firmware issue; it may not be an operational problem, but it is a strange indication.
I’m not sure if there is a firmware update for the drive in question but I’d at least contact customer support and raise the question. You might see new replacement drives show up at your door.
Agreed, but how the Raw Value is reported is also questionable. Just look at my output from WD's Raw Value. Wtf does that hex translate to, and how does it correlate to '009'? If I translate it to decimal, I get a random string of numbers that, I guess, could maybe represent individual wear levels of NAND groups? That is my best guess, at least…
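For what it's worth, one common pattern is a vendor packing several 16-bit counters into the 6-byte raw field. Purely as a hypothetical illustration (this value is made up, not the actual WD raw):

raw=$(( 0x000900090009 ))   # made-up packed value, for illustration only
printf '%d %d %d\n' $(( raw & 0xFFFF )) $(( (raw >> 16) & 0xFFFF )) $(( (raw >> 32) & 0xFFFF ))
# prints: 9 9 9, three independent 16-bit fields, which would line up with a '009'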
Anyway, that argument is to point out that it may be too early to see how the RAW Value gets reported vs VALUE/WORST.
Sadly, I did update the firmware on both drives prior to deployment. But in short, I'm arguing that this may be a 'feature', not a flaw.
I vote flaw. I don’t think the drive is bad and going to die soon.
Regardless, I still feel the OP should bring it up to the company. I did not see a firmware update, but the contact may trigger one. If they renamed the attribute from "Remain" to "Used", that would make it technically correct as well.
This is why I’m curious what the extended data shows.
That is a good guess. Some mysteries will never be solved. But if you want a better answer, you need to examine all the SMART data; that one piece alone is not enough for me. It is likely some algorithm, and I doubt it can be converted directly into something we can understand.
My guess (since you know I want to give it) is that it is the wear level, based on how many erase cycles each block has performed. Again, some algorithm where a value of 1 may mean only 100 full erase cycles remain, as an example.
OR, it is a simple countdown based on TBW. Once you hit the TBW value the drive is warrantied for, it counts down to "0". Zero does not mean failure, but it could, depending on the manufacturer. Hey, they want you to buy more to pay the bills.
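As a back-of-the-envelope illustration of the TBW theory (the numbers here are made up, not taken from any drive's datasheet):

# hypothetical: a drive rated for 80 TBW that has written 8 TB so far
echo 'scale=2; 8 / 80 * 100' | bc -l    # prints 10.00, i.e. 10% of rated endurance used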
Sadly, I only saved the one line, since it was the only one that seemed important at the time.
That being said, I follow your logic in regards to the Raw Value, and also that the drive isn't in immediate danger. I guess I was arguing because I'm in the same boat and I've got too much going on to write to Crucial about a consumer drive. Enterprise generally gets much less… basic responses.
I take this little banter as two people sharing different views, nothing more.
I recall my first Crucial firmware issue… it was many years ago (very many), with my first SSD. Thankfully, I purchased it about 6 months after it was released, as the problem with it was fixed shortly after.
The problem… after the power-on time reached a specific value (I don't recall what), the drive would become write-protected. You would have to cycle power to the drive to get 1 more hour of use out of it. This problem affected a lot of SSDs. Crucial generated a firmware fix; you had to cycle power and then apply the fix.
That was the first and last Crucial SSD I have ever purchased. I think it was an MX500, something like that. I may still have it laying around.
Well time for this person to relax for the evening and get off the computer.
I feel it is the controller, not the flash itself. The drive will barf every now and then, but power cycling revives it. It could also be that ZFS puts more load on the ~8-year-old consumer SSD. You never know.
Thanks for all the replies. I think the drive is fine, I was just confused about the RAW value being 0 and counting up when the SMART attribute is Percent_Lifetime_Remain.
Hey Joe…
smartctl -x /dev/sdb
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.32-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Crucial/Micron Client SSDs
Device Model: CT240BX500SSD1
Serial Number: 2504E9A19BC0
LU WWN Device Id: 5 00a075 1e9a19bc0
Firmware Version: M6CR056
User Capacity: 240,057,409,536 bytes [240 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
TRIM Command: Available
Device is: In smartctl database 7.3/5528
ATA Version is: ACS-3 T13/2161-D revision 4
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Jun 4 09:03:51 2025 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM feature is: Unavailable
Rd look-ahead is: Enabled
Write cache is: Enabled
DSN feature is: Unavailable
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Unavailable
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 120) seconds.
Offline data collection
capabilities: (0x11) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0002) Does not save SMART data before
entering power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 10) minutes.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate POSR-K 100 100 000 - 0
5 Reallocate_NAND_Blk_Cnt -O--CK 100 100 010 - 0
9 Power_On_Hours -O--CK 100 100 000 - 243
12 Power_Cycle_Count -O--CK 100 100 000 - 5
171 Program_Fail_Count -O--CK 100 100 000 - 0
172 Erase_Fail_Count -O--CK 100 100 000 - 0
173 Ave_Block-Erase_Count -O--CK 100 100 000 - 1
174 Unexpect_Power_Loss_Ct -O--CK 100 100 000 - 4
180 Unused_Reserve_NAND_Blk PO--CK 100 100 000 - 45
183 SATA_Interfac_Downshift -O--CK 100 100 000 - 0
184 Error_Correction_Count -O--CK 100 100 000 - 0
187 Reported_Uncorrect -O--CK 100 100 000 - 0
194 Temperature_Celsius -O---K 060 054 000 - 40 (Min/Max 25/46)
196 Reallocated_Event_Count -O--CK 100 100 000 - 0
197 Current_Pending_ECC_Cnt -O--CK 100 100 000 - 0
198 Offline_Uncorrectable ----CK 100 100 000 - 0
199 UDMA_CRC_Error_Count -O--CK 100 100 000 - 0
202 Percent_Lifetime_Remain ----CK 100 100 001 - 0
206 Write_Error_Rate -OSR-- 100 100 000 - 0
210 Success_RAIN_Recov_Cnt -O--CK 100 100 000 - 0
246 Total_LBAs_Written -O--CK 100 100 000 - 186792073
247 Host_Program_Page_Count -O--CK 100 100 000 - 5837252
248 FTL_Program_Page_Count -O--CK 100 100 000 - 2365440
249 Unkn_CrucialMicron_Attr -O--CK 100 100 000 - 0
250 Read_Error_Retry_Rate -O--CK 100 100 000 - 0
251 Unkn_CrucialMicron_Attr -O--CK 100 100 000 - 119913737
252 Unkn_CrucialMicron_Attr -O--CK 100 100 000 - 0
253 Unkn_CrucialMicron_Attr -O--CK 100 100 000 - 0
254 Unkn_CrucialMicron_Attr -O--CK 100 100 000 - 0
223 Unkn_CrucialMicron_Attr -O--CK 100 100 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x24 GPL R/O 88 Current Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
SMART Extended Comprehensive Error Log (GP Log 0x03) not supported
SMART Error Log not supported
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Selective Self-tests/Logging not supported
SCT Commands not supported
Device Statistics (GP/SMART Log 0x04) not supported
Pending Defects log (GP Log 0x0c) not supported
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 4 0 Command failed due to ICRC error
0x0002 4 0 R_ERR response for data FIS
0x0005 4 0 R_ERR response for non-data FIS
0x000a 4 3 Device-to-host register FISes sent due to a COMRESET
Unfortunately, very limited information. Of course, this is a drive from a 2018 design… Guess Crucial didn't like to include other data. Very hard to believe there isn't a firmware update.