Questions that bug me about temps

Hello folks ^^)
there is something that doesn't seem to be working on my SCALE NAS since the update to 24.10.1 (and still with 24.10.2)

the per-core CPU temperature widget stays desperately empty while the CPU usage one works fine…

and in the reports I can't see any HDD temps either…

and, lastly, none of the HDDs answer SMART queries anymore, so I'm constantly nagged by warnings… trying to run the tests manually fails too.
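
(for reference, a manual test from the shell would normally be something like the two commands below; this is only a sketch, and /dev/sda is a placeholder for one of the drives)

# show SMART health, attributes and the self-test log for one drive
sudo smartctl -a /dev/sda
# start a short SMART self-test on that drive
sudo smartctl -t short /dev/sda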

besides the two updates (I installed the latest one to see if it would correct all these errors) and two HDDs that are still being resilvered/replaced (looooooooooooooooooooooong, see my other post on this here), no changes have been made to the hardware (controller or anything else)…

why does this not work since the last two SCALE updates???

How can I correct this (and can it be corrected at all)?

Thanks in advance :slight_smile:
Jeff

Read the Release Notes on the 'CPU' issue. Is it the intentional change mentioned there? Adding screenshots might help, as we can't see what you see. If it is a bug, report it using the GUI: the smiley icon on the upper right is for Feedback / Report A Bug.

Adding the link:
Simplify CPU widget logic to fix reporting issues for CPUs that have performance and efficiency cores (NAS-133128).

Hello :slight_smile: Thanks for your answer; here are some screenshots showing the problems I mentioned:

I don't have the same problem as the one in the link you provided.
It's just that all of this worked fine with SCALE versions before Electric Eel…
no modifications to the NAS hardware in the meantime, only the SCALE upgrade.

Firefox ESR v128.7.0.

If necessary I will file a bug report.

Jeff

I would try a bug report. You could also try booting into your BIOS and seeing whether temperature data is reported there. You could also run a hardware test, if your BIOS has that feature.

You could also try making a backup of the current configuration file, doing a fresh install of the current SCALE, and reloading the configuration.

Hello :slight_smile:
well, at the moment it is still replacing/resilvering two faulty disks, and since it has been doing so for nearly a month now I am a little reluctant to reboot it yet another time…

the motherboard BIOS is set to SMART-test the disks, and hot-swapping is also enabled for all mobo SATA ports.

the last time I looked at the BIOS (last week, after updating to v24.10.2) all temps were correctly reported by it, but not a single one has been shown by SCALE since the v24.10.1 update, as shown in the screenshots.

Btw, how many years will it take to replace two faulty 9.1 TiB disks (10 TB in 'thieves' units) with two new ones??? nearly a month of running to do so seems a little toooo long to me… adding a disk and expanding the usable space took less than 5 days, way faster than replacing these faulty disks…
each time it says 'resilver finished' it starts over and does it again somewhere else.

what I also don't really understand is that the faulty disks are still set up and running while the replacement ones are regularly stopped and made unavailable… why the heck??? shouldn't it be the reverse: the bad ones kicked out and the new ones taking their places?

I tried doing the replacement as if the faulty disks were completely dead and no longer powering on at all, putting two new ones in their places, as would be possible on a RAID array where the array is rebuilt from the metadata, BUT in that case SCALE refuses to load the zpool and does nothing at all! what's the problem there???
is ZFS not able to act like a real hardware RAID controller card?
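
(for context, as far as I understand it ZFS does not rebuild onto whatever disk happens to appear in the old slot the way a hardware RAID controller would; you have to tell the pool explicitly which member to replace. From the shell that would look roughly like the sketch below, where <poolname>, <old-member> and /dev/sdX are placeholders; on TrueNAS the GUI 'Replace' action on the pool member is the normal way to do this rather than raw zpool commands)

# take the failed member offline (identified by the GUID/partition UUID shown in zpool status)
sudo zpool offline <poolname> <old-member>
# attach the new blank disk in its place and start the resilver
sudo zpool replace <poolname> <old-member> /dev/sdX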

lotsa questions/concerns there, hope that's not too much ^^)

Your pool resilver is a bit odd. Let's look at your hard drive models and the pools.
You may want to change the title to include drive resilvering or something like that, to try to get more views. You can ask a Mod, if necessary.

Can you please open a shell, run the following commands, and post the results for each command in a separate <> box:

lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
sudo zpool status -v
sudo zpool list

This will give us hard facts about your drives and pools, which will allow us to give you precise advice (and even commands to run) to help you achieve your goal.

Well, here it is:

elNas[~]$ sudo zpool status -v
  pool: ****elZ2
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Feb 19 03:05:57 2025
        94.4T / 118T scanned at 1.66G/s, 93.2T / 117T issued at 1.64G/s
        2.66T resilvered, 79.46% done, 04:10:56 to go
expand: expanded raidz2-0 copied 111T in 4 days 10:00:04, on Mon Jan  6 05:44:54 2025
config:

        NAME                                        STATE     READ WRITE CKSUM
        ***elZ2                                     DEGRADED     0     0     0
          raidz2-0                                  DEGRADED     0     0     0
            aab036a6-7e2b-4632-b16d-cd7866845e5e    ONLINE       0     0     0
            7a8f8da1-836b-4a4c-9eaa-2aa29da2beb7    ONLINE       0     0     0
            a10d9ac9-ef2c-4ede-9e6b-eba01e901361    ONLINE       0     0     0
            9f2dce88-cde7-4923-b0bb-bf4de890ea52    ONLINE       0     0     0
            b74d4b85-0436-4814-8515-b405d7e40e24    ONLINE       0     0     0
            replacing-5                             ONLINE       0     0     0
              24e3a0a0-411b-4174-85e4-83eb48744f9f  ONLINE       0     0     0
              96d445ec-e8b2-42ce-8f1a-63ca963eead9  ONLINE       0     0     0  (resilvering)
            replacing-6                             DEGRADED     0     0    15
              aa40633b-42bc-41f7-b8c2-555530227841  DEGRADED     0     0     0  too many errors
              b4dbc538-a334-417c-9b6a-7265693bc710  ONLINE       0     0     0  (resilvering)
            fd7f4ef7-237a-48c0-8a72-bebaf1cd3fb5    ONLINE       0     0     0
            83e62634-a0a5-4c2f-abb8-10b763833042    ONLINE       0     0     0
            ea3b911a-d478-4341-b4cd-4549385fcdde    ONLINE       0     0     0  (resilvering)
            52c0cc19-1576-4186-b41f-8ee005e7ebaf    ONLINE       0     0     0
            ee9542ae-6266-4615-bcaa-6a0f06dafb87    ONLINE       0     0     0
            0b764a40-fd5d-4631-922d-625178717347    ONLINE       0     0     0
            e0df314c-cc1e-460e-9591-842264306d5b    ONLINE       0     0     0
            af112e1e-211c-4061-a0bf-4e7f943903ca    ONLINE       0     0     0
            c2c7b568-7eff-4f85-998b-eb4e1ac51897    ONLINE       0     0     0
        cache
          325bad3f-5323-47be-9736-e06e1f30603b      ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:05:36 with 0 errors on Mon Feb 17 03:50:38 2025
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdt3      ONLINE       0     0     0

errors: No known data errors
elNas[~]$ 

elNas[~]$ lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID

NAME   MODEL                    ROTA PTTYPE TYPE    START           SIZE PARTTYPENAME             PARTUUID
sda    HUH721010ALE601             1 gpt    disk          10000831348736                          
└─sda1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS aab036a6-7e2b-4632-b16d-cd7866845e5e
sdb    HUH721010ALE601             1 gpt    disk          10000831348736                          
└─sdb1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS a10d9ac9-ef2c-4ede-9e6b-eba01e901361
sdc    HUH721010ALE601             1 gpt    disk          10000831348736                          
└─sdc1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS b74d4b85-0436-4814-8515-b405d7e40e24
sdd    ST10000NE0008-2PL103        1 gpt    disk          10000831348736                          
└─sdd1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS 9f2dce88-cde7-4923-b0bb-bf4de890ea52
sde    ST10000NE0008-2PL103        1 gpt    disk          10000831348736                          
└─sde1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS 7a8f8da1-836b-4a4c-9eaa-2aa29da2beb7
sdf    ST10000NM0046               1 gpt    disk          10009952870400                          
└─sdf1                             1 gpt    part     2048 10009950814208 Solaris /usr & Apple ZFS b4dbc538-a334-417c-9b6a-7265693bc710
sdg    HUH721010ALE601             1 gpt    disk          10000831348736                          
└─sdg1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS ea3b911a-d478-4341-b4cd-4549385fcdde
sdh    ST10000NM0046               1 gpt    disk          10009952870400                          
└─sdh1                             1 gpt    part     2048 10009950814208 Solaris /usr & Apple ZFS 96d445ec-e8b2-42ce-8f1a-63ca963eead9
sdi    ST10000DM005-3AW101         1 gpt    disk          10000831348736                          
└─sdi1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS fd7f4ef7-237a-48c0-8a72-bebaf1cd3fb5
sdj    ST10000DM005-3AW101         1 gpt    disk          10000831348736                          
└─sdj1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS 83e62634-a0a5-4c2f-abb8-10b763833042
sdk    ST10000DM005-3AW101         1 gpt    disk          10000831348736                          
└─sdk1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS 52c0cc19-1576-4186-b41f-8ee005e7ebaf
sdl    ST10000DM005-3AW101         1 gpt    disk          10000831348736                          
└─sdl1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS ee9542ae-6266-4615-bcaa-6a0f06dafb87
sdm    ST10000DM005-3AW101         1 gpt    disk          10000831348736                          
└─sdm1                             1 gpt    part     2048 10000829251584 Solaris /usr & Apple ZFS af112e1e-211c-4061-a0bf-4e7f943903ca
sdn    WDC WD101EDBZ-11B1DA0       1 gpt    disk          10000831348736                          
└─sdn1                             1 gpt    part     2048 10000829251584 Solaris /usr & Apple ZFS c2c7b568-7eff-4f85-998b-eb4e1ac51897
sdo    WDC WDS500G1R0A-68A4W0      0 gpt    disk            500107862016                          
└─sdo1                             0 gpt    part     4096   500105747968 Solaris /usr & Apple ZFS 325bad3f-5323-47be-9736-e06e1f30603b
sdp    ST10000NM0046               1 gpt    disk          10000831348736                          
└─sdp1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS 24e3a0a0-411b-4174-85e4-83eb48744f9f
sr0    HL-DT-ST DVDRAM GH24NSD5    1        rom               1073741312                          
sdq    ST10000NM0046               1 gpt    disk          10009952870400                          
└─sdq1                             1 gpt    part     2048 10009950814208 Solaris /usr & Apple ZFS 0b764a40-fd5d-4631-922d-625178717347
sdr    ST10000NM0046               1 gpt    disk          10000831348736                          
└─sdr1                             1 gpt    part     2048 10000829251584 Solaris /usr & Apple ZFS e0df314c-cc1e-460e-9591-842264306d5b
sds    ST10000NM0046               1 gpt    disk          10000831348736                          
└─sds1                             1 gpt    part     4096 10000829234688 Solaris /usr & Apple ZFS aa40633b-42bc-41f7-b8c2-555530227841
sdt    ST3000DM001-1ER166          1 gpt    disk           3000592982016                          
├─sdt1                             1 gpt    part     4096        1048576 BIOS boot                c28d4013-73fe-49a9-a1a5-625da7d77eeb
├─sdt2                             1 gpt    part     6144      536870912 EFI System               020e5b7a-a0a9-4652-97ab-e7b3cd691ec8
├─sdt3                             1 gpt    part 34609152  2982873079296 Solaris /usr & Apple ZFS 1bd10b2f-e316-4326-b71b-106c5927adc2
└─sdt4                             1 gpt    part  1054720    17179869184 Linux swap               dfa1aa72-54ec-46e8-a749-caeaaf1d2621
elNas[~]$ 

elNas[~]$ sudo zpool list                                                       
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
***elZ2     146T   118T  27.9T        -         -     0%    80%  1.00x  DEGRADED  /mnt
boot-pool  2.70T  21.6G  2.68T        -         -     0%     0%  1.00x    ONLINE  -
elNas[~]$ 

the two replacement disks (the ones resilvering in) are:

96d445ec-e8b2-42ce-8f1a-63ca963eead9
and 
b4dbc538-a334-417c-9b6a-7265693bc710

I didn't expect a 16-wide RAID-Z2. I think all the drive models are CMR, so that rules out SMR-related troubles.

Your zpool status shows a resilver in progress since Wed Feb 19, almost 80% done. I guess check it later and see if it is progressing.
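
If you want to keep an eye on the progress from a shell between GUI checks, something like this should do (a sketch; <poolname> is a placeholder for your pool name):

# re-run this from time to time; the scanned/issued/resilvered figures should keep climbing
sudo zpool status <poolname> | grep -E "scanned|resilvered"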

I don't know about your LSI card setup, so maybe someone else will comment or check in on that.

For info about the setup: I just added a signature now :wink:

there are server disks and some consumer disks, depending on what I can afford and find…
BTW I use the same disks (and even bigger ones) in my workstation, and in ten years I have only replaced one that died… they are NOT in RAID mode there.

so TrueNAS should work fine with them.

but as said, it has been resilvering for nearly a month… btw, all known errors (there were about 1700!) are now all corrected :slight_smile:

BTW 2: I just realized that when I first wanted to get the faulty disks replaced, I popped them out, put the new ones in the same slots, and booted the NAS… should I have left the old ones where they were and connected the new ones elsewhere on the card instead?

Yeepee! At last, one of the bad disks seems to have finished being replaced/resilvered!

here you can see that the 'replacing-5' task is gone and the old bad disk has been powered down.

zpool status

elNas[~]$ sudo zpool status -x
  pool: elZ2
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Feb 21 01:25:43 2025
        28.3T / 118T scanned at 1.66G/s, 27.2T / 118T issued at 1.60G/s
        1.60T resilvered, 23.11% done, 16:04:30 to go
expand: expanded raidz2-0 copied 111T in 4 days 10:00:04, on Mon Jan 6 05:44:54 2025
config:

    NAME                                        STATE     READ WRITE CKSUM
   elZ2                                     DEGRADED     0     0     0
      raidz2-0                                  DEGRADED     0     0     0
        aab036a6-7e2b-4632-b16d-cd7866845e5e    ONLINE       0     0     0
        7a8f8da1-836b-4a4c-9eaa-2aa29da2beb7    ONLINE       0     0     0
        a10d9ac9-ef2c-4ede-9e6b-eba01e901361    ONLINE       0     0     0
        9f2dce88-cde7-4923-b0bb-bf4de890ea52    ONLINE       0     0     0
        b74d4b85-0436-4814-8515-b405d7e40e24    ONLINE       0     0     0
        96d445ec-e8b2-42ce-8f1a-63ca963eead9    ONLINE       0     0     0  (resilvering)
        replacing-6                             DEGRADED     0     0    15
          aa40633b-42bc-41f7-b8c2-555530227841  DEGRADED     0     0     0  too many errors
          b4dbc538-a334-417c-9b6a-7265693bc710  ONLINE       0     0     0  (resilvering)
        fd7f4ef7-237a-48c0-8a72-bebaf1cd3fb5    ONLINE       0     0     0
        83e62634-a0a5-4c2f-abb8-10b763833042    ONLINE       0     0     0
        ea3b911a-d478-4341-b4cd-4549385fcdde    ONLINE       0     0     0
        52c0cc19-1576-4186-b41f-8ee005e7ebaf    ONLINE       0     0     0
        ee9542ae-6266-4615-bcaa-6a0f06dafb87    ONLINE       0     0     0
        0b764a40-fd5d-4631-922d-625178717347    ONLINE       0     0     0
        e0df314c-cc1e-460e-9591-842264306d5b    ONLINE       0     0     0
        af112e1e-211c-4061-a0bf-4e7f943903ca    ONLINE       0     0     0
        c2c7b568-7eff-4f85-998b-eb4e1ac51897    ONLINE       0     0     0
    cache
      325bad3f-5323-47be-9736-e06e1f30603b      ONLINE       0     0     0

errors: No known data errors

let's see how much time it will take to finish replacing the 2nd bad one ^^)

almost forgot: is there a way to look at a log of all that resilvering stuff somewhere else?
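
(in case it helps, ZFS itself keeps a trace of these operations outside the GUI; a sketch, with <poolname> as a placeholder for the pool name:)

# administrative history of the pool, including when replaces and resilvers were started
sudo zpool history <poolname>
# recent ZFS events (device faults, resilver start/finish), verbose form
sudo zpool events -v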

Replying to myself:

YEEPEE! At last it finished its work! The last bad disk is now popped out and I have a 'brand new' TrueNAS SCALE that works fine!

Tadaaaah !

elNas[~]$ sudo zpool status
  pool: elZ2
 state: ONLINE
  scan: resilvered 16.3G in 00:35:38 with 0 errors on Sun Feb 23 03:13:13 2025
expand: expanded raidz2-0 copied 111T in 4 days 10:00:04, on Mon Jan 6 05:44:54 2025
config:

    NAME                                      STATE     READ WRITE CKSUM
    VoxelZ2                                   ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        aab036a6-7e2b-4632-b16d-cd7866845e5e  ONLINE       0     0     0
        7a8f8da1-836b-4a4c-9eaa-2aa29da2beb7  ONLINE       0     0     0
        a10d9ac9-ef2c-4ede-9e6b-eba01e901361  ONLINE       0     0     0
        9f2dce88-cde7-4923-b0bb-bf4de890ea52  ONLINE       0     0     0
        b74d4b85-0436-4814-8515-b405d7e40e24  ONLINE       0     0     0
        96d445ec-e8b2-42ce-8f1a-63ca963eead9  ONLINE       0     0     0
        b4dbc538-a334-417c-9b6a-7265693bc710  ONLINE       0     0     0
        fd7f4ef7-237a-48c0-8a72-bebaf1cd3fb5  ONLINE       0     0     0
        83e62634-a0a5-4c2f-abb8-10b763833042  ONLINE       0     0     0
        ea3b911a-d478-4341-b4cd-4549385fcdde  ONLINE       0     0     0
        52c0cc19-1576-4186-b41f-8ee005e7ebaf  ONLINE       0     0     0
        ee9542ae-6266-4615-bcaa-6a0f06dafb87  ONLINE       0     0     0
        0b764a40-fd5d-4631-922d-625178717347  ONLINE       0     0     0
        e0df314c-cc1e-460e-9591-842264306d5b  ONLINE       0     0     0
        af112e1e-211c-4061-a0bf-4e7f943903ca  ONLINE       0     0     0
        c2c7b568-7eff-4f85-998b-eb4e1ac51897  ONLINE       0     0     0
    cache
      325bad3f-5323-47be-9736-e06e1f30603b    ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:05:36 with 0 errors on Mon Feb 17 03:50:38 2025
config:

    NAME        STATE     READ WRITE CKSUM
    boot-pool   ONLINE       0     0     0
      sdt3      ONLINE       0     0     0

errors: No known data errors

it took very long but I'm more than happy today :slight_smile: :confetti_ball: :tada:

TY for the hard work and that excellent TrueNAS software :slight_smile:
Jef
