Hello folks ^^)
there is something that doesn't seem to work on my SCALE NAS since the update to 24.10.1 (and still with 24.10.2):
the CPU temp per core widget stays desperately empty while the CPU usage one works fine…
and in the reports I can't see any HDD temps reported either…
and, lastly, none of the HDDs answer SMART anymore, so I'm constantly bugged by warnings… trying to run the tests manually fails too.
Besides the two updates (I installed the latest to see if it would correct all these errors) and two HDDs that are still resilvering/replacing (looooooooooooooooooooooong) (see my other post on that here), no changes have been made to the hardware (controller or otherwise)…
Why doesn't any of this work in the last two SCALE updates???
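For reference, this is the kind of manual test I mean, run from the SCALE shell (/dev/sda is just an example, adjust to your drive names):

sudo smartctl -i /dev/sda        # identify the drive and confirm SMART is available and enabled
sudo smartctl -H /dev/sda        # overall SMART health verdict
sudo smartctl -t short /dev/sda  # start a short self-test
sudo smartctl -a /dev/sda        # full report, including the self-test log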
Read the Release Notes on the 'CPU' issue. Is it the intentional change mentioned there? Adding screenshots might help, as we can't see what you're seeing. If it is a bug, report it using the GUI: the smiley icon in the upper right, under Feedback / Report A Bug.
Adding the link:
Simplify CPU widget logic to fix reporting issues for CPUs that have performance and efficiency cores (NAS-133128).
I don't have the same problem as in the link you provided.
All of this worked fine with SCALE versions before Electric Eel…
No modifications to the NAS hardware in the meantime; I only upgraded SCALE.
I would try a bug report. You could also try booting into your BIOS and seeing whether you get temperature data reported there. You could also run a hardware test, if your BIOS has that feature.
You could also make a backup of the current configuration file, do a fresh install of the current SCALE, and reload the configuration.
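The supported way to grab the config is System Settings > General > Manage Configuration > Download File in the GUI. If you prefer the shell, as far as I know the config is a SQLite database at /data/freenas-v1.db, plus the secret seed in /data/pwenc_secret; a rough sketch (the /mnt/tank/backups target is just an example path):

sudo cp /data/freenas-v1.db /mnt/tank/backups/truenas-config-$(date +%Y%m%d).db
sudo cp /data/pwenc_secret /mnt/tank/backups/   # secret seed; without it, stored passwords can't be restored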
Hello
Well, at the moment it is still replacing/resilvering two faulty disks, and since it has been doing that for nearly a month now I am a little reluctant to reboot it yet another time…
The motherboard BIOS is set to SMART-test the disks, and hot swapping is also enabled for all the mobo SATA ports.
The last time I looked at the BIOS (last week, after updating to v24.10.2) all temps were correctly reported there, but not a single one has been shown by SCALE since the v24.10.1 update, as shown in the screenshots.
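In case it helps, this is how I checked from the shell (the device name is just an example; sensors needs lm-sensors, which I believe ships with SCALE):

sudo sensors                               # CPU / motherboard temperature sensors
sudo smartctl -A /dev/sda | grep -i temp   # drive temperature from the SMART attributes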
BTW, how many years will it take to replace two faulty 9.1 TiB disks ('10 TB' for the marketing thieves) with two new ones??? Nearly a month and still running seems a little tooooo long to me… adding a disk and expanding the usable space took less than 5 days, way faster than replacing these faulty disks…
Each time it says 'resilver finished' it starts over and resilvers somewhere else.
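For the record, the restarts can be watched from the shell (elZ2 is my pool):

sudo zpool status elZ2 | grep -A 2 'scan:'   # current resilver progress line
sudo zpool events | grep -i resilver         # resilver start/finish events logged by ZFS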
What I also don't really understand is that the faulty disks are still set up and running, while the replacement ones are regularly stopped and made unavailable… why the heck??? Shouldn't it be the reverse: the bad ones popped out and the new ones taking their places?
I also tried doing the replacement as if the faulty disks were completely dead and wouldn't power on anymore: I put two new ones in their places, expecting the pool to be rebuilt from metadata as would be possible on a RAID array, BUT in that case SCALE refuses to load the zpool and does nothing at all! What's the problem there???
Is ZFS not able to act like a real hardware RAID controller card, or what?
Lots of questions/concerns there, hope that's not too much ^^)
Your pool resilver is a bit odd. Let's look at your hard drive models and the pools.
You may want to change the title to include drive resilvering or something like that; you can ask a mod, if necessary. Trying to get more views.
Can you please open a shell, run the following commands, and post the results for each command in a separate <> box:
lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
sudo zpool status -v
sudo zpool list
This will give us hard facts about your drives and pools which will allow us to give you precise advice (and even commands to run) to help you achieve your goal.
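For context on what you described above: during a replacement ZFS intentionally keeps the old disk attached and reads from it until the resilver completes, then detaches it, so seeing the faulty disks still active is expected. And with a completely dead disk you don't rebuild the pool from metadata like a hardware RAID card; you explicitly tell ZFS which member to replace. Roughly like this (placeholder names, and on TrueNAS you would normally do this from the Storage page instead):

sudo zpool offline elZ2 <failing-disk>                              # optional: take the dying disk out of service first
sudo zpool replace elZ2 <failing-disk> /dev/disk/by-id/<new-disk>   # attach the new disk and start the resilver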
For info about the setup: I've just added it to my signature.
There are server disks and some consumer disks, as I can afford and find them…
BTW, I use the same disks (and even bigger ones) in my workstation, and in ten years I've only had to replace one that died… they are NOT in RAID mode there.
So TrueNAS should work fine with them.
But as said, it has been resilvering for nearly a month… BTW, all known errors (there were about 1700!) are all corrected now.
BTW 2: just realised that when I wanted to get the faulty disks replaced, I first popped them out, put the new ones in the same slots, and booted the NAS… should I have left the old ones in their places and put the new ones elsewhere on the card instead?
Yeepee! At last one of the bad disks seems to have finished being replaced/resilvered!
Here you can see that the 'replacing 5' task is gone and the old bad disk has been powered down.
zpool status
elNas[~]$ sudo zpool status -x
  pool: elZ2
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Feb 21 01:25:43 2025
        28.3T / 118T scanned at 1.66G/s, 27.2T / 118T issued at 1.60G/s
        1.60T resilvered, 23.11% done, 16:04:30 to go
expand: expanded raidz2-0 copied 111T in 4 days 10:00:04, on Mon Jan 6 05:44:54 2025
config:
YEEPEE! At last it finished its work! The last bad disk is now popped out and I have a 'brand new' TrueNAS SCALE that works fine!
Tadaaaah!
elNas[~]$ sudo zpool status
  pool: elZ2
 state: ONLINE
  scan: resilvered 16.3G in 00:35:38 with 0 errors on Sun Feb 23 03:13:13 2025
expand: expanded raidz2-0 copied 111T in 4 days 10:00:04, on Mon Jan 6 05:44:54 2025
config:
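A follow-up worth doing after a replacement that long: a scrub verifies that everything on the new disks reads back clean:

sudo zpool scrub elZ2    # start a scrub of the whole pool
sudo zpool status elZ2   # watch progress and the final error counts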