This maintenance release includes refinements and fixes for issues discovered or outstanding after the 24.10.1 release.
Do not retrieve hidden zpool properties in py-libzfs by default (NAS-132988). These properties include name, tname, maxblocksize, maxdnodesize, dedupditto, and dedupcached. Users who need these properties can find the zpool command to retrieve them in the linked ticket (a rough example follows this list).
New cloud backup option: Use Absolute Paths (NAS-132920).
Fix loading of the nvidia_drm kernel module so that the /dev/dri directory is populated and NVIDIA GPUs are available to apps like Plex (NAS-133250).
Fix netbiosname validation logic when Active Directory is enabled (NAS-133167).
Disallow specifying SSH credentials when rsync mode is MODULE (NAS-132874 and NAS-132928).
Simplify CPU widget logic to fix reporting issues for CPUs that have performance and efficiency cores (NAS-133128).
Properly support OCI image manifests for registries other than Docker (NAS-133046).
Remove explicit calls to the syslog.syslog module (NAS-132657).
Fix an ACL editor group/user search bug (NAS-131841).
Prevent infinite recursion on corrupted databases when deleting network interfaces (NAS-132567).
Clean up FTP banner to prevent Reolink camera failures (NAS-132701).
Refresh cloud sync credentials even if cloud sync task fails (NAS-132851).
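As a rough illustration of pulling those hidden values from the CLI instead of py-libzfs: the sketch below is not the command from the ticket, the pool name "tank" is a placeholder, and whether zpool get accepts each of these property names depends on the OpenZFS version, so treat NAS-132988 as the authoritative source.

```python
import subprocess

# Hidden pool properties named in the changelog entry above.
HIDDEN_PROPS = ["name", "tname", "maxblocksize", "maxdnodesize", "dedupditto", "dedupcached"]

def get_hidden_pool_props(pool: str = "tank") -> dict:
    """Query the listed properties with `zpool get`; raises if zpool rejects a name."""
    out = subprocess.run(
        ["zpool", "get", "-H", "-o", "property,value", ",".join(HIDDEN_PROPS), pool],
        capture_output=True, text=True, check=True,
    ).stdout
    # -H prints header-less, tab-separated lines: "<property>\t<value>"
    return dict(line.split("\t", 1) for line in out.splitlines() if line)

if __name__ == "__main__":
    print(get_hidden_pool_props("tank"))
```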
That fixed it? I would have been curious to know what errors you got during the boot loop. Please make sure you aren’t seeing any checksum errors on your boot media as well.
Yes it did. I didn’t see any errors; the loop occurred so fast that sometimes my monitor didn’t have time to display anything. I tried all the usual GRUB interrupt keys and nothing worked.
I’m still getting the following from nvidia-smi: “NVIDIA-SMI has failed because it couldn’t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.” I’m still unable to use the NVIDIA drivers. I tried uninstalling and reinstalling the drivers through the web interface, as well as rebooting multiple times, but still no success.
No, it was not working before either (v24.10.1).
It worked for a brief period immediately after installing/re-installing the boot-pool; however, after a reboot the NVIDIA drivers stopped working, never came back to a working state, and nvidia-smi has been outputting that error ever since. I tried the following NVIDIA GPUs: 1060, 3080, and 3090. They work perfectly fine on Windows, and they worked immediately after installing the boot-pool, but after the reboot they stopped working.
If you start a new thread/post in General with the change and any history on the boot pool, and no one can work out the issue, we can report a bug and have some engineering review.
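Before filing, it may also be worth confirming whether the nvidia_drm module actually loaded and /dev/dri got populated after the reboot, since that is what NAS-133250 addressed. A minimal check along those lines, assuming a standard Linux layout and nothing TrueNAS-specific, could look like:

```python
import os

def nvidia_drm_loaded() -> bool:
    """Return True if the nvidia_drm kernel module shows up in /proc/modules."""
    with open("/proc/modules") as f:
        return any(line.split()[0] == "nvidia_drm" for line in f)

def dri_nodes() -> list:
    """List device nodes under /dev/dri, or an empty list if the directory is absent."""
    return sorted(os.listdir("/dev/dri")) if os.path.isdir("/dev/dri") else []

if __name__ == "__main__":
    print("nvidia_drm loaded:", nvidia_drm_loaded())
    print("/dev/dri nodes:", dri_nodes() or "none")
```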
QA didn’t fail. It was an intentional change and is listed in the first post in this thread.
Simplify CPU widget logic to fix reporting issues for CPUs that have performance and efficiency cores