TrueNAS 25.10 “Goldeye” BETA is Available - Blog Post

Since its initial release in April 2025, TrueNAS 25.04 “Fangtooth” has unified both TrueNAS CORE and SCALE into the new Community Edition, reaching over 130,000 systems and becoming the most popular version of TrueNAS in use. Today, we’re releasing the public beta of the next version, TrueNAS 25.10 “Goldeye”, for the TrueNAS Community to begin testing, evaluating, and providing valuable feedback.

With dozens of new features and hundreds of fixes, TrueNAS “Goldeye” is ready for testers to put it through its paces as it continues to be refined for its October 2025 release. Full details are in the release notes on the TrueNAS Docs site, with some of the many highlights below!

Updated Linux Kernel and NVIDIA Blackwell Support

The Linux Long-Term Support (LTS) Kernel has been updated from 6.12.15 to 6.12.33, improving hardware compatibility and addressing edge-case performance issues while offering a more reliable and stable experience.

TrueNAS 25.10 now uses the NVIDIA Open Source GPU Kernel modules with the 570.172.08 driver, adding support for the latest NVIDIA GPUs including the RTX 50-series and RTX PRO Blackwell cards. With this change, NVIDIA has removed support for several older GTX GPUs. Please consult the list of compatible GPUs on NVIDIA’s GitHub repository and review the TrueNAS Community Forum thread to determine if your card is supported with the new Open Kernel module.
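
After upgrading, a quick way to confirm which GPU and driver version the system sees is the nvidia-smi utility that installs alongside the driver:

# Report the detected GPU model and the loaded driver version.
nvidia-smi --query-gpu=name,driver_version --format=csv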

ZFS 2.3.3 Adds New Tools and Performance Boosts

ZFS File Rewrite is a TrueNAS-developed extension to OpenZFS 2.3.3, allowing files and datasets to be rewritten in place so that existing data picks up subsequent changes to vdev layout, compression algorithm, and deduplication settings. With no interruption to standard file access, this command can be used as a method of rebalancing data after vdev addition or RAIDZ expansion, and has no impact on file modification time, ownership, or permissions. Goldeye will expose this capability from the TrueNAS CLI for advanced users.
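
For shell-inclined testers, a minimal sketch of what invoking it might look like (the recursive flag and dataset path are illustrative; consult the zfs rewrite man page on your build for the exact options):

# Rewrite all files under a dataset mountpoint so their blocks pick up
# the pool's current vdev layout, compression, and dedup settings.
sudo zfs rewrite -r /mnt/tank/media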

Faster Caching is enabled through Adaptive Replacement Cache (ARC) improvements, including greater parallelization of operations and faster eviction of data that is no longer valuable to keep cached in RAM. High-performance systems with multiple cores and fast NVMe devices will benefit most from these improvements.

DirectIO allows file protocols to bypass the ARC when caching does not improve performance for specific datasets. By avoiding extra memory copies on fast pools, and by giving client workloads that will read data only once a way to skip the cache, TrueNAS can further optimize the contents of the ARC, improving memory bandwidth and performance for specific High-Performance Computing (HPC) use-cases.
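
At the dataset level this behavior is controlled through the OpenZFS direct property; a short sketch, with the dataset name as a placeholder:

# "standard" honors an application's O_DIRECT flag, "always" bypasses the
# ARC wherever possible, and "disabled" forces all I/O through the cache.
sudo zfs set direct=always tank/scratch
sudo zfs get direct tank/scratch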

TrueNAS Versioned API Enhances Integration Options

A new, fully-versioned, and much faster JSON-RPC 2.0 over WebSocket implementation has been introduced with TrueNAS 25.10, with documentation available at api.truenas.com. The previous REST API has been deprecated, and will be fully removed in a future TrueNAS release.

This new versioned API will allow for predictable compatibility for software integrations across TrueNAS upgrades, including a Kubernetes Container Storage Interface (CSI), VMware vSphere plugin, Incus iSCSI connectivity, and Proxmox “ZFS-over-iSCSI” plugin, among others.

With the updated API capabilities, the TrueNAS Web UI becomes more responsive, displaying more accurate and up-to-date information, with lower overhead when generating reports or querying multiple elements across processor and pool statistics. Power users can leverage the updated TrueNAS CLI integration with the new API, allowing for simpler access from text-based consoles while still maintaining the same audit controls and TrueNAS tooling.
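
As a taste of the CLI side, the midclt middleware client that ships with TrueNAS can call the same JSON-RPC methods directly from a shell (method names are documented at api.truenas.com):

# Query system information and list pool names from a local shell;
# piping through jq (if available) makes the JSON easier to read.
midclt call system.info
midclt call pool.query | jq '.[].name'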

New TrueNAS Update Advisor and Profiles

Previously, TrueNAS users receiving an update notice in the Web UI had to visit the TrueNAS Software Status page to match their user profile with newly released versions, which sometimes led to confusion.

TrueNAS 25.10 overhauls the update process, with the ability to select your “User Profile” directly in the Web UI. Select the release timing that you’re interested in – General, Early Adopter, or Developer – and you’ll only be alerted for updates once they’re moved to the matching profile on the Software Status page.

A summary of the update’s Release Notes will also be shown in the Web UI itself, highlighting the key changes in a new release, with a link to the full Release Notes for those wanting to dig deeper.

Virtualization is Cleaner

TrueNAS 25.10-BETA includes separate tabs for its two different Virtualization solutions. The experimental lightweight Linux Containers (LXC) are available under the Instances tab, with full KVM-powered Virtual Machines (VM) available under the Virtualization tab.

Both tabs in the UI have been updated to be easier to navigate, and include access to a TrueNAS-managed catalog of easy-to-deploy VM and LXC template images. The Virtualization UI includes all previous functionality, such as PCI passthrough and Secure Boot support with virtual TPM devices, as well as new methods to import VMs from popular disk formats such as VMDK and QCOW2.

Migration of VMs from both the “Virtualization” and “Instances” tabs – including the experimental Instance-powered VMs created in 25.04.0 and 25.04.1 – will be supported automatically. Configurations without PCI or USB passthrough are expected to migrate without issues. Some client operating systems inside the VM may require specific configuration prior to the upgrade, such as pre-loading virtual storage drivers so the guest can complete boot, or network reconfiguration if MAC addresses change on virtual NICs. Users with production VMs are advised to verify compatibility and consider delaying their upgrade to 25.10-BETA until the process has been more robustly documented.
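
For Linux guests, pre-loading storage drivers usually means making sure the virtio modules are baked into the initramfs; a sketch for a Debian-based guest (the module names are the common virtio set, so verify against your guest’s actual storage bus):

# Run inside the guest before upgrading the host, so the VM can still
# find its root disk if the virtual storage controller changes.
printf 'virtio_blk\nvirtio_scsi\nvirtio_pci\n' | sudo tee -a /etc/initramfs-tools/modules
sudo update-initramfs -u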

The full release of TrueNAS 25.10 will include our “Petabyte Hypervisor” making Virtualization available on TrueNAS Enterprise appliances with High Availability (HA), offering a platform for workloads that benefit from being close to high-performance storage. The same TrueNAS Enterprise appliance can continue to provide HA storage for traditional hypervisor environments powered by VMware, Proxmox, Hyper-V, XCP-ng, or other emerging technologies.

NVMe over Fabric takes Performance to the Next Level

Just as NVMe has revolutionized locally-attached solid state drives, remote storage is ready to move beyond the limitations of the SCSI protocol with NVMe over Fabric (NVMe-oF) options, extending the benefits of NVMe beyond the local PCIe bus.

TrueNAS 25.10 retains both of its existing iSCSI and Fibre Channel block storage protocols, and adds two more NVMe-oF options:

NVMe/TCP leverages a TCP transport protocol similar to iSCSI, but uses NVMe commands and queueing to remove the overhead of SCSI. NVMe/TCP is broadly supported by most client operating systems, and is available in both TrueNAS Enterprise and Community Edition.

NVMe/RDMA enables the same NVMe commands to be transmitted through the RDMA over Converged Ethernet (RoCE) protocol, resulting in performance even greater than NVMe/TCP. Due to its direct memory access, network switch requirements, and the specific NICs required, NVMe/RDMA is only supported on TrueNAS Enterprise in combination with TrueNAS F-Series hardware.
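
On a Linux client, attaching an NVMe/TCP target is a short exercise with the nvme-cli package; a sketch where the address, port, and NQN are placeholders for whatever your TrueNAS system exports:

# Discover the NVMe subsystems exported over TCP, connect to one by its
# NQN, then confirm the new namespace appears as a local NVMe device.
sudo nvme discover -t tcp -a 192.168.1.50 -s 4420
sudo nvme connect -t tcp -a 192.168.1.50 -s 4420 -n nqn.example:target0
sudo nvme list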

More Web UI Improvements

The Goldeye Web UI has several improvements designed to make the user’s experience better, including:

  • More logical page layouts for Storage, Networking, and Alerts
  • Improved iSCSI Wizard workflow
  • Enhanced YAML editor for custom Apps
  • More responsive statistics monitoring (CPU, pool usage)

Our legacy “iXsystems” name and logo have been officially retired from the UI as well – we’re unified as TrueNAS across hardware, software, and support.

Enabling Pool Migrations for Apps

Another frequent request was the ability to migrate Apps between pools without manual reconfiguration. We’re pleased to announce that migration of Apps between pools on the same system is now available in TrueNAS 25.10, for users who’ve outgrown the capacity or performance of their existing configuration.

SMART gets SMARTer

SMART, short for “Self-Monitoring, Analysis, and Reporting Technology”, is the monitoring system included with storage devices to check their overall health, record statistics, and predict potential failures before they occur.

TrueNAS Goldeye automates the scheduling of SMART tests on supported devices, and reduces false positives in order to prevent alert fatigue and unnecessary e-waste from premature drive replacements.
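
If you want to spot-check a drive yourself, the underlying smartmontools commands work the same as before (the device name is illustrative):

# Show the drive's overall SMART health verdict, then start a manual
# short self-test; results appear later in smartctl -a output.
sudo smartctl -H /dev/sda
sudo smartctl -t short /dev/sda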

TrueNAS Enterprise Appliances get Faster

Our line of TrueNAS Enterprise appliances is already showing benefits from the early Goldeye software as well:

  • Higher capacities for both hybrid (30PB) and all-flash systems (20PB)
  • Improved STIG security for defense-grade organizations
  • Support for 400Gbps Ethernet interfaces

Additional improvements will be announced after further testing and validation. If your organization is interested in a TrueNAS appliance with existing or upcoming TrueNAS capabilities, please reach out to us, and we’ll be delighted to help you.

TrueNAS WebInstall and Dashboard

TrueNAS 25.10-BETA will also be the platform for testing the new WebInstall and Dashboard capabilities mentioned in the previous blog; however, this system is still in closed ALPHA testing. Community trials are expected to begin in late September 2025, and we’ll be excited to hear your feedback.

When Should You Migrate?

If you’re deploying a new TrueNAS system today, we recommend TrueNAS 25.04.2.1 for its maturity, Docker integration, and broad testing results. Existing users should always review the Software Status page for recommendations based on their profile.

For enthusiastic testers running non-production workloads, TrueNAS 25.10 “Goldeye” is now in its BETA testing phase. Users with production workloads are advised to wait for the official RELEASE version in October. TrueNAS 25.10 will be recommended for our more conservative and Enterprise users with mission-critical needs in the first months of 2026.


Edit: I see 25.10-BETA.1, 2025 Aug 28. I am unsure how this system was moved from CORE to SCALE; it may have been an ISO install or an upgrade, and the partitioning may suggest the latter. I want to say I moved to Cobia back in the day, but it’s been a hot minute.

System: see sig.

The system remains running after this; the only impact appears to be the failed install. This was an upgrade attempt from 25.04.2.1.

The i386-pc GRUB target is unexpected; this is a 64-bit Intel Xeon and I’d expect x86_64.
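
A quick sanity check that the box really is UEFI-booted (generic Linux, nothing TrueNAS-specific):

# If this directory exists, the system booted via UEFI and grub-install
# should be targeting x86_64-efi rather than i386-pc.
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot"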

Debug uploaded to Jira.

[EFAULT] Error: Command ['chroot', '/tmp/tmpwnj69jxw', 'grub-install', '--target=i386-pc', '/dev/nvme0n1'] failed with exit code 1: Installing for i386-pc platform. grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible. grub-install: error: filesystem `zfs' doesn't support blocklists.
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 515, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 560, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 174, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/update.py", line 287, in update
    await self.middleware.call('update.install', job, os.path.join(location, 'update.sqsh'), options)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1005, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 731, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 624, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/update_/install_linux.py", line 50, in install
    self.middleware.call_sync("update.install_scale", mounted, progress_callback, options)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1041, in call_sync
    return methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/update_/install.py", line 61, in install_scale
    self._execute_truenas_install(mounted, command, progress_callback)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/update_/install.py", line 92, in _execute_truenas_install
    raise CallError(result or f"Abnormal installer process termination with code {p.returncode}")
middlewared.service_exception.CallError: [EFAULT] Error: Command ['chroot', '/tmp/tmpwnj69jxw', 'grub-install', '--target=i386-pc', '/dev/nvme0n1'] failed with exit code 1: Installing for i386-pc platform.
grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible.
grub-install: error: filesystem `zfs' doesn't support blocklists.

 

Edit: I then tried going to 25.04.2.3 to make sure the system was basically sound. It is: the initramfs and GRUB update steps complete without issue there, and the system boots. The failure appears to be specific to Goldeye.

Hello @HoneyBadger. The announcement doesn’t go into detail on the major changes to SMART. I suggest providing more details about the changes and attaching a link to a detailed introduction.

https://ixsystems.atlassian.net/browse/NAS-135020

RDMA support is limited to RoCE and doesn’t include iWARP, which doesn’t have specific switch requirements?

Trying to test using Proxmox. When I add an additional disk and try to make a stripe out of it, I get an error:

Disks have duplicate serial numbers: None (sda, sdb).

I have never had this problem in my EE or Fangtooth VMs in Proxmox. Something unique to Goldeye here?

You have to add serial numbers to your VM config in Proxmox. I ran into the same issue while running nightlies. The problem seems to have popped up again in 25.10.
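
For reference, a sketch of how that looks with Proxmox’s qm tool (the VM ID, storage name, and serial string are placeholders; existing disk lines in /etc/pve/qemu-server/<vmid>.conf can be edited the same way by appending ",serial=..."):

# Attach a disk with an explicit serial so the TrueNAS guest sees
# unique serial numbers on each virtual drive.
qm set 100 --scsi1 local-zfs:vm-100-disk-1,serial=TNAS0001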



Question re: ZFS re-write. Will this fix the storage reporting in the GUI in relation to the storage pool total capacity and space used?

No.

It will improve the storage efficiency of data that predates the expansion.

It won’t fix the faulty space reporting in ZFS after expansion.


:warning: You should also know there’s a major caveat with zfs rewrite.

Any existing snapshots that refer to the files that you target with rewrite will roughly double the space that these files consume. This is because the snapshots are referencing the old blocks before they were “rewritten” as new blocks.

If you plan to use zfs rewrite, you need to accept that you’ll have to start over with your snapshots from the current date.

This is the same limitation with “rebalancing” scripts.
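
You can watch this happen in the space accounting; the dataset names here are just examples:

# USEDSNAP grows roughly in step with the amount of data rewritten,
# since snapshots still pin the old copies of the blocks.
zfs list -o space tank/media
zfs list -r -t snapshot -o name,used tank/media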

EDIT: If you think mirrors are superior to RAIDZ in every possible way, please show your support with the “point up” emoji. :point_up:


I’ll accept mirrors are superior when they can give me an 83% effective capacity rate.

As it is, it’s cheaper to use the saved money to buy even more HDDs to hold data as part of the typical backup > destroy > recreate process, and then use the leftover HDDs as a working mirror before turning them into more RAIDZ2 vdevs later on.


If it’s unique to Goldeye, please report it as a bug. We very much appreciate help troubleshooting these issues so we can fix them.

We found a similar issue and think it’s the same: NAS-137350

25.10-BETA.1 will not upgrade if there are 3 or more partitions and one of them is not a BIOS boot partition.

Does that seem to be the case for you?

@Captain_Morgan Not exactly. My report was closed as a duplicate, by the way; this may be an adjacent but separate issue, and ought to be re-opened. I’ll leave that to the TrueNAS dev team’s judgment.

Edit: Ah! Aha! Yes, yes it does get confused as to where my boot drive is. middlewared.service_exception.CallError: [EFAULT] Error: Command ['chroot', '/tmp/tmpwnj69jxw', 'grub-install', '--target=i386-pc', '/dev/nvme0n1'] failed with exit code 1

/dev/nvme0n1 is not what I’m booting from. I am booting from /dev/nvme1n1.

Here are my partitions. There are two: EFI and the ZFS root/boot partition:

thorsten@truenas:~$ sudo fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 119.24 GiB, 128035676160 bytes, 250069680 sectors
Disk model: Patriot Scorch M2                       
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D05004D2-207F-11ED-9851-AC1F6B7050D8

Device          Start       End   Sectors  Size Type
/dev/nvme1n1p1     40    532519    532480  260M EFI System
/dev/nvme1n1p2 532520 250060839 249528320  119G FreeBSD ZFS

There is also an nvme0n1, which holds a vdev on a pool. It appears the installer gets confused about where the boot drive is.

thorsten@truenas:~$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: INTEL SSDPEDME020T4D HHHL NVMe 2000GB   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 65FEB4EB-5169-11ED-ACBC-AC1F6B7050D8

Device           Start        End    Sectors  Size Type
/dev/nvme0n1p1     128    4194431    4194304    2G FreeBSD swap
/dev/nvme0n1p2 4194432 3907029127 3902834696  1.8T FreeBSD ZFS

Good to hear we correctly closed your case as a duplicate. I’ll monitor for the solution.

Well, 137350 is private. Fingers crossed that “3 partitions” and “2 partitions but wrong block device” leading to confusion have the same root cause. We shall see!

Research shows that Proxmox deliberately doesn’t pass drive serial numbers, to prevent issues with HA setups. There are plenty of examples of people running into this going back several years. I’m not sure if the use of drive serial/ID has changed in TrueNAS. This seems like a Proxmox issue, unless TrueNAS wants to assign arbitrary serial numbers when none are provided.

Not that I know of, but if there are users that see problems when upgrading, we’d like to know.

Thanks for the diagnosis…

So far I have not had any issues with 25.10 beta 1, but for an hour after upgrading my hard drives were going nuts – what was it doing?

iWARP uses TCP/IP and is not supported.

NVMe/TCP is supported and also doesn’t need specific switches.