TrueNAS 25.04-BETA.1 is Now Available!

iXsystems is pleased to release TrueNAS 25.04-BETA.1!

:warning: Early Release Software

Early releases of a major version are intended for testing and feedback purposes only. Do not use early-release software for critical tasks.

This first public release of TrueNAS 25.04 (Fangtooth) includes software component updates and new features that are now in the polishing phase.

Notable changes

  • The TrueNAS REST API is deprecated in TrueNAS 25.04 and replaced with a versioned JSON-RPC 2.0 over WebSocket API (API Reference). Full removal of the REST API is planned for a future release.
  • Improved API key mechanism with support for user-linked API keys (NAS-131396).
  • UI login experience improvements (NAS-130810).
  • NFS over RDMA support - Enterprise Feature (NAS-131784).
  • iSCSI Extensions for RDMA (iSER) support - Enterprise Feature (NAS-106190).
  • ZFS Fast deduplication support (NAS-127088).
  • iSCSI XCOPY support through ZVOL block cloning (NAS-130017).
  • Incus Container & VM Support - Experimental Community Feature (NAS-130251).
  • Hide SED related options in the UI for non-Enterprise users (NAS-133442).
  • Bump NVIDIA driver version (NAS-133575).
  • Remove integrated Netdata web portal from the TrueNAS UI and middleware (NAS-133629). Default Netdata integration is removed due to STIG security requirements. Users who want to continue using Netdata monitoring can install Netdata from the TrueNAS Apps catalog.
  • Bugfix: “Cache and Spare disks are not recognized post upgrade from 13.0 U6.2 to 24.04.2” (NAS-130825).
  • Bugfix: “Unable to start a VM due to insufficient memory” (NAS-128544).

See the Release Notes for more details.

Changelog: https://www.truenas.com/docs/scale/25.04/gettingstarted/scalereleasenotes/#2504-beta1
Download: https://www.truenas.com/download-truenas-scale
Documentation: https://www.truenas.com/docs/scale/25.04

If you find a bug, please create a ticket at https://ixsystems.atlassian.net/jira/software/c/projects/NAS/issues

Thanks for testing this early release of TrueNAS Fangtooth! As always, we appreciate your feedback!


I always thought REST was the latest and most modern. What reason is there to develop an alternative to REST?

I didn’t realize that REST is supposedly no longer up to date. What is the big difference between REST and “JSON-RPC 2.0 over WebSocket API”? Thanks for any explanations. I’m a bit surprised.

Explained in the podcast: https://youtu.be/nTU6Xechrk0?t=791
Basically, a WebSocket-based API is better when you have a lot of requests.

REST architecture is nice, of course, but everything has tradeoffs.

↓↓↓ More details down below from @kris; looks like I was a second faster :innocent:


We went over it on the T3 Podcast last week, but the TL;DR is that WebSockets are the better way to handle live, async, and event-driven interactive data exchanges between the browser and the backend system. REST forces you to fall back to polling, authenticating with each request, etc. Expensive :slight_smile:
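To make that concrete, here is a minimal sketch of a JSON-RPC 2.0 exchange over a persistent WebSocket from Python, using the third-party websockets package. The /api/current endpoint path, the method names, and the API key below are assumptions on my part; check the API Reference for the exact URL and calls on your release.

import asyncio
import json

import websockets


async def main():
    # Assumed versioned endpoint; verify against the API Reference.
    async with websockets.connect("ws://truenas.local/api/current") as ws:
        # Authenticate once for the lifetime of the connection,
        # unlike REST, where credentials accompany every request.
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "auth.login_with_api_key",
            "params": ["<your-api-key>"],  # placeholder
        }))
        print(json.loads(await ws.recv()))

        # Follow-up calls reuse the same socket: no new handshake
        # and no re-authentication per request.
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": 2,
            "method": "system.info",
            "params": [],
        }))
        print(json.loads(await ws.recv()))


asyncio.run(main())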

(Ha, you beat me to it @Foxtrot314 )


Perfect, a beta build!! I was getting tired of things working perfectly, updating now!
It’s no fun having a lab where nothing is breaking…

UPDATE:
Nice and smooth upgrade!


Looks like my CPU is running nice and cool too


We have midclt, which can be installed on your PC to send API calls. Here is the repository: GitHub - truenas/api_client. The readme explains how to use it from a PC. The library can also be used to automate WebSocket API calls with Python.
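For example, here is a minimal sketch (the connection URI, method names, and API key are assumptions based on the readme; adjust them for your system):

# Install the client from the repository first, e.g.
#   pip install git+https://github.com/truenas/api_client.git
from truenas_api_client import Client

# Placeholder host and API key; when run on the NAS itself,
# Client() with no URI talks to the local middleware directly.
with Client("ws://truenas.local/api/current") as c:
    c.call("auth.login_with_api_key", "<your-api-key>")
    print(c.call("system.info")["version"])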

Got an SMB error about a user. Let me know what details I can get you. TrueNAS - Issues - iXsystems TrueNAS Jira

Also immediately ran into a validation error after creating an instance, despite it having been created and started successfully. Raising a bug report now :slight_smile:

EDIT: Any idea why this was closed as a duplicate? Never mind, figured it out.

No DirectIO yet, eh. I can wait. I can wait.

Yea, not something we want to turn on yet. Upstream still sorting some issues out. Likely in the fall release though.

  1. Tried to recreate my VMs now, but the old SPICE ports still seem to be occupied by the old VM setup. Can they be deleted somewhere? Via CLI?

  2. Also, is it no longer possible to set the MAC address for a network adapter on a VM? This was possible with the old system at least. Maybe via CLI?
    (incus config device set WindowsServer eth0 hwaddr 6e:b1:f1:ec:51:12)

  3. Also, when adding multiple NICs, you can’t see which VLAN they are on.

  4. Lastly, I have all ISOs for VMs stored on my TrueNAS box (in an ISOs share). Before, you could select ISOs from the system itself, but now I always have to copy them to my machine, just to upload them through the web UI.

Just selecting an existing ZVOL for Windows (Server) VMs also doesn’t work unfortunately. They’re no longer booting. This VM was set up with the “VirtIO” disk type before (not AHCI).

Since the disk type can no longer be changed (or is it possible via CLI?), what is the new disk type used by Incus/KVM? VirtIO, AHCI, …?
(I’ll dig into the underlying TN/Incus config files later. EDIT: Incus uses virtio-scsi.)

Reference screenshot for Windows Server VM not booting (disk type VirtIO before):


I’ll figure out how to fix this VM, but I think the migration steps should be documented. Not everyone wants to (or can) set up VMs from scratch again (especially Windows VMs).

EDIT 1: I was able to boot the VM after incus config device set WindowsServer ix_virt_zvol_root io.bus=nvme. I’ll now try to ensure Windows loads the virtio-scsi drivers and then unset io.bus back to the default.

EDIT 2: Since a 10 GiB root disk that’s still virtio-scsi is attached, you can just boot Windows with io.bus=nvme once. Because that other virtual disk is also attached, Windows loads the virtio-scsi drivers during that boot, so you can then incus config device unset WindowsServer ix_virt_zvol_root io.bus again and the VM will keep working (with virtio-scsi).

So, after going through the TrueNAS instance creation wizard and selecting “VM” plus a ZVOL holding a Windows VM set up on 24.10, Incus has both the configured ZVOL attached as ix_virt_zvol_root and a 10 GiB root disk (path: /).

I’ve not used Incus/LXC before, so it’s not obvious to me why the 10 GiB disk is also needed for VMs.
It doesn’t show in the TrueNAS UI, but it does show in the incus CLI and inside the (Windows) VM. Is it even used for VMs, or only for containers?

I think, to be safe, I would install all the virtio drivers: virtio-win-pkg-scripts/README.md at master · virtio-win/virtio-win-pkg-scripts · GitHub

You say your original VM had the VirtIO disk type setting? So you already had to install virtio drivers for it to work? I’m just surprised it was basically virtio before but still didn’t work.


Good to note that for new Windows VM installs you can use distrobuilder to repack the Windows ISO with virtio drivers automagically (distrobuilder repack-windows win11-iot-enterprise.iso win11-iot-enterprise-repacked.iso).
This is my first time interacting with LXC/Incus, so it’s nice to see they’ve built something like this.


Yeah, everything VirtIO-related was already installed on that VM, but I’ll still reinstall it anyway.
Windows just doesn’t load “unnecessary” drivers during boot, so I guess the old virtio disk type doesn’t match the Incus virtio-scsi type exactly (at least they apparently use different drivers on the Windows side).

That’s why io.bus had to be changed to nvme once (those drivers are always loaded during boot). After it booted with nvme, Windows detected the second 10 GiB disk and loaded the virtio-scsi drivers. That automatically makes Windows load them during the next boot as well, so I was able to unset io.bus again.

I know about the nvme bus because it was discussed on the Incus forum that it’s possible to use the nvme bus to install Windows 11 without needing virtio drivers, because the Windows 11 installer includes nvme drivers. This wasn’t possible with Windows 10 and earlier.

But I didn’t think it could help in this case. Great that you thought of that.


Having some issues with the new “Instances” (the new VM section). After creating some virtual machines, I now suddenly get this “Validation error” when navigating to the Instances tab, and all my VMs disappeared (they are still running, though):

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/ws_handler/rpc.py", line 310, in process_method_call
    result = await method.call(app, params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 40, in call
    result = await self.middleware.call_with_audit(self.name, self.serviceobj, methodobj, params, app)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 877, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 686, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 174, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/virt/instance.py", line 133, in query
    'netmask': int(address['netmask']),
               ^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: ''

This happened after I removed a NIC from a machine (macvlan) and added a new one (bridge).
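
Judging by the traceback, int() is being handed an empty netmask string for the reconfigured interface. Presumably the fix needs a guard along these lines (a hypothetical sketch, not the actual middleware patch):

# Hypothetical guard: return None instead of crashing when an
# interface reports an empty netmask (e.g. right after swapping
# a macvlan NIC for a bridge).
def parse_netmask(address: dict) -> int | None:
    netmask = address.get('netmask', '')
    return int(netmask) if netmask else None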

Building a quick script to backport the fix slated for RC1. Will update in 10 min or so once I’ve got it figured out.

Oh, so this is a known issue?

Thank you!