iXsystems is pleased to release TrueNAS 25.04-BETA.1!
Early Release Software
Early releases of a major version are intended for testing and feedback purposes only. Do not use early-release software for critical tasks.
This first public release of TrueNAS 25.04 (Fangtooth) includes software component updates and new features that are in the polishing phase.
Notable changes
The TrueNAS REST API is deprecated in TrueNAS 25.04 and replaced with a versioned JSON-RPC 2.0 over WebSocket API (API Reference). Full removal of the REST API is planned for a future release.
Improved API key mechanism with support for user-linked API keys (NAS-131396).
Removed the integrated Netdata web portal from the TrueNAS UI and middleware (NAS-133629). The default Netdata integration is removed to meet STIG security requirements. Users who want to continue using Netdata monitoring can install Netdata from the TrueNAS Apps catalog.
Bugfix: “Cache and Spare disks are not recognized post upgrade from 13.0 U6.2 to 24.04.2” (NAS-130825).
Bugfix: “Unable to start a VM due to insufficient memory” (NAS-128544).
I always thought REST was the latest and most modern. What reason is there to develop an alternative to REST?
I didn’t realize that REST is supposedly no longer up to date. What is the big difference between REST and “JSON-RPC 2.0 over WebSocket API”? Thanks for any explanations. I’m a bit surprised.
We went over it on the T3 Podcast last week, but the TL;DR is that WebSockets are the better way to handle live, async, and event-driven data exchanges between the browser and the backend system. REST forces you to fall back to polling, re-authenticating with each request, and so on. That gets expensive.
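For anyone curious what that looks like on the wire, here’s a minimal sketch of a single JSON-RPC 2.0 call over a WebSocket in Python. The ws://truenas.local/api/current endpoint and the core.ping method are my assumptions based on the API Reference, so double-check them against the docs for your release:

```python
# Minimal sketch: one JSON-RPC 2.0 call over a persistent WebSocket.
# The endpoint path and method name are assumptions; see the 25.04
# API Reference for the exact values on your system.
import asyncio
import json

import websockets  # third-party: pip install websockets


async def main():
    async with websockets.connect("ws://truenas.local/api/current") as ws:
        # Unlike REST, this one connection stays open, so many calls (and
        # server-pushed events) can share a single authenticated socket.
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "core.ping",
            "params": [],
        }))
        print(json.loads(await ws.recv()))


asyncio.run(main())
```

The point is the connection lifecycle: you authenticate once, keep the socket open, and the server can push events to you instead of you polling it.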
We have midclt, which can be installed on your PC to send API calls. Here is the repository: GitHub - truenas/api_client. The readme explains how to use it from a PC. The library can also be used to automate WebSocket API calls with Python.
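As a hedged example of what that automation can look like (the Client class is from the api_client readme; the URI, the endpoint path, and the API key are placeholders you’d adapt):

```python
# Sketch using the truenas/api_client library; URI and key are placeholders.
from truenas_api_client import Client

with Client("ws://truenas.local/api/current") as c:
    # Authenticate once over the persistent WebSocket, here with one of the
    # new user-linked API keys (username/password login works too).
    c.call("auth.login_with_api_key", "YOUR-API-KEY")
    print(c.call("system.info"))
```

Every subsequent c.call() reuses the same authenticated connection, which is exactly the advantage over REST described above.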
I also immediately ran into a validation error after creating an instance, despite it having been created and started successfully. Raising a bug report now.
Also, is it no longer possible to set the MAC address for a network adapter on a VM? This was possible with the old system at least. Maybe via CLI?
(incus config device set WindowsServer eth0 hwaddr 6e:b1:f1:ec:51:12)
Also, when adding multiple NICs, you can’t see which VLAN each one belongs to.
Lastly, I have all ISOs for VMs stored on my TrueNAS box (in an ISOs share). Before, you could select ISOs from the system itself, but now I always have to copy them to my machine, just to upload them through the web UI.
Just selecting an existing ZVOL for Windows (Server) VMs also doesn’t work, unfortunately: they’re no longer booting. This VM was set up with the “VirtIO” disk type before (not AHCI).
Since the disk type can no longer be changed (or is that possible via the CLI?), what is the new disk type used by Incus/KVM? VirtIO, AHCI, …?
(I’ll dig into the underlying TN/Incus config files later. EDIT: Incus uses virtio-scsi.)
Reference screenshot for the Windows Server VM not booting (disk type was VirtIO before).
I’ll figure out how to fix this VM, but I think the steps should be documented somewhere. Not everyone wants to (or can) set up VMs from scratch again (especially Windows VMs).
EDIT 1: I was able to boot the VM after incus config device set WindowsServer ix_virt_zvol_root io.bus=nvme. I’ll now try to get Windows to load the virtio-scsi drivers and then unset io.bus back to the default.
EDIT 2: Because a 10 GiB root disk that’s still virtio-scsi is attached, you only need to boot Windows with io.bus=nvme once. With that other virtual disk attached, Windows loads the virtio-scsi drivers during the next boot, so you can then run incus config device unset WindowsServer ix_virt_zvol_root io.bus again and the VM keeps working (with virtio-scsi).
So, after going through the TrueNAS instance creation wizard and selecting “VM” plus a ZVOL holding a Windows VM that was set up on 24.10, Incus ends up with both the configured ZVOL attached as ix_virt_zvol_root and a 10 GiB root disk (path: /).
I’ve not used Incus/LXC before, so it’s not obvious to me why the 10 GiB disk is also needed for VMs.
It doesn’t show up in the TrueNAS UI, but it does show up in the incus CLI and inside the (Windows) VM. Is it even used for VMs, or only for containers?
You say your original VM had the VirtIO disk type setting? So you already had to install the virtio drivers for it to work? I’m just surprised that it was basically virtio before but still didn’t work.
Good to note that for new Windows VM installs you can use distrobuilder to repack the Windows ISO with virtio drivers automagically (distrobuilder repack-windows win11-iot-enterprise.iso win11-iot-enterprise-repacked.iso).
First time interacting with LXC/Incus, so it’s nice to see they’ve built something like this.
Yeah, everything VirtIO-related was already installed on that VM, but I’ll still reinstall it anyway.
Windows just doesn’t load “unnecessary” drivers during boot, so I guess the old VirtIO disk type doesn’t match the Incus virtio-scsi type exactly (at least they apparently use different drivers on the Windows side).
That’s why io.bus had to be changed to nvme once (those drivers are always loaded during boot). After booting with nvme, Windows detected the second 10 GiB disk and loaded the virtio-scsi drivers. That in turn makes Windows load them automatically on the next boot, so I was able to unset io.bus again.
I know about the nvme bus because it was discussed on the Incus forum that it’s possible to use the nvme bus to install Windows 11 without needing virtio drivers, since the Windows 11 installer includes nvme drivers.
This wasn’t possible with Windows 10 and earlier.
But I didn’t think it could help in this case. Great that you thought of that.
I’m having some issues with the new “Instances” (the new VM section). After creating some virtual machines, I now suddenly get this “Validation error” when navigating to the Instances tab, and all my VMs have disappeared (they are still running, though):
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/api/base/server/ws_handler/rpc.py", line 310, in process_method_call
result = await method.call(app, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 40, in call
result = await self.middleware.call_with_audit(self.name, self.serviceobj, methodobj, params, app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 877, in call_with_audit
result = await self._call(method, serviceobj, methodobj, params, app=app,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 686, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 174, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/virt/instance.py", line 133, in query
'netmask': int(address['netmask']),
^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: ''
This happened after I removed a NIC from a machine (macvlan) and added a new one (bridge).
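From the traceback, the query dies at int(address['netmask']) in plugins/virt/instance.py because Incus is reporting an interface address with an empty netmask string after the NIC swap. A hypothetical sketch of a defensive parse (not necessarily the fix iX will ship):

```python
# Hypothetical sketch: treat an empty netmask reported by Incus as "unknown"
# instead of letting int('') raise ValueError and kill the whole query.
def parse_netmask(address: dict) -> int | None:
    raw = (address.get('netmask') or '').strip()
    return int(raw) if raw else None


assert parse_netmask({'netmask': '24'}) == 24   # normal case
assert parse_netmask({'netmask': ''}) is None   # the case from the traceback
```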