NFS service not starting after upgrading from Core to SCALE

Yesterday I manually upgraded from Core 13.0-U6.2 to SCALE 24.04.2.3 via the GUI, following the "Preparing to Migrate" documentation. Everything works fine (SMB shares, pool access, etc.) except the NFS shares, which previously worked fine on Core. I need some help here.

When I try to start the NFS service in SCALE, I get the following error:

[EFAULT]
Oct 29 08:30:39 systemd[1]: Starting nfs-server.service - NFS server and services...
Oct 29 08:30:39 rpc.nfsd[69966]: rpc.nfsd: unable to bind AF_INET TCP socket: errno 99 (Cannot assign requested address)
Oct 29 08:30:39 rpc.nfsd[69966]: rpc.nfsd: unable to set any sockets for nfsd
Oct 29 08:30:39 systemd[1]: nfs-server.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 08:30:39 systemd[1]: nfs-server.service: Failed with result 'exit-code'.
Oct 29 08:30:39 systemd[1]: Stopped nfs-server.service - NFS server and services.

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 198, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1466, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1417, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 187, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 47, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service.py", line 205, in start
    raise CallError(await service_object.failure_logs() or 'Service not running after start')
middlewared.service_exception.CallError: [EFAULT] Oct 29 08:27:17 systemd[1]: Starting nfs-server.service - NFS server and services...
Oct 29 08:27:17 rpc.nfsd[69751]: rpc.nfsd: unable to bind AF_INET TCP socket: errno 99 (Cannot assign requested address)
Oct 29 08:27:17 rpc.nfsd[69751]: rpc.nfsd: unable to set any sockets for nfsd
Oct 29 08:27:17 systemd[1]: nfs-server.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 08:27:17 systemd[1]: nfs-server.service: Failed with result 'exit-code'.
Oct 29 08:27:17 systemd[1]: Stopped nfs-server.service - NFS server and services.
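For context, errno 99 on Linux is EADDRNOTAVAIL: the kernel refuses to bind a socket to an IP address that is not configured on any local interface, which is exactly what rpc.nfsd hit. A minimal Python sketch of the same failure mode (192.0.2.123 is a documentation-only TEST-NET address, assumed not to exist on your host):

```python
import errno
import socket

def try_bind(ip: str, port: int = 0) -> str:
    """Attempt to bind a TCP socket to (ip, port); return the outcome."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return "bound"
    except OSError as e:
        # errno 99 maps to EADDRNOTAVAIL ("Cannot assign requested address")
        return errno.errorcode[e.errno]
    finally:
        s.close()

print(try_bind("0.0.0.0"))      # wildcard address: always bindable
print(try_bind("192.0.2.123"))  # not a local address: fails like rpc.nfsd did
```

So the service failing this way strongly suggests the NFS config still carried a bind address that no longer exists on any interface after the migration.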

System:
OS Version: TrueNAS-SCALE-24.04.2.3
Product: X10SLL-F
Model: Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
Memory: 31 GiB

I just ran into exactly this with my migration from Core to SCALE. I've been googling it and looking at the release notes. So far, only this post has come close.

Did you find a resolution to this?

And right after I posted the above, I got it working.

Solution was:
Go to Shares → NFS Shares → Config Service → General Options

The "Bind IP Address" field was blank. I set it to bind to my normal IP address, hit Save, and then the service started up without issue.

Great work and thanks for the update.

Thanks for the update. I swapped from NFS to SMB since I did not find any answer; the NVIDIA SHIELD supports both.

I just checked the "Bind IP Addresses" help text: "Select IP addresses to listen to for NFS requests. Leave empty for NFS to listen to all available addresses. Static IPs need to be configured on the interface to appear on the list."

So I never thought about setting a fixed IP to solve this issue.
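That help text matches standard socket behavior: an empty bind list means the server binds the wildcard address and is reachable on every local address. A small sketch of that (using an ephemeral port, since port 2049 would need root):

```python
import socket

# Binding the wildcard address 0.0.0.0 (what an empty "Bind IP Addresses"
# list amounts to) makes the listener reachable on every local address,
# including loopback. Port 0 lets the OS pick a free ephemeral port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))  # a wildcard bind accepts via loopback too
conn, peer = srv.accept()
print("accepted from", peer[0])
conn.close()
cli.close()
srv.close()
```

Which is why a blank field should "just work"; the failure above points at the migrated config not actually being treated as blank.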

I know you might not want to test this out, but is it possible to set it back to blank and see if it works as it is supposed to?

Interesting.

I set it to blank, and restarted the service, and it is running again.

Note, I do have two different IPs on my NICs. I'm going to convert them to a LAGG soon, but haven't gotten around to it. I suspect having two IPs caused this.

I have a feeling it was one of those "blank isn't really blank" situations when you upgrade from Core to SCALE, where you have to force the setting by doing something with it in the GUI.

I've seen this before with something else in the TrueNAS GUI.

EDIT: I think it had to do with a dataset property. In an upgrade on Core, the option that appeared selected was not actually applied unless you deselected it and re-selected it to force it to apply.
