Are the DNS servers and default router supposed to update via DHCP when TrueNAS SCALE is connected to a different network (and rebooted)?
I’m running 24.10 and observing that once the DNS servers and default router are set the first time via DHCP, they stay that way and are not updated when the system is connected to a different network (even after reboot), even though the actual IP address does change with DHCP. I only have one interface active.
The documentation seems to suggest that they should refresh, but it isn’t very clear, and the behavior I am observing is that they do not. Have I missed something?
I would be interested in knowing if rebooting “fixes” your issue, because it should.
If it does not, there is a larger problem. There’s probably also a less heavy-handed approach than rebooting, though.
Still, you really shouldn’t keep your NAS on DHCP unless you expect your DHCP server to remain stable. Those settings aren’t something that should be touched frequently…
Understood about DHCP. But in my case I’m building a second system to take off-site so it will do ZFS replication via Tailscale. I’m moving between different networks (simulated off-site behind a VPN router) to ensure that replication via Tailscale continues to work before I take it to its destination 600 miles away. It is when changing networks for this that I have noticed this behavior. Very repeatable.
(I know it’s not the DHCP server because other clients behave correctly)
I have done both a simple reboot as well as a complete power off. No difference. Though I don’t really see what the difference from a software perspective should be either way.
I did this again after moving networks and rebooting.
Default router did update correctly via DHCP.
DNS servers were a partial success.
The new one given by DHCP got added to the list, but the old one remained.
Clearing out the old ones causes them all to come back claiming to be from DHCP, and even old entries that I had put in manually return (my DHCP server only provides one DNS server address).
Clearing them out again repeatedly has the same result: they come back.
I had to manually delete the old ones out of /etc/resolv.conf.
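For anyone hitting the same thing, here is a minimal sketch (assuming a Linux host with a readable /etc/resolv.conf) that just lists the nameserver entries, so stale DHCP-era servers are easy to spot before and after a reboot:

```python
from pathlib import Path


def resolv_conf_nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    """Return the nameserver addresses listed in resolv.conf."""
    servers = []
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        # resolv.conf nameserver lines look like: "nameserver 192.168.1.1"
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers


if __name__ == "__main__":
    for server in resolv_conf_nameservers():
        print(server)
```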
I was just hit by this bug too, on my just-finished backup NAS.
Rebooting did not solve the issue.
I have just finished setting up my MJ11-EC1 backup NAS
Installed with Dragonfish-24.04.2.3, and upgraded to Dragonfish-24.04.2.5.
The system was installed on another subnet, and was working fine.
Now I moved it to the “permanent” subnet, and when booting on the new subnet the system was unresponsive.
I could see that the NIC did a DHCP request and got DHCP information handed over for the new subnet. The IP address from the new subnet was set and used.
Having a BMC (IPMI) on the MJ11 board, I opened the console and got a Linux shell.
It was easy to verify that there was no default gateway set in the kernel routing table, making the system unreachable from anything outside its own subnet.
My management PC isn’t on the same subnet as the NAS, so that’s why the system was unresponsive to me.
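If you only have shell access and want to confirm the missing default route, here is a minimal sketch (assuming Linux and IPv4 only) that reads /proc/net/route and prints the default gateway, if any; it is just a convenience wrapper around what `ip route` already shows:

```python
import socket
import struct


def default_gateway(route_table: str = "/proc/net/route") -> str | None:
    """Return the IPv4 default gateway from the kernel routing table, or None."""
    with open(route_table) as fh:
        next(fh)  # skip the header row
        for line in fh:
            fields = line.split()
            destination, gateway = fields[1], fields[2]
            # The default route has an all-zero destination.
            if destination == "00000000":
                # The gateway is stored as a little-endian hex IPv4 address.
                return socket.inet_ntoa(struct.pack("<L", int(gateway, 16)))
    return None


if __name__ == "__main__":
    gw = default_gateway()
    print(f"default gateway: {gw}" if gw else "no default gateway set")
```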
The fix was “easy” for me …
Via the BMC console I selected option 2: Configure Network Settings.
And immediately I saw that the current:
ipv4gateway: reflected the OLD install-subnet DHCP info
nameserver1: reflected the OLD install-subnet DHCP info
I manually changed those to the correct values for the new subnet, and all was fine. The modified settings survived a shutdown/reboot too.
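For reference, the same stored values can also be checked from a shell instead of the console menu. This is only a sketch, assuming the SCALE middleware client `midclt` and its `network.configuration.config` method are available; the field names are assumptions based on what the console setup menu displays (ipv4gateway, nameserver1):

```python
import json
import subprocess


def global_network_config() -> dict:
    """Ask the TrueNAS middleware for the stored global network configuration."""
    out = subprocess.run(
        ["midclt", "call", "network.configuration.config"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)


if __name__ == "__main__":
    cfg = global_network_config()
    for key in ("ipv4gateway", "nameserver1", "nameserver2", "nameserver3"):
        print(key, "=", cfg.get(key))
```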
Whether this is a bug or a “feature” I have no idea … I lean towards BUG.
Bug description :
On Dragonfish-24.04.2.5, using DHCP for network settings:
The DHCP default gateway and DNS servers aren’t updated in the Linux (SCALE) configs if the unit is moved to another subnet.
Guess: the Dragonfish DHCP connect script doesn’t update the Linux config on every boot, just on install.
Edit: Made a bug report - You’ve created “NAS-133114” issue
I just got notified that iX has closed my ticket, with the reason below:
Hello. It is difficult for us to try and reproduce issues on older versions. Please try to reproduce on latest 24.10 and report back if the problem persists. Thank you for understanding.
So it seems like DF is now in a kind of “unsupported” state.