Problem/Justification
TrueNAS is perfectly capable of running headless; all administration can be done via SSH or the web UI. Yet if a GPU is present, TrueNAS generally claims it and holds on to it. On some motherboards users have managed to assign the GPU to a VM anyway, but that does not seem to work universally. And even when it does work, if something goes wrong, e.g. after an OS update, access to the console is impossible.
Impact
With the growing use of LLMs, there are more and more use cases for a GPU in a VM. Having to buy an (e)GPU when the system already has a largely unused one is unattractive from both a cost and a reliability point of view.
User Story
What I envision: the system uses the GPU during boot and initial setup. It then pauses for a user-determined period, disabling VM startup and giving the user a chance to use the console for troubleshooting, if needed.
Once that time has expired, the OS unbinds the GPU and then starts the VM, which can now claim the free GPU for its own use.
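The handoff described above could be sketched roughly as follows. This is only an illustration of the idea, not TrueNAS code: the PCI address 0000:01:00.0, the VM name gpu-vm, and the use of libvirt's virsh are all hypothetical placeholders, and the script defaults to a dry run that prints the steps instead of performing them (actually running them requires root and a passthrough-capable setup).

```shell
#!/bin/sh
# Sketch of the proposed grace-period GPU handoff (assumptions: Linux host,
# boot GPU at the hypothetical PCI address 0000:01:00.0, a libvirt VM named
# "gpu-vm" configured for passthrough of that device).
GPU="${GPU:-0000:01:00.0}"
GRACE="${GRACE:-300}"      # user-determined console window, in seconds
VM="${VM:-gpu-vm}"
DRY_RUN="${DRY_RUN:-1}"    # 1 = record the handoff steps instead of running them

PLAN=""
run() {
    if [ "$DRY_RUN" = "1" ]; then
        PLAN="${PLAN}$*
"
    else
        "$@"
    fi
}

# 1. Keep the console (and the GPU) on the host for the grace period.
[ "$DRY_RUN" = "1" ] || sleep "$GRACE"

# 2. Unbind the GPU from its host driver so the device becomes free.
run sh -c "echo $GPU > /sys/bus/pci/devices/$GPU/driver/unbind"

# 3. Route the device to vfio-pci, the stub driver used for passthrough.
run sh -c "echo vfio-pci > /sys/bus/pci/devices/$GPU/driver_override"
run sh -c "echo $GPU > /sys/bus/pci/drivers_probe"

# 4. Only now start the VM, which can claim the free GPU.
run virsh start "$VM"

printf '%s' "$PLAN"
```

The point of the sketch is the ordering: the VM is deliberately started only after the grace period has elapsed and the GPU has been released by the host.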
This way, any concern about user lockout due to an unavailable console goes away. Given that reboots should be a rather rare occurrence in a production environment, the extra few minutes of delay in VM startup is negligible compared to the benefit of having the GPU available in the VM.
This way both the needs of the TrueNAS OS, including rare administrative emergencies, and the optimal use of the hardware present in VMs are met.
Win-win.