Reading through the nvidia container runtime docs, it is indeed the issue.
So I will have to sit back and wait.
The other issue I’ve been posting about is a bug with the gpu attach to the instances and is a separate issue.
The bug report is about additional drives, which can already be removed via the GUI; only the wording “delete” is a bit off.
I am talking about removing the root disk of the VM, which currently cannot be removed via the GUI and is simply left there when you import VMs from 24.10 or other importable source files.
@Captain_Morgan I couldn’t find a Jira issue for the upstream nvidia-toolkit bug.
Will the toolkit be updated next month when 25.04.2 is released, or should we file a bug report? (Again, as far as I’ve seen, there is no report in Jira.)
If in doubt, file a bug… it’s better to have duplicates and people who can help troubleshoot.
Edit: On second thought… it’s not a bug yet, and I see no evidence that it will be updated in 25.04.2. We do not have a mechanism for documenting and tracking possible future bugs.
Based on the last TrueNAS Tech Talk video, Incus VMs will no longer be considered experimental as of the October release (IIRC), but containers still will be. Although the hosts said to keep using and creating Incus containers, what is the general feeling about their reliability? I’m still running TrueNAS CORE and considering moving over to SCALE (aging FreeBSD has blocked upgrades for several applications), but I really don’t want to go to the work of reconfiguring my jails as containers and then having to redo everything a year or two from now.
It’s a great question and worth a broader discussion. Until we remove the LXC experimental label, I’d consider the recommended order to be:
For each user, it’s useful to start with a list of Apps and then map them appropriately.
Use host path datasets and it’s possible to switch later.
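As a rough sketch of what that switch can look like (the dataset path, instance name, and device name below are placeholder assumptions), the same host path dataset that backs an App can also be bind-mounted into an Incus instance, so the data outlives whichever runtime you point at it:

```
# Assumption: the app's data already lives on a host path dataset
# such as /mnt/tank/appdata (used as a host path by a TrueNAS App).

# Bind-mount the same dataset into an Incus container named "myapp"
incus config device add myapp appdata disk \
  source=/mnt/tank/appdata path=/mnt/appdata

# Verify it from inside the instance
incus exec myapp -- ls /mnt/appdata
```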
Thank you, this is helpful. I’ll admit that I was hesitant to even consider Apps after being burned by plugins in FreeNAS.
Docker/containers are much better at isolating Apps from system/OS.
Not quite as powerful as jails, but very robust.
Thanks again. I will probably give Apps a shot. It looks like they cover most of what I need; for the remainder I can most likely use VMs.
Will they ever be as powerful?
LXC is close… it really depends on what you use.
That is simple: I want full segregation between sets of apps. Some apps are private, some are shared, and they live in different networks.
I want to avoid a situation where, if an app is compromised, all other apps and the host are at risk.
My layout is explained in more detail over here: Network Configuration for New Instances in TrueNAS SCALE 25.04 Fangtooth - #15 by PackElend
Does Docker have a weakness you are concerned about?
VMs and LXC give you more resource and performance isolation.
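As one concrete example of that isolation, Incus exposes per-instance limits directly. A minimal sketch, assuming an existing container named c1 (the name and values are made up):

```
# Cap an existing container "c1" at 2 CPUs and 4 GiB of RAM
incus config set c1 limits.cpu=2
incus config set c1 limits.memory=4GiB
```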
I’m not concerned about Docker.
As far as I have learned, you cannot put Apps on different networks, so workarounds like this exist: Inter-app communication in 24.10 Electric Eel
I could use Apps for my private things and put the public ones in LXC or VMs.
Moreover, I don’t want to reserve IPs via aliases or do that kind of micromanagement; I want to use DHCP as much as possible.
In the end it is mainly a proxy like Traefik, and communication to the apps is managed by Docker labels through the proxy (see the sketch below).
The only dilemma is how to limit access to my Apps when I want to share the admin role.
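To make the label mechanism concrete, a minimal sketch, assuming a Traefik v2+ instance is already running and watching the Docker provider (the network, container, image, and hostname are all made up):

```
# A user-defined network for one pool of apps ("private-net" is made up)
docker network create private-net

# Run an app on that network; Traefik picks it up via labels
docker run -d --name myapp --network private-net \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.myapp.rule=Host(`myapp.lan`)' \
  myapp-image
```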
The only thing that confuses me is what happens when I create a MACVLAN interface from an existing BRIDGE or VLAN interface on TN.
As I have different pools of apps for different network segments, I have to use VLAN interfaces.
As soon as I assign an IP address to such an interface, I open the door to the host.
I find it hard to accept that this simple setting plays such an important role in network isolation.
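For reference, one hedged way to get that isolation with Incus is a macvlan network on top of the VLAN interface, with no IP assigned to the parent on the host; macvlan child interfaces cannot talk to their own parent interface, which is exactly the host isolation in question (the interface and instance names below are assumptions):

```
# Create a macvlan-backed Incus network on top of an existing VLAN interface
# ("vlan20" is an assumed parent name; the host keeps no IP on it)
incus network create macnet20 --type=macvlan parent=vlan20

# Launch an instance on that network; it pulls its own address via DHCP
incus launch images:debian/12 web1 --network macnet20
```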
That is now available… not sure it meets all your needs.
Dockge provides some additional control.
That’s true; I forgot that.
You still need to assign IP aliases though (OK, it’s a convenience thing — beggars can’t be choosers).
But won’t inter-container communication then stop working? That is, won’t containers lose the ability to auto-discover each other?
(I need a few spare days to test that, maybe later this month.)
Amazing work @dasunsrule32.
I’ve migrated most of my jailmaker jails over using your guide. Unfortunately my TrueNAS install didn’t renew its self-signed certs and became difficult to access, so I ended up on 25.04.1; rolling back to 25.04.0 causes all sorts of issues, and I lose all my config changes (even when trying to restore a working config saved from 25.04.1).
As I can’t roll back… is there any update on NVIDIA support for Incus containers in 25.04.1?
I filed a bug report, but it was closed as it will be fixed in 25.10.
Not at this moment; it should be fixed in 25.04.2, as it’s a routine package they update most releases. We just caught that hanger on this build.
You can disable NVIDIA on those containers for the time being, until the end of July.
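If it helps, a hedged sketch of what disabling NVIDIA can look like from the shell, assuming the container was given GPU access via Incus’s nvidia.runtime option and a device named “gpu” (both are assumptions; check the config output first):

```
# See what the container currently has configured
incus config show mycontainer

# Turn off the NVIDIA runtime hook
incus config set mycontainer nvidia.runtime=false

# Remove the GPU device if one was added ("gpu" is an assumed device name)
incus config device remove mycontainer gpu

incus restart mycontainer
```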
As @LarsR said, they’ve marked it fixed in 25.10 as well.
There isn’t going to be an easy way to roll back to 25.04.0 with your Incus containers if they didn’t already exist. I’m hoping iX makes that more resilient moving forward by allowing the database to be moved to the pool, making it persistent across builds.
Got it. Thanks for the reply.
Until an update is released, can I suggest you add a note to your OP?
Possibly change this bit:
And add a note under “Known Issues” and/or “Do GPU’s work?”