Hello,
I am a little confused about the future of Linux Containers in TrueNAS SCALE.
On the main page of SCALE there is this statement:
TrueNAS® SCALE is an Open Source Infrastructure solution. In addition to powerful scale-out storage capabilities, SCALE adds Linux Containers and VMs (KVM) so your organization can run workloads closer to data.
So it sounds like Linux Containers are planned for some future release?
But currently the only option is systemd-nspawn, available through the community script Jailmaker. Don’t get me wrong, it’s awesome that someone figured out a temporary way to have containers before LXC is implemented. But this is what worries me.
There is a saying: “There is nothing more permanent than a temporary solution.”
So while Linux Containers were promised, and to me that means LXC, I am a little worried that in the end systemd-nspawn will be chosen as a good-enough solution.
From what I gathered when trying to compare LXC and nspawn, LXC is more popular, has better documentation, has more features, is more flexible, and can even be somewhat more secure if configured correctly. From what I have read, nspawn is fine for easy development sandboxing, but for production LXC is the obvious choice.
So systemd-nspawn is kind of a “chroot on steroids”, as the Arch Wiki describes it,
while LXC is the real Linux Containers project.
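For illustration, here is a minimal sketch of that “chroot on steroids” style of use, assuming a Debian host with debootstrap and systemd-container installed (the /var/lib/machines/demo path is just an example):

```
# Build a minimal Debian root filesystem to play with
debootstrap --include=systemd,dbus bookworm /var/lib/machines/demo

# Use it like a chroot on steroids: an interactive shell in an isolated namespace
systemd-nspawn -D /var/lib/machines/demo /bin/bash

# Or boot it as a "real" container, which expects an init (systemd) inside the guest
systemd-nspawn -D /var/lib/machines/demo --boot
```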
No, it doesn’t[1], and this fundamental misunderstanding really kind of destroys the rest of your question. LXC was never promised and I don’t believe it’s intended to come. The containers are Docker containers, which SCALE has had since its first release.
I’m not going to engage this question one way or the other; the point is that iX never intended it to mean LXC ↩︎
And multiple other sources. It’s easy to just Google “Linux Containers” and see what the general meaning is. But it’s still prone to misunderstanding when multiple types of containers are being discussed, that’s true.
But it’s possible, I guess, that iX called it Linux Containers without actually meaning Linux Containers (LXC). Either way, there’s no need to get stuck on a naming dispute. It’s just one more reason why I am confused.
Another source that seemed to indicate iX meant to include LXC is this accepted suggestion.
So it looks like they did intend to include LXC. That’s why I am asking if the plans changed.
Yea, this is mostly a semantics situation. By “Linux Containers” we meant to distinguish between BSD and Linux, not between LXC and Docker. LXC is not currently on the roadmap; right now the immediate effort is focused on native Docker support coming into the product this fall.
“Linux Containers” with a capital ‘C’ is possibly LXC. (And I’ll keep wondering what’s LXD… Probably another case of the NIH syndrome.)
Most people would understand “Linux containers” as “these images you run with Docker”—or simply as “docker”.
Interpretations based on capitalisations should be taken with a high amount of salt… [Insert mandatory health warning here]
Jailmaker is a successful community contribution, which users now rely on to run docker, and possibly k3s/k8s or other services. Moving sandboxes to LXC would likely involve additional development work, including some migration tools from systemd-nspawn to LXC.
Could you please lay out the actual technical benefits, as well as drawbacks, of one solution over the other?
From what I understand, LXD/Incus is a layer above LXC.
LXC is more low-level, while LXD/Incus acts as a higher-level manager for easier use.
So they are not competing solutions but complementary products from the same devs. At least I think so.
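To make the layering concrete, roughly how the two look from the command line (the container name and image choices are just examples):

```
# Plain LXC: low-level tools, one container at a time
lxc-create --name demo --template download -- --dist debian --release bookworm --arch amd64
lxc-start --name demo
lxc-attach --name demo

# Incus (the community continuation of LXD): a higher-level manager built on liblxc,
# with an image server, a REST API and a friendlier CLI
incus launch images:debian/12 demo
incus exec demo -- bash
```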
As for LXC and nspawn: I googled for days to find the differences, but it’s not easy, because almost nobody uses both and can compare them.
I mainly based my preference for LXC on the fact that it’s more popular and has more documentation, so if something bad happened it would be easier to find a solution.
nspawn was released circa 2014; LXC was released in 2008.
nspawn has 1504 commits. LXC has 11765 commits.
While comparing like this is far from ideal, it appears to me that LXC is older and more developed than nspawn.
For example Proxmox also uses LXC.
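For what it’s worth, numbers like the commit counts above can be reproduced roughly like this (they drift over time, and counting commits under src/nspawn is my assumption about where nspawn lives in the systemd tree):

```
# LXC has its own repository
git clone https://github.com/lxc/lxc.git
git -C lxc rev-list --count HEAD

# nspawn is only a subdirectory of the systemd repository
git clone https://github.com/systemd/systemd.git
git -C systemd log --oneline -- src/nspawn/ | wc -l
```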
One of the downsides of nspawn mentioned during my search was that the container guest needs to run systemd as well to integrate properly. LXC doesn’t have this downside.
And from what I have read, LXC has somewhat better unprivileged containers; nspawn containers, on the other hand, always start as root and then drop privileges.
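A small sketch of that difference as I understand it (paths, names and ID ranges are made up, so treat this as illustration rather than a recipe):

```
# nspawn: the tool itself runs as root and can then shift the guest into a user
# namespace; -U picks an unused UID/GID range automatically
systemd-nspawn -D /var/lib/machines/demo --boot -U

# LXC: an unprivileged container is declared in its config with idmap ranges, e.g.
#   lxc.idmap = u 0 100000 65536
#   lxc.idmap = g 0 100000 65536
# and can be started by an ordinary user who owns matching /etc/subuid and /etc/subgid entries
lxc-start --name demo
```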
Either way, I hope someone more experienced than me can compare them in future.
One solution is to run Proxmox in Jailmaker (nspawn) and use it for both VMs and LXC containers. I do it like that and it works like a charm, so I have (almost) all the advantages of TrueNAS and Proxmox without the overhead.
Er… what’s the point of running a hypervisor in a virtual environment?
Virtualisation is complicated enough without starting with nested virtualisation, isn’t it?
I am running Proxmox in an nspawn container with full system privileges. It is not virtualisation but isolation, with close to zero overhead, so the hypervisor is effectively running on bare metal. The purpose is pretty clear: to get a “normal” virtualisation and containerisation manager with full clustering support, but without dedicating servers to it. I have a few servers in my homelab, but I’m not ready to have separate storage, application, etc. machines. It definitely can’t be an enterprise solution.
Proxmox in Jailmaker? I’d like to know how. I tried creating a jail based on a modified LXD template, just leaving out the LXD package install. Installing the Proxmox packages manually after jail creation, as per their wiki, fails because ifupdown2 refuses to install with either host or bridged networking in the jail.
@Stux I did get Proxmox to run in a jail by starting with a basic Debian jail and reconfiguring the jail’s networking to match how Proxmox works before attempting to install Proxmox in it. A brief test showed which devices, programs and systemd services Proxmox needed, which services should or could be disabled, and how you might edit the jail config to get both container and VM creation working in the Proxmox jail.
I’d be very wary of binding both /dev/zfs and /dev/zvol (used by Proxmox when creating VMs) to the jail, since that exposes all pools to the jail. As an experiment, configuring ZFS storage in the Proxmox jail works OK for containers, but VM creation on zvols does not work, failing with “timeout: no zvol device link” errors. There is no zvol_wait error, so this could all be down to how Proxmox uses udev on bare metal while it’s read-only in a jail, or simply because the Proxmox kernel is not used when running in a jail?
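For reference, the experiment boiled down to passing extra systemd-nspawn options along these lines (the rootfs path is made up, and this is exactly the kind of binding I’d be wary of):

```
systemd-nspawn -D /mnt/tank/jails/pve/rootfs --boot \
    --capability=all \
    --bind=/dev/zfs \
    --bind=/dev/zvol
# (depending on setup, a matching --property=DeviceAllow=... may also be needed)
# /dev/zvol is a tree of symlinks that udev maintains on the host, which may be why
# Proxmox times out waiting for a zvol device link when it runs inside the jail.
```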
I’ve only done basic tests, so without further feedback from @priezz with more detail about their config, I can’t say how useful and reliable Proxmox in a jail is when limited to “directory” storage.
Just to add that when testing the idea of Proxmox in a jail, starting a container in debug mode showed that access to bpf, loop devices, /sys/fs/fuse/connections and a working lxcfs.service were all required. The latter failed with an unmet condition, “ConditionVirtualization=!container”; editing this to “ConditionVirtualization=” allows lxcfs.service to start, but with what consequences?
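If anyone wants to reproduce that lxcfs edit without touching the packaged unit file, a drop-in override does the same thing (whether relaxing the check is actually safe is the open question):

```
mkdir -p /etc/systemd/system/lxcfs.service.d
cat > /etc/systemd/system/lxcfs.service.d/override.conf <<'EOF'
[Unit]
ConditionVirtualization=
EOF
systemctl daemon-reload
systemctl restart lxcfs.service
```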
How best to give a jail access to a host loop device? Should you just pick one at random to bind to the jail?
Proxmox does not use systemd-networkd or systemd-resolved for networking; the iproute2 and ifupdown2 packages are installed and the network is configured via /etc/network/interfaces, /etc/resolv.conf and /etc/hosts. hostname -i (--ip-address) must return the IP of the Proxmox host. On bare metal, the simplest Proxmox network would have one interface added to a bridge. I elected to give the Proxmox jail a separate host NIC via the “--network-interface=” option, adding it to a bridge configured with a static IP in the jail.
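To give an idea of what that looks like inside the jail, here is a sketch of the config with the interface names and addresses replaced by made-up examples (eth1, vmbr0, 192.168.1.50):

```
# /etc/network/interfaces inside the Proxmox jail
auto lo
iface lo inet loopback

auto eth1                 # host NIC passed in via --network-interface=eth1
iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0

# plus a matching /etc/hosts entry so that `hostname --ip-address` returns 192.168.1.50:
# 192.168.1.50  pve.example.lan pve
```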