Yes, host paths are expected to transfer over cleanly. A lot of us use them internally as well and would be cranky if that didn't work.
Awesome!
Now I just need to figure out the smoothest way to migrate Nextcloud and Jellyfin…
Heyo, first: thanks!
Secondly: Will it be possible to point the Docker backend to an encrypted dataset for image storage and such? As far as I understood that was/is not a thing with the k3s solution as this always used the ix-applications dataset for which there is no button to enable encryption.
If you choose the host path storage option you can select a pre-existing dataset in your pool.
We have not tested or validated using encrypted datasets with apps, so we cannot advise you on whether this is a good idea, but there is nothing in SCALE app configurations that prevents you from selecting a path to a dataset with encryption.
Maybe I’m misunderstanding this - is that new in this version? When I go to Apps in my 23.10.2 installation I can only set a pool. My understanding is that the ix-applications dataset, where all app-related data is stored, is created in this pool (I assumed this includes the container images themselves?). But I do not see a way to use my own encrypted dataset inside of that pool instead of ix-applications, or to put a passphrase on ix-applications.
Here is the article explaining the Cobia 23.10 App screens and widgets. You can also visit the tutorials section for details on implementing some of the apps (we have not documented all available apps).
Setting storage to a host path is not new. If you click in the storage field where it says ixVolume, you can select the host path option. BUT… create your dataset(s) first, then you can enter or browse to the path to the dataset.
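For reference, creating the dataset ahead of time can also be done on the CLI. This is just a sketch with made-up pool/dataset names (`tank/apps/prometheus`); on SCALE you would normally use the Datasets screen instead:

```shell
# Made-up names -- create the dataset(s) BEFORE configuring the app:
zfs create tank/apps
zfs create tank/apps/prometheus

# In the app's Storage section, switch the field from "ixVolume" to
# "Host Path" and browse to the dataset's mountpoint:
ls -d /mnt/tank/apps/prometheus
```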
Here is an example from the Prometheus app tutorial that shows how you can set existing datasets.
Ah, I knew about that kind of host path. I think we are talking past each other (or I have a serious misconception about how any of this works). I think these settings are only for the storage the app itself uses / is given access to, not for storing e.g. the image used to start the app. So if I configure host path based storage for an app, that basically means passing the host's storage through to the app at the specified mountpoints.
That’s not what I’m talking about though - unfortunately I’m pretty unfamiliar with k3s in this respect, but I am fairly familiar with Docker, and since something along those lines seems to be the goal anyway I’ll use that as the example: dockerd has the --data-root
parameter, which is where the daemon stores the images among other things. With how the app setup currently goes, I’d assume the ix-applications dataset is being used for something akin to dockerd's data-root, and unless you intend to have a separate daemon per app I don’t think there would be a way to set this per app (unless that’s different for k3s) - this would have to be server-wide app config.
Now if I build an image on my computer and I throw it over to my SCALE server and load it into the daemon I may not want the docker daemon running there to save the image to the unencrypted ix-applications dataset (or the unencrypted system pool for that matter, which would be the docker default I think), I would much prefer to either set my own encrypted dataset for this or get the option to encrypt ix-applications like I can do with my own datasets.
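To illustrate what I mean on a plain Docker install (I don't know how configurable this will be under the TrueNAS middleware; the path below is made up):

```shell
# Plain-Docker sketch; /mnt/tank/docker is a made-up path.
# Option 1: point the daemon at a different data root on startup:
dockerd --data-root /mnt/tank/docker

# Option 2: the same thing via /etc/docker/daemon.json:
#   { "data-root": "/mnt/tank/docker" }

# Verify where the daemon actually stores images/layers/containers:
docker info --format '{{ .DockerRootDir }}'
```

If that data root could live on an encrypted dataset, the images at rest would be covered too.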
I’m curious as to the why of this, why are your images so confidential? Are they your own code perhaps? Anything that comes from docker hub and other places like it is just a standard image that anyone can find. So, I am presuming you have some other reason it needs to be encrypted.
All actual data / configs are (or can be) stored outside of the image of course, perhaps on an encrypted dataset. So, I'm more curious as to why you want to do this. If you don’t mind - not saying you’re wrong, just saying I wonder why.
As I am starting to think through the move from TrueCharts apps (the majority of my apps are TC) to Docker Compose, I am definitely going to miss the simplicity of integrated features like the built-in VPN - I am using it for the Firefox app and qBittorrent.
I’m in the middle of researching how to do it. I’ve found a docker-compose file for gluetun, the same VPN add-on TrueCharts uses. And as far as I understand it, you just have to add the container that should use the VPN to the gluetun network.
See GitHub - qdm12/gluetun: VPN client in a thin Docker container for multiple VPN providers, written in Go, and using OpenVPN or Wireguard, DNS over TLS, with a few proxy servers built-in.
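A rough, untested sketch of what that compose file could look like, using qBittorrent as the tunneled container. The provider and credentials here are placeholders - gluetun's environment variables differ per VPN provider, so check the gluetun wiki for yours:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad    # placeholder, use your provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=changeme  # placeholder
    ports:
      - 8080:8080   # qBittorrent's web UI is published via gluetun

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic goes through the VPN
    depends_on:
      - gluetun
```

The key line is `network_mode: "service:gluetun"` - the qBittorrent container shares gluetun's network stack, so its ports have to be published on the gluetun service.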
Thank you, this looks very promising. I still have some time, since I have to upgrade my Nvidia GPU (going from my current GTX 750 to probably an RTX 3050) even to get to Dragonfish before considering Electric Eel.
…which is a big part of why I’m planning on using their migration process rather than moving to whatever iX is doing with apps.
Ok, I’m catching up from behind here and grokking this.
I’m not against this change as it will be lighter on the NAS. It’s sensible.
It will be a rough transition for me, as I have one application need on TrueNAS, and it was sparked by the deprecation of the bundled S3 service on the box. I run MinIO, and I want it exposed on port 443 on a dedicated DNS name. In my head this is fairly simple.
To get this working in an automated fashion without conflicting with the TrueNAS SCALE UI, I had to go with TrueCharts. I needed MetalLB, Traefik, cert-manager, and recently had to add OpenEBS due to Dragonfish. I added external-dns for updating my Pi-hole cluster automatically, which is nice but not necessary.
So I need to know - how are people achieving a simple thing like exposing an app on 443 on a common load balancer with native TrueNAS charts? I want to move my setup to them so I am prepped for this migration when it comes. I can handle the sandbox setup… but I don’t want to. I have an external cluster that connects to this via democratic-csi. The one thing I don’t want on that cluster is S3, since MinIO cannot run in HA without enterprise, and I want it to live on the storage system with the rest of the storage services.
So far, I can find MinIO, and Nginx Proxy Manager will cover my cert and proxy needs. However, Nginx can’t announce an L2 IP like MetalLB, so what is the advice to complete this setup with native TrueNAS apps? Ideally one or two more apps to allow for announcing an L2 IP advertisement and exposing Nginx on that load balancer, to avoid conflict with the TrueNAS interface and give DNS something to point to?
Fair, I agree that for images that are public anyway this does not make an insane amount of sense - but not having this encrypted keeps what images have been used/are in use (and probably a bit more) visible. I think this is an unnecessary leak of info already, but I understand if this is seen as overkill in this case.
But as you already guessed (and I kind of alluded to), I would mostly be running my own software. And I would prefer not having to worry about giving anyone with physical access to the hardware access to what I bundle into the images, as long as I’m pushing them to and running them on my own physical server.
I’m not a huge fan of not getting the option to encrypt my boot/os pool/dataset/whatever either - Here I would assume that your viewpoint would be that it only holds the OS which is essentially public anyway? I’m thinking of temporary files, shell history and god knows what else is actually saved there. I just prefer having my machines be useless without my password (until re-imaging that is, of course) ^^ - gives me peace of mind
Generally, out of principle, given the chance I encrypt everything (give or take a couple of thumbdrives that contain ISOs, memtest and such - because here I actually don’t see a point), all of my machines use FDE (and preferably without SEDs (because of lack of experience and general distrust in the concept) or the password being saved on some kind of hardware (because I think that kind of defeats the purpose)). Having individual segments of my setups not encrypted feels a bit icky to me and I feel more comfortable just having everything encrypted indiscriminately
Also to me this is looks like allowing just another dataset to be encrypted - so to a certain degree “why not?”
I hope that clears up how I think&feel about this and why I asked that question ^^
I am not against encrypting the app pool - I started TrueNAS that way when I first set it up, ran into some sort of issue (don’t recall what), and ended up decrypting it. But how do you think someone with physical access can’t access what’s on there? The datasets are decrypted as soon as you boot the machine, assuming you could encrypt the app pool. Unless you mean only physical access - maybe a data center - and not any sort of software access via SSH, SMB, etc. Even then, if your machine can boot without you entering a passphrase, they can get in.
The boot pool is a whole other case - not as trivial, since you can’t boot from an encrypted filesystem without some parts being unencrypted.
That’s why I use Self Encrypting Drives and created GitHub - Jip-Hop/sedunlocksrv-pba: Conveniently unlock your Self Encrypting Drive on startup (via HTTPS) without the need to attach monitor and keyboard. Hardware encryption is at a lower level; the TrueNAS OS isn’t aware of it or involved. This provides encryption at rest: if someone were to break in, unplug, and take the NAS, at least I know the data isn’t accessible.
That would only be the case with how pool-level encryption is done here. That’s why I don’t rely on that (that’s one half of what I was referring to with “password being saved on some kind of hardware”, because in the end that’s what’s happening here), as I do not consider it proper encryption. What I do is have passphrase-based encryption on every dataset. This way, when I boot the machine, all of the datasets (well, currently except for ix-applications and anything on the boot pool) stay encrypted until decrypted with the passphrase (until the next poweroff or a manual lock).

I have not been asking whether I can encrypt the pool the ix-applications dataset is on - that I can do already, methinks (with the aforementioned limits that keep me from relying solely on that). I am asking if I will have the ability to slap passphrase encryption on that dataset like I can with all of my other datasets, so that I can have proper encryption for this like I do for the others ^^
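On the ZFS CLI, with made-up names (`tank/private`), the passphrase scheme I mean looks roughly like this (on SCALE you'd use the Datasets UI, but the behavior is the same):

```shell
# Sketch with made-up names: passphrase-based dataset encryption.
zfs create -o encryption=on -o keyformat=passphrase tank/private

# After a reboot the key is NOT loaded -- the data stays inaccessible
# until the passphrase is entered:
zfs get keystatus tank/private   # shows "unavailable" while locked
zfs load-key tank/private        # prompts for the passphrase
zfs mount tank/private

# Lock it again without powering off:
zfs unmount tank/private
zfs unload-key tank/private
```

Contrast that with key-file (pool-level) encryption, where the key sits on the boot device and everything unlocks automatically at boot.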
Very true, but using something like LUKS with your run-of-the-mill distro very heavily limits what is accessible without the passphrase - that really would be a nice-to-have. And booting from ZFS with encryption seems very much to be a thing, looking at ZFSBootMenu and the OpenZFS docs regarding “distro Root on ZFS”.
To be clear: unlike with the dataset encryption part, I was not expecting this to work.
Yeah, also fair, but I would prefer a software based solution. Though you are of course correct, this is a way of doing it where the OS doesn’t necessarily have to know about / be involved in the encryption.
If you are still on the S3 service in TrueNAS CORE or early SCALE releases, this is not just about moving to the app alone. That S3 service used MinIO’s now-deprecated Gateway and Filesystem Mode deployments. You must address this before you migrate the S3 service in TrueNAS.
https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html
If migrating in TrueNAS, we have documented procedures to help you transition from the S3 service to an app.
@bella Thanks for your response. However, I have kept up with releases and am on Dragonfish, and when the S3 service was deprecated I moved to the TrueCharts MinIO.
What I am looking for is how to get away from TrueCharts and run MinIO using the official apps. I can see the MinIO app is in the apps repo for TrueNAS, but the missing piece seems to be a way to announce an L2 IP that I can put in front of Nginx Proxy Manager.