Is there a way to manually restart the Kubernetes-to-Docker migration script? It failed on the initial run (see error below), and I can’t figure out how to kick off a retry.
[2024/10/06 01:05:09] (DEBUG) app_migrations.migrate():225 - Migration details for 'system-update--2024-10-06_05:01:35' backup on 'fast' pool
[2024/10/06 01:05:09] (DEBUG) app_migrations.migrate():231 - 'jellyfin' app failed to migrate successfully: 'Failed to migrate config: Failed to migrate config: Traceback (most recent call last):
  File "/mnt/.ix-apps/truenas_catalog/trains/community/jellyfin/1.0.23/migrations/migrate_from_kubernetes", line 51, in <module>
    print(yaml.dump(migrate(yaml.safe_load(f.read()))))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/.ix-apps/truenas_catalog/trains/community/jellyfin/1.0.23/migrations/migrate_from_kubernetes", line 17, in migrate
    "TZ": config["TZ"],
          ~~~~~~^^^^^^
KeyError: 'TZ'
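For what it's worth, the crash itself is just an unguarded dict lookup: the old Kubernetes config had no "TZ" key, and the migration script indexes it directly. A minimal sketch of the kind of defensive fix (the `migrate` function below is a simplified stand-in for illustration, not the actual catalog script, and "Etc/UTC" is an assumed fallback):

```python
# Simplified stand-in for the catalog's migrate_from_kubernetes logic.
# The real script does config["TZ"], which raises KeyError when the old
# k8s config never had a timezone set. Using .get() with a fallback
# (assumed default "Etc/UTC" here) avoids the crash.

def migrate(config: dict) -> dict:
    return {
        "TZ": config.get("TZ", "Etc/UTC"),  # fall back instead of KeyError
        # ... other keys would be mapped here in the real script ...
    }

# An old config without a TZ key no longer crashes the migration:
print(migrate({}))                        # {'TZ': 'Etc/UTC'}
print(migrate({"TZ": "Europe/Berlin"}))   # {'TZ': 'Europe/Berlin'}
```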
Some feedback from someone not using Scale, but using ZFS on Linux elsewhere, regarding the docker storage driver:
The performance difference between the old zfs driver and the overlay2 driver is monstrous. I have not done an apples-to-apples comparison, but deploying the Gitlab Omnibus image went from >5 minutes to <10 seconds (this is just the work that’s done after everything is downloaded and decompressed).
Now, big caveat, this is 2x2 NVMe mirrors versus a single mirror of SATA SSDs, so that’s going to account for some performance difference. But I’m still confident that overlay2 is going to be a much more pleasant experience.
So it doesn’t do funny things with datasets, snapshots and clones… Honestly, it’s probably better this way. The value that ZFS was adding with those features was marginal (the real value was not needing a dedicated partition for docker, or conceptually a zvol). At least zfs list isn’t completely polluted by endless docker datasets. I hate having to go back and always do zfs list -d3 just to get readable output.
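For anyone who hasn't needed it before, `-d` caps how many dataset levels `zfs list` recurses into, which is what keeps the docker clone sprawl out of the output (`tank` below is a placeholder pool name):

```shell
# Show datasets only up to 3 levels below the pool root,
# hiding deeply nested docker-created datasets/clones.
# "tank" is a placeholder pool name.
zfs list -d 3 tank
```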
I have upgraded my test machine to electric eel RC2 today. Basic functions seem to work as expected. However, I have two questions w.r.t. the new app system:
How can I completely reset an app, such that all settings are deleted? E.g., when I install emby, configure it, then uninstall the app, and reinstall it, it is still configured. I.e., the admin account still exists and configurations I did in the initial install within the app are still there.
Where is this data stored, and how can it be deleted?
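I can't speak to the exact layout on RC2, but judging by the `/mnt/.ix-apps` path visible in the migration traceback earlier in the thread, app state lives on a hidden `ix-apps` dataset plus docker named volumes. Something along these lines should reveal the leftovers (dataset and volume names here are assumptions; verify before deleting anything):

```shell
# Inspect where app state persists (names assumed from the
# /mnt/.ix-apps path seen in the migration traceback):
zfs list -r -o name,mountpoint | grep ix-apps
docker volume ls                  # named volumes survive an app uninstall
# docker volume rm <volume>       # removing one wipes that app's config
```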
Specifically for jellyfin, I realize that “Host network” and the DLNA plugin do not seem to work properly. When I configure jellyfin with host network (which is needed for DLNA due to broadcasting), and then install the DLNA plugin, the server is not found by my media players in the LAN. Doing the same setup with emby, it works as expected.
Really appreciate all the feedback. For those experiencing app migration issues (or any Apps service issues) on RC2, it would be really helpful if you could submit a bug report (link in the TrueNAS UI) and attach a debug.
Has anyone experimented with upgrading from Core 13.0 directly to Scale 24.10 yet?
My two TrueNAS systems have been on core for a long time. The addition of docker in 24.10 has sold me on upgrading to scale, though probably not until the release version, rather than RC.
Also, unlike in Cobia/Dragonfish, there doesn’t seem to be a way to view the logs of the failed container in the UI, although running docker logs manually is easy enough.
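Until the UI exposes them, the manual route looks roughly like this (the container name is whatever `docker ps` shows for your app; `ix-jellyfin` below is just a guessed example):

```shell
# List all containers, including stopped/failed ones, with their status.
docker ps -a --format '{{.Names}}\t{{.Status}}'

# Then tail the logs of the one that failed.
# "ix-jellyfin" is an example name; use the name from the list above.
docker logs --tail 100 ix-jellyfin
```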
In case you want your apps to update automatically, I’d use Watchtower for that. In its config you can also restrict it to updating only specific apps.
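A sketch of how that looks with the containrrr/watchtower image: passing container names as arguments limits updates to just those containers (the names and the daily poll interval below are example choices, not requirements):

```shell
# Run Watchtower, checking once a day, but only for the listed containers.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_POLL_INTERVAL=86400 \
  containrrr/watchtower \
  jellyfin emby   # example container names; omit to watch everything
```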
I have been testing the custom apps feature in EE, but I ran into too many issues where TrueNAS does not tell me what actually went wrong. The error message it throws is useless, and there is no way to access the docker log from the WebGUI.
So I just went with portainer (available as an APP) to manage my custom containers - which works like a charm.
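For reference, the stock Portainer CE deployment is only a couple of commands if you ever need it outside the catalog; this follows the pattern in Portainer's own install docs, so double-check the current port and image tag against them:

```shell
# Persistent volume for Portainer's own data.
docker volume create portainer_data

# Portainer CE, reachable on https://<host>:9443.
docker run -d -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```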