Installing NextCloud on TrueNAS Core 13.3

I need to install NextCloud on my TrueNAS Core 13.3-U6.3 system. I know the plugin approach is deprecated, so I tried following the procedure outlined in Samuel Dowling's guide, "How to install Nextcloud on FreeNAS in an iocage jail with hardened security". Everything was fine until I got to this command:

iocage create -n nextcloud -r 11.3-RELEASE ip4_addr="vnet0|192.168.0.10/24" defaultrouter="192.168.0.1" vnet="on" allow_raw_sockets="1" boot="on"

Obviously, I changed 11.3-RELEASE, trying 13.0-RELEASE first. That failed, saying “13.0-RELEASE not found”. Then I did

root@file01:~ # iocage fetch
[0] 13.3-RELEASE
[1] 13.4-RELEASE
[2] 14.0-RELEASE
[3] 14.1-RELEASE
[4] 14.2-RELEASE

so I tried using 13.3-RELEASE. Same result.

So, is there a way to get past this step in the installation procedure? How do I find a FreeBSD release version that works? Can I download 13.3-RELEASE using “iocage fetch” and then somehow tell the “iocage create” command where to find that downloaded version?
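
In other words, something along these lines is what I'm imagining (just a sketch, reusing the addresses from the guide's example above):

iocage fetch -r 13.3-RELEASE
iocage create -n nextcloud -r 13.3-RELEASE ip4_addr="vnet0|192.168.0.10/24" defaultrouter="192.168.0.1" vnet="on" allow_raw_sockets="1" boot="on"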

Or do I have to convert to TrueNAS Scale in order to get NextCloud?

Thx.

Thx. I looked at that before I posted, but I don’t meet the prerequisite conditions.

Surely there is a way to tell iocage where to find FreeBSD 13.x-RELEASE.

You mean the Let’s Encrypt part? That’s not necessary!
Just use NO_CERT in the config, or whatever better fits your use case.
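
Something like this in the script’s config file (a rough sketch; NO_CERT is the only line that matters here, the other names and values are just illustrative placeholders):

JAIL_IP="192.168.0.10"
DEFAULT_GW_IP="192.168.0.1"
NO_CERT=1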

Mmm, I don’t know honestly; it’s EOL.

You must use 13.4-RELEASE for the jail.

Same result:

root@<machine>:~ # iocage create -n nextcloud -r 13.4-RELEASE ip4_addr="vnet0|192.168..." defaultrouter="192.168..." vnet="on" allow_raw_sockets="1" boot="on"
RELEASE: 13.4-RELEASE not found!

Did you fetch 13.4-RELEASE?
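
That is, something like this before the create (a one-off fetch of the same release the jail will use):

iocage fetch -r 13.4-RELEASE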

No, didn’t know that was a required step. In any case, it doesn’t seem to work:

root@<machine>:~ # iocage fetch
[0] 13.3-RELEASE
[1] 13.4-RELEASE
[2] 14.1-RELEASE
[3] 14.2-RELEASE

Type the number of the desired RELEASE
Press [Enter] to fetch the default selection: (13.1-RELEASE)
Type EXIT to quit: 1
Fetching: 13.4-RELEASE

Extracting: base.txz...
Traceback (most recent call last):
  File "/usr/local/bin/iocage", line 10, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/iocage_cli/fetch.py", line 181, in cli
    ioc.IOCage().fetch(**kwargs)
  File "/usr/local/lib/python3.9/site-packages/iocage_lib/iocage.py", line 1105, in fetch
    ioc_fetch.IOCFetch(
  File "/usr/local/lib/python3.9/site-packages/iocage_lib/ioc_fetch.py", line 215, in fetch_release
    rel = self.fetch_http_release(eol, _list=_list)
  File "/usr/local/lib/python3.9/site-packages/iocage_lib/ioc_fetch.py", line 446, in fetch_http_release
    missing_files = self.__fetch_check__(self.files)
  File "/usr/local/lib/python3.9/site-packages/iocage_lib/ioc_fetch.py", line 644, in __fetch_check__
    self.fetch_extract(f)
  File "/usr/local/lib/python3.9/site-packages/iocage_lib/ioc_fetch.py", line 821, in fetch_extract
    f.extractall(dest, members=member, filter='tar')
TypeError: extractall() got an unexpected keyword argument 'filter'
root@<machine>:~ # iocage create -n nextcloud -r 13.4-RELEASE ip4_addr="vnet0|192.168... ...
RELEASE: 13.4-RELEASE not found!

Am I missing something else?

Thx for the help.
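
For what it’s worth, the traceback dies on the filter keyword to extractall(); my guess (unverified) is that the Python 3.9 build shipped here predates the point release that added that argument. A quick check from the shell (just a sketch):

python3 -c 'import inspect, tarfile; print("filter" in inspect.signature(tarfile.TarFile.extractall).parameters)'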

Just to be sure: you are on 13.3, and not on 13.0-U6.3, right?

No, 13.0-U6.3. Checking for updates shows none available for Core.

Now it’s clear: you can’t run jails on 13.0.
You have to move to 13.3 (with a manual update) or sidegrade to Scale.
IMHO, if you don’t have other services running that you can’t quickly migrate to Docker, no GELI encryption, and no particular SMB settings… go directly to Scale. It’s sad that Core is slowly dying; 13.3 is still affected by some bugs on scrubs, but nothing that impacts stability AFAIK.

To be clear, I have been running a jail on 13.0, namely NextCloud. However, I screwed up during a hardware upgrade and corrupted that jail. I can’t recover it, so don’t suggest trying that. But since I did have a working NextCloud jail before, there has to be some way to get it working again. If there isn’t any way to get past this current hitch, I’ll try using the scripted install that was suggested earlier.

I don’t get that, but… ok.

Yep, upgrading your system. There is a lot of discussion about that, with all the pros and cons.
If you don’t make this choice, not even the script provided will work; you will just waste time.

Ok, so I bit the bullet and tried upgrading to Scale 24.04.2.5. I saved the config file, with secret key, first. All seemed to go fine, but my pool is locked. I uploaded the config file and rebooted, but still locked. Any chance I can save this system?

Why not EEL? The choices for running apps are Core 13.3 or Scale 24.10.
Btw, what do you mean by “locked”? Do you have encryption? Did you follow the migration path guide? There are some things to check before sidegrading from Core. With the config file you have, as long as you don’t upgrade your pool, you should be able to roll back to Core.

I appreciate the help. “Locked” means that when I tried to view pools, my one-and-only pool just said “Locked”. However, after uploading my stored config file, things look a bit different. Now under “Storage” I see

  • 2 unassigned disks (I only have 2 disks plus the OS disk; these are all SSDs)
  • For the pool, “Topology” has an orange exclamation mark
  • Ditto for VDEVs: it says “offline VDEVs”
  • Pool status says “offline”

There’s a button for “Upgrade” (also Export/Disconnect and Expand).

I tried to follow the migration path, but maybe I missed something. If I want to revert to 13.0 Core, do I do that with a clean install from the ISO?

If you don’t have time to debug, roll back (clean install + config upload), and keep that for when you have time/fresh backup/ ecc.

Despite, i think this thread can have good int for start debugging what’s going on, in case start sharing the output of commands.

Thx. Can you suggest commands I could post the output from?

zpool status
blkid
zpool import

Check the troubleshooting in that thread.

Seems like the solution is to use the -f option on zpool import, but it doesn’t look like it was (completely?) successful:

root@file01[~]# zpool status
  pool: freenas-boot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:01 with 0 errors on Wed Dec 11 03:46:01 2024
config:

        NAME                                                STATE     READ WRITE CKSUM
        freenas-boot                                        ONLINE       0     0     0
          ata-INTEL_SSDSC2KW128G8_PHLA90950286128BGN-part2  ONLINE       0     0     0

errors: No known data errors
root@file01[~]# blkid
/dev/sdb2: LABEL="freenas-boot" UUID="9664600521250228929" UUID_SUB="10851798422660731551" BLOCK_SIZE="512" TYPE="zfs_member" PARTUUID="97438f21-ada0-11e9-9906-e06995bb4455"
/dev/sdc2: LABEL="netstore" UUID="1578279653714263933" UUID_SUB="2140628803152397115" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="68d06d3e-b495-11ef-9df5-e06995bb4455"
/dev/sda2: LABEL="netstore" UUID="1578279653714263933" UUID_SUB="5971810342591292039" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="c50461ff-b3f0-11ef-a59f-e06995bb4455"
/dev/sdb1: PARTUUID="9742a3f9-ada0-11e9-9906-e06995bb4455"
/dev/sdc1: PARTUUID="68ce06d7-b495-11ef-9df5-e06995bb4455"
/dev/sda1: PARTUUID="c501b83f-b3f0-11ef-a59f-e06995bb4455"
root@file01[~]# zpool import
   pool: netstore
     id: 1578279653714263933
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        netstore    ONLINE
          mirror-0  ONLINE
            sdc2    ONLINE
            sda2    ONLINE
root@file01[~]# zpool import -f
   pool: netstore
     id: 1578279653714263933
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        netstore    ONLINE
          mirror-0  ONLINE
            sdc2    ONLINE
            sda2    ONLINE

root@file01[~]# zpool import -f netstore
cannot mount '/netstore': failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets
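
Side note, in case it helps: since the Scale root filesystem is read-only, I’m guessing the mount failure could be avoided by importing with an alternate root under /mnt (or by importing the pool from the Storage page of the UI instead). A sketch only, not verified on this system:

root@file01[~]# zpool export netstore
root@file01[~]# zpool import -f -R /mnt netstore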