QNAP TS-877 TrueNAS Journal

before, i could use winscp over sftp as root to make changes, but now i can’t.

so instead i just log in to truenas as root, then in the shell enter the jail with jlmkr shell docker

from there i cd /etc

*can’t see the folders at first, but once you are in /etc you can ls.

once i’m in the right folder i can do a nano [file-name] to edit and save. of course you have to install nano first:
sudo apt update

sudo apt install nano

earlier i noticed there was no connectivity, so i went to truenas > network > global configuration

edit the nameservers to include your router’s lan ip, e.g. 192.168.1.1

maybe you also want to include say 1.1.1.1 and 1.0.0.1 for cloudflare dns as fallback, up to you

anyway, this was what fixed it. if you can’t pull apt updates or whatever because it complains about no connectivity, check whether you forgot to edit this.
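
for reference, a quick way to confirm the fix took, assuming the jail is named docker and uses --resolv-conf=bind-host so it inherits the host’s nameservers:

jlmkr shell docker
cat /etc/resolv.conf       # should list the nameservers set in global configuration
ping -c 3 1.1.1.1          # raw connectivity
ping -c 3 deb.debian.org   # name resolution, which is what apt actually needs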

hm, the 2 issues are still there

  1. docker at 192.168.0.24:5001 still doesn’t work. tested with other docker containers, same story

  2. when i create a user, then delete the user in the jlmkr shell, it spams an error about crontab failing to execute

now i’m thinking that for a complete clean install, i apparently have to wipe the boot drive as well :sweat:

i would have been fine if docker worked, but it didn’t, so now i’m at a loss

I think the issue is in your jail, not your install.

What is your network config for the sandbox?

Did you change anything in /etc/systemd/network?

BTW, your compose/stacks directory shouldn’t need to be recreated, as the files should be quite stable and able to run on almost any docker machine you configure… sort of the point

i modified the 80-container-host0.network file located in /etc/systemd/network:

[Network]
DHCP=false
Address=192.168.0.12/24
Gateway=192.168.0.1
LinkLocalAddressing=no
LLDP=yes
EmitLLDP=customer-bridge

so yeah, i didn’t miss the part about adding the static ip.
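
after editing that file, a minimal way to apply and verify it from inside the jail (assuming systemd-networkd is what manages host0, as in the jailmaker docker template):

systemctl restart systemd-networkd
networkctl status host0    # should show 192.168.0.12/24 and the 192.168.0.1 gateway
ip addr show host0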

i had a weird blip just now. dockge loaded for a moment while i was changing the truenas network setting, removing 192.168.0.24 from br1.

i even saw the url showing http://192.168.0.24:5001

but then it no longer worked after. weird

figured it out

it was apparently a BAD config for the docker template (not the one from jailmaker’s website; it was the custom one i had edited and was installing from)

i suspected it, so i tried the default template and made only the fewest, non-issue changes, e.g. the bind mounts.

then ran it and changed the network to static

checked docker is up,

installed dockge using my own compose


but really, what snuck into the config without me noticing? because that got me good x-x; maybe i had run a command or overwritten something without noticing. no idea.

if i had to make a guess, it might have been somewhere close to

--network-macvlan=eno1 --resolv-conf=bind-host

which i edited to --network-macvlan=br1 --resolv-conf=bind-host

i think i may have clipped the end of “host” or something, which broke the networking, and i didn’t notice. no idea x-x;

so be careful when editing and backspacing… :smiling_face_with_tear:
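
if something like that happens again, a quick way to eyeball the flags without rebuilding anything (assuming a recent jailmaker with the edit subcommand; the config path assumes the default jailmaker layout):

jlmkr edit docker                                          # opens the jail config in your editor
grep network /mnt/xxxxxxxx/jailmaker/jails/docker/config   # or just grep the nspawn flags directly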

well the other issue remains unsolved

root@docker:~# deluser test
deluser: `/usr/bin/crontab' not executed. Skipping crontab removal. Package `cron' required.
Removing user `test' ...
Done.
root@docker:~#

might have botched up cron in truenas or something, because it’s not going away x-x; but i don’t see it crashing the nas either. guess i’ll ignore it since i have no solution for it.

once dockge was installed, it was just a matter of clicking start to deploy all my docker containers detected by dockge (because their compose and .env files are picked up from the stacks folder)
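
for context, dockge treats each subfolder of the stacks directory that contains a compose file as a stack, so the layout it scans looks roughly like this (stack names are just examples; the path is the one used later in this thread):

ls /mnt/docker/compose
# dockge/  librespeed/  npm/      <- each folder is one stack
ls /mnt/docker/compose/librespeed
# compose.yaml  .env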

i’m deploying nginx proxy manager last. i’m least confident that will work right out of the gate x-x;

the containers that ran into issues failed because of a non-existent user based on the PUID/PGID, so i had to fix that. easy fix. most just started up fine.

it was a bit unstable, so i did a full factory reinstall.

popped in a usb flash stick with the latest dragonfish and reflashed over the boot drive.

when recovering, DO NOT create a new pool. instead, select IMPORT pool, assuming you want to recover your pool without deleting it.

tried it a few times; it seems to recover correctly, and fast too.

Do you think you edited TrueNAS’s crontab?

As opposed to one in the jail?

not sure but that would explain a lot. i’m halfway to recovery. i’m taking things slow so i don’t make mistakes

datasets are all there. all i’m doing right now is setting up jailmaker and the networking, then after that the docker container deploys. then gonna call it a day :smiling_face_with_tear:

got to postpone the jellyfin project another day, sorry >_<

hm :pray:

root@truenas[~]# cd /mnt/xxxxxxxx/jailmaker
curl --location --remote-name https://raw.githubusercontent.com/Jip-Hop/jailmaker/main/jlmkr.py
chmod +x jlmkr.py
./jlmkr.py install
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 70543  100 70543    0     0   132k      0 --:--:-- --:--:-- --:--:--  132k
systemd-nspawn is already installed.
Cannot create symlink because /usr/local/sbin/jlmkr is on a readonly filesystem.
Created bash alias jlmkr.
Created zsh alias jlmkr.
Please source /root/.zshrc manually for the jlmkr alias to become effective immediately.
Done installing jailmaker.
root@truenas[/mnt/xxxxxx/jailmaker]#

That’s all normal.

Nspawn is included in Dragonfish. Root fs is read-only in Dragonfish. No shortcut for you.
jlmkr aliases are added for bash and zsh, but for them to work you need to log back into the shell, or you could source the zshrc, since that’s what you’re using.
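
concretely, from the root shell that ran the installer (this just follows the installer’s own message):

source /root/.zshrc
jlmkr list    # any jlmkr subcommand should now work through the alias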


ok it worked.

installed jailmaker, ran the docker install script, modified the networking, added the nvidia toolkit. installed nano, curl, gpg, etc…

updated apt and the linux packages

docker compose up -d for dockge

and like in your video, it immediately worked. no drama

noticed this still happens

root@docker:/mnt/docker/compose/dockge# useradd tester
root@docker:/mnt/docker/compose/dockge# deluser tester
deluser: `/usr/bin/crontab' not executed. Skipping crontab removal. Package `cron' required.
Removing user `tester' ...
Done.
root@docker:/mnt/docker/compose/dockge#

i’ve done as clean an install as possible. i even wiped the boot drive. the only thing i didn’t do was wipe the pools, but i did wipe the jailmaker dataset, so there really wasn’t anything left over from the previous setup.

but it seems to be benign, and i only notice it when i run that command afaik. will simply ignore it :sweat_smile:
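
fwiw, the message just means the cron package (and /usr/bin/crontab) isn’t installed inside the jail, so deluser skips the crontab-removal step. if you ever want the warning gone, installing cron in the jail should do it; a guess based on the message text, not something verified in this thread:

apt update && apt install -y cron
useradd tester && deluser tester    # should no longer skip the crontab step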

nifty command

enter jlmkr shell docker

nano /etc/group

shows all the groups and their gids

when i created a docker user, i couldn’t create a docker group because it said it already existed. i wanted to set 1000:1000, which is the default for linuxserver images, but i couldn’t.

and the docker group wasn’t listed in the truenas group section.

so i had to find it out this way from the shell
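
a slightly quicker way to read the same info from the jail shell, without opening /etc/group in nano (plain standard tools, nothing jailmaker-specific):

getent group docker    # shows the existing docker group and its gid
getent passwd          # lists users and their uids, to see whether 1000 is taken
id someuser            # uid/gid for a user you created (name is a placeholder)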

some good info about what to do with the docker user in truenas

i think i may have botched the acls/permissions for the docker dataset. no i did not touch the jailmaker dataset.

but now i am getting permission errors on dockge containers being deployed.

what is the recommended setting for the acls on the docker dataset (not the jailmaker one, which is the generic default and you don’t touch after you first create it)?

is it generic?

i was thinking if changing the permissions doesn’t work, maybe i can stop docker first, make a new dataset, copy the data over, then delete the old one, remake it, and copy back. then turn docker on. would that work?
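
in rough shell terms, that plan would look something like this sketch (pool/dataset names are placeholders; the new dataset itself can be created from the truenas UI with the generic preset):

jlmkr stop docker                                  # stop the jail so nothing writes to the data
rsync -a /mnt/pool/docker/ /mnt/pool/docker-new/   # copy everything to the freshly made dataset
# delete the old dataset in the UI, recreate it, then copy back:
rsync -a /mnt/pool/docker-new/ /mnt/pool/docker/
jlmkr start docker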

ok so that worked.

those errors went away.

but i noticed

when deploying using user 0:0 i get a
FPM initialization failed [pool www] please specify user and group other than root

but when i changed it to the docker user i created in truenas, it worked:

services:
  librespeed:
    image: lscr.io/linuxserver/librespeed:latest
    container_name: librespeed
    environment:
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
      - PASSWORD=PASSWORD
      - CUSTOM_RESULTS=false #optional
      - DB_TYPE=sqlite #optional
      - DB_NAME=DB_NAME #optional
      - DB_HOSTNAME=DB_HOSTNAME #optional
      - DB_USERNAME=DB_USERNAME #optional
      - DB_PASSWORD=DB_PASSWORD #optional
      - DB_PORT=DB_PORT #optional
      - IPINFO_APIKEY=ACCESS_TOKEN #optional
    volumes:
      - /mnt/docker/data/librespeed/appdata/config:/config
    ports:
      - 3480:80
    restart: unless-stopped
networks: {}

i thought a generic dataset would be safest for the docker location. it doesn’t have smb access.

got most of the docker containers working.

even one that didn’t previously work is now working correctly, e.g. i couldn’t download the unifi backup config before, but now it works. guess it was previously a permission issue going crazy.

question is if nginx proxy manager works :pray:

*update

it worked :partying_face:
