QNAP TS-877 TrueNAS Journal

So what does this all mean?


I tested the following:

  1. Nuked the jailmaker and docker datasets, even the whole TrueNAS boot pool, to essentially start from scratch (with backups, of course). The only things I kept were the 2 data pools, which I imported to recover my data (RAID IS NOT A BACKUP).

FYI, even though this worked, I still had a backup just in case. Don't do this if you don't have a backup; mistakes can happen.

  2. Successfully restored the pools via import, with no data loss. Even the encrypted dataset could be decrypted and accessed just fine.

  3. Managed to recover the docker container dataset data from my TS-253D backup via rsync (I used QNAP Hybrid Backup Sync to recover it).

  4. Managed to set up jailmaker correctly without deviating (no more crazy testing for me; that is what landed me in trouble in the first place, and I only did it to help you guys test what worked and what didn't).

  5. Managed to install docker using the jailmaker docker template, which I modified to include my bind mounts.

  6. Managed to set up working bridged networking, with a static IP for the docker jail. TrueNAS is static on a different IP, so they won't conflict on ports 80 and 443, meaning Nginx Proxy Manager works without issue for my docker containers.

  7. Fixing incorrect permissions worked. I simply copied the data from one dataset to another (created for temporary storage) via the shell command line. Once done, I deleted the original dataset, recreated it using the Generic preset, copied the data back the same way, then deleted the temp storage once I confirmed the data had copied successfully (a rough sketch of this is below the list).

  8. Deployed dockge from the shell: browse to the dockge folder where I keep its compose file, then docker compose up -d. With dockge I then deployed the rest of the containers. Most of them came up correctly, I fixed the previously broken ones, and Nginx Proxy Manager, the crucial one, recovered perfectly, to the point where http://dashy.domain.duckdns.org works without me having to do anything further.
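Since step 7 is the most command-line heavy part, here is a rough sketch of the copy-out / recreate / copy-back from the TrueNAS shell. The pool and dataset names are made up for illustration; substitute your own, and recreate the dataset in the UI with the Generic preset between the two copies.

```bash
# Hypothetical pool/dataset names -- adjust to your layout
# 1. Create temporary storage and copy the data out, preserving attributes
zfs create tank/tempstorage
rsync -a /mnt/tank/docker-data/ /mnt/tank/tempstorage/

# 2. Destroy the dataset with the broken permissions, then recreate it
#    in the TrueNAS UI using the Generic (Unix) preset
zfs destroy -r tank/docker-data

# 3. Copy the data back, verify it, then remove the temp dataset
rsync -a /mnt/tank/tempstorage/ /mnt/tank/docker-data/
zfs destroy -r tank/tempstorage
```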

Hopefully this case study proves the concept of backup and recovery for TrueNAS and how it works in practice (including recovery of docker containers running under jailmaker). It works if something happens and you need to start from scratch. I did the test and proved my backup and recovery strategy works.

If you didn't get the meme, it's the ending scene of Steins;Gate: the two characters thought they were doomed, but Hououin Kyouma came to their rescue with the spanky new updated time machine, saving them from a hopeless situation :rofl:

https://www.youtube.com/watch?v=eOuRw6DUTck


short answer: yes :saluting_face:


Going through my dockers in dockge and looking at the command line shown, I noticed Syncthing wasn't working. So I did an edit and redeployed, and it works with the previous setup, so I didn't have to start from scratch. It's still syncing to my smartphone just fine.

services:
  syncthing:
    image: lscr.io/linuxserver/syncthing:latest
    container_name: syncthing
    hostname: syncthing #optional
    environment:
      #      - PUID=1000
      #      - PGID=996
      - PUID=0
      - PGID=0
      - TZ=America/New_York # TZ must be an IANA zone name; "newyork" is not valid
    volumes:
      - /mnt/docker/data/syncthing/appdata/config:/config
    #      - /mnt/tank/keepass/data1:/data1      
    #      - /mnt/tank//Syncthing/data2:/data2
    ports:
      - 8384:8384
      - 22000:22000/tcp
      - 22000:22000/udp
      - 21027:21027/udp
    restart: unless-stopped
networks: {}

Note: don't deploy using the root user if possible, especially if you are doing remote access. I myself have to harden this when I have time to get around to it. I am by no means encouraging bad practice, hence the note.

Still couldn't get authentik to work x-x; no idea why. A bit unfortunate, but the fallback is the native apps' basic auth. It's local LAN so not too big a deal, just disappointing. At least valid HTTPS via Let's Encrypt works, so I get a proper URL rather than using LAN IPs all the time.

There's just no reason at this point to try traefik. I've used traefik for a while, and it requires learning and setting up more compared to Nginx Proxy Manager.

For example, you can modify the traefik config YAML for all your apps, or you can instead use traefik labels, which is what I was doing before. It had a nice UI to help troubleshoot and find where an issue was.

But for a small homelab, Nginx Proxy Manager is more than sufficient and far easier to set up and manage. Even in terms of performance, nginx is still king; traefik is almost comparable, so you can't go wrong with either, really.

I have used all of the listed ones. I would exclude the Synology one, as it offers very few features: good for learning the basics, but as soon as you learn the ropes you want something better. NPM is very simple to use and set up and has a good number of features, but the project is stuck where it was 3-4 years ago: not a lot of improvements, v3 has been a work in progress for years, and CVEs affecting NPM often take ages to get patched. Those are a few of the reasons I decided to drop it in favour of traefik. Traefik has a bit of a steep learning curve, but it has a really wide community that can help you solve most of the issues you might face setting it up, especially if you go the docker route. I use it in my Kubernetes cluster; there are a bit fewer examples around for that, but still a good number.

i guess security is one reason to go for traefik :thinking:

The thing about traefik labels is that they look complicated, but once you have them figured out, you can more or less copy-paste them into the other containers' compose files and just edit the name to match each container. A sketch is below.
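To illustrate (this is only a sketch: the hostname, cert resolver name, and entrypoint are placeholders, and it assumes traefik is already running with the docker provider enabled):

```yaml
services:
  whoami:
    image: traefik/whoami:latest
    container_name: whoami
    labels:
      - traefik.enable=true
      # placeholder hostname -- swap in your own domain
      - traefik.http.routers.whoami.rule=Host(`whoami.example.duckdns.org`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=letsencrypt
      # internal port the app listens on
      - traefik.http.services.whoami.loadbalancer.server.port=80
    restart: unless-stopped
```

For the next container you would copy the same labels and change every `whoami` to that service's name.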


Useful video for homelab networking planning and design, where you might consider where your TrueNAS fits in and how to keep it safe in the bigger picture, looking at your network as a whole and its various interactions with the other devices connected to it.

Generic is basically “Unix”, which is what you should be using for docker data datasets

Set permissions on the librespeed dataset

/mnt/docker/data/librespeed/appdata/config
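A minimal sketch of doing that from the shell, assuming the container runs with the common linuxserver.io defaults of PUID=1000 and PGID=1000 (match these to whatever your compose file actually sets):

```bash
# Ownership values are assumptions -- use the PUID/PGID from your compose file
chown -R 1000:1000 /mnt/docker/data/librespeed/appdata/config
chmod -R u=rwX,g=rwX,o= /mnt/docker/data/librespeed/appdata/config
```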

Note: a Generic dataset's permissions don't automatically apply to its child datasets (in my video you see me set the jellyfin config and transcode permissions per child dataset, not on the parent jellyfin dataset).

I plan to report a bug on that.


Make a different sandbox or VM for testing :wink:

TrueNAS can run TrueNAS in a VM…


Thought so, since jailmaker also says to use Generic for its dataset. By that logic I assumed it would be the same for the docker stuff as well.

At least I'm more on the right track than before xd

I'm pretty much a newbie when it comes to ACLs.

Back on QNAP QTS, all I needed to know was: create a user and group, assign read/write for the share, and that was about it.

getting better at it :sweat_smile:

Was wondering why the filebrowser docker container was unhealthy. Apparently it wasn't me; it was a bad release.

Sometimes when there is a problem, it might not be your fault. Check the GitHub :smiling_face_with_tear:

Nicely illustrates how to pin a version.

Also another case in point why auto-update might not be the best option. If you encounter an issue like this, just edit the docker compose to pin a working version, then change back to latest (or manually bump to a newer version) once the fix comes out. An example is below.
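For example (the tag below is a placeholder; pick the last release that worked for you from the project's tags):

```yaml
services:
  filebrowser:
    # pinned to a specific known-good release instead of :latest
    image: filebrowser/filebrowser:v2.27.0   # placeholder tag
    container_name: filebrowser
    restart: unless-stopped
```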

Some of these docker containers are abandoned; you have to watch out for that. If that's the case, it's time to move on.

Trivy is a useful docker tool for discovering vulnerabilities in the docker images you use. Findings there can be a red flag that maybe you shouldn't be using that container :sweat_smile:
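A quick way to run it as a one-off from the docker jail, scanning one of your existing images (the image being scanned here is just an example):

```bash
# Run the official trivy image and scan a locally used container image;
# mounting the docker socket lets trivy see images already pulled on the host
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image lscr.io/linuxserver/syncthing:latest
```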

I noticed that the Alpine-based docker images most often tend to have little to no vulnerabilities, compared to full-blown images that have a ton of them. Probably why Alpine is quite popular.

old video but it was the only one i saw that directly asked this question

It’s a minimal distro, so less surface area for vulnerabilities AND smaller image size

discovered another flaw in immich design

Apparently, if you put images into, say, albums, they won't be picked up by facial recognition.

why? :sweat:

immich is great but they keep making weird design choices that back themselves into a corner. hopefully they realize this oversight.

Regardless of whether they are in an album, or even archived, they should ALL go through facial recognition and show up under the faces category.

If there is further filtering to be done, the hide-faces option is already there for that.

But anyway, I digress x-x; just thought I'd mention this, since an app like immich would be an important part of your TrueNAS usage, especially if you are a picture hoarder like me :rofl:

in case you use that app and wondered why so few faces are being shown even after running the facial scan for people.

Noticed 2 docker containers (dupeguru, qdirstat), similar in how they operate (both use some sort of VNC), where I had trouble accessing the datasets. I did not have this issue on QNAP QTS.

Was reading the GitHub manual, and it said this in regards to permissions. Will keep investigating.

Another docker dev issue I found: authentik passwordless login broke with an authentik update.

taking a look at jellyfin graphics card

:thinking:

This is the instruction for NVIDIA.

But it's already asking to add a repo before you can apt install the files needed. Which one? And how?

Then it wants to mess with user groups and stuff. This is not really critical for me, so it's hard to justify proceeding.

And a bunch of other stuff.
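For what it's worth, if the repo in question is the NVIDIA Container Toolkit one (that is my assumption; check against the actual Jellyfin/NVIDIA instructions), NVIDIA's install docs boil down to roughly this on a Debian/Ubuntu-based jail:

```bash
# Add NVIDIA's signing key and apt repo for the container toolkit (per NVIDIA docs)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -sL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

apt-get update
apt-get install -y nvidia-container-toolkit
```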

I just use Jellyfin without triggering live transcoding, usually when I am unable to use my favourite mpv player with Anime4K upscaling.

so i’ll have to pass on this for now :sweat_smile:

Tim posted his homelab network setup, and Raid Owl is sharing his. He too uses TrueNAS.

@11:56 even Owl doesn't need a graphics card to accelerate his media player, same as me xd. I'm sure there is a good use case for it, but not really for me, since I won't be using it that often.

Did you try the linuxserver jellyfin image?

I nuked the immich docker container and redid it from scratch.

from time to time they update their docker compose and .env file so this is probably the case.

I suspect it wasn't fully working, even though it was functional.

Setup was quick, as long as I had my docker compose and .env, updated to the latest. A sketch of the steps is below.
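Roughly what that clean redeploy looks like; the download URLs follow immich's documented release assets, but double-check them against the current immich docs before relying on them:

```bash
# Fetch the current compose file and example env from the latest immich release
wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env

# Fill in .env (upload location, database password, etc.), then bring the stack up
docker compose up -d
```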

*update

As I suspected: after doing a clean install of immich and making sure the compose and .env were correctly up to date, it seems a bit more stable now.

A very odd issue someone pointed out, where people weren't being generated on a ZFS setup.

Solution: more RAM.

*update

Figured it out. Under Machine Learning > Facial Recognition there is this setting:

MIN RECOGNIZED FACES
3

Set that to 1, then go to Jobs > Facial Recognition > Missing.

Then go back to People and Show All. NOW it shows all the persons you asked it to create but that it previously didn't; this was why.

If a person is not recognized in at least as many photos as that setting specifies, they get ignored.

This is why on the immich side they simply say to add more pics of that person. Alternatively, set the value to 1.

So my earlier criticism of immich regarding adding persons was simply my lack of understanding that you needed to change that setting.

But gallery creation based on existing folders still remains a valid criticism. For that there is a temporary solution.

I don't know how safe that script is; I'm not a coder, so do your own due diligence. I assume it's OK, but you never know.