Dispatcharr install

Anyone get this working? I'm trying to add it as a custom app as well as via custom YAML. The app starts but then stops, with nothing in the container logs at all.

I even tried adding it without mapping storage, just to see if there was some permissions issue with the volume mapping. Same result.

I tried but could not get it to work. The log file said something about a runtime error and a wrong entry point, but since I don't have a use for that app I gave up.

I am not even getting any errors that I can see. It starts, then stops. Nothing in the container live logs. Nothing in app_cycle.log. Nothing to go on.
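When the live logs in the UI are empty, the container's exit code can still give you something to go on. If you have shell access to the host, a few standard Docker commands will show the last run (this assumes the container name `dispatcharr` from the compose files in this thread):

```shell
# Show the container's final state, including its exit code
docker ps -a --filter name=dispatcharr

# Pull the exit code and OOM-kill flag straight from the container metadata
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' dispatcharr

# Logs from the last run survive even after the container has stopped
docker logs --tail 100 dispatcharr
```

An exit code of 137 usually means the container was killed (often out of memory); anything non-zero plus empty logs suggests the process died before logging started, which fits a permissions or entrypoint problem.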

Which compose file did you try? There were three on GitHub: the AIO, the dev, and the regular one. With the AIO compose I get the same result as you: the app deploys but crashes.
The regular one doesn't even save for me, but the log file shows an error about a wrong Docker entry point.
But as I mentioned, that's all the testing I did.

I used the AIO one.

I don't believe you can use the AIO one without also deploying a standalone Redis container, which gets referenced in the AIO compose file…
In the regular one there are also entries for a Postgres DB and a Redis container.
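For reference, a separate DB/Redis stack in compose looks roughly like this. This is only a sketch: the service names, image tags, and credentials here are placeholders, and Dispatcharr's exact environment variable names should be taken from the regular compose file in the repo.

```yaml
# Sketch of standalone Postgres + Redis services (placeholder values)
services:
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=dispatch       # placeholder credentials
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_DB=dispatcharr
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:7

volumes:
  postgres_data:
```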

I saw the reference to the DB, I just assumed the AIO one included it. The GitHub page says "Recommended! A simple, all-in-one solution — everything runs in a single container for quick setup."

As far as I can see, no, it does not.
But I don't know what

DISPATCHARR_ENV=aio

does. So maybe that env var is used in the background to set up the DB and Redis.

OK, I gave the AIO YAML another try and got it running.
I created a new dataset and added user 999 (netdata in TrueNAS) to the ACL list with full control.
It seems the env variable really does set up the DB in the background, and it fails if Postgres doesn't have the permissions it needs, which adding user 999 fixed.
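If you'd rather do it from the shell than the UI ACL editor, the equivalent is to grant UID 999 access to the data dataset. The path below is the example one from the compose; substitute your own pool and dataset, and note that on datasets using NFSv4 ACLs the TrueNAS UI editor is the safer route:

```shell
# Give UID 999 (what the AIO container's Postgres runs as; also the
# netdata user on TrueNAS) ownership of the data directory
chown -R 999:999 /mnt/Poolname/datasetname/dispatcharr_data

# Or keep existing ownership and grant the UID full access via a POSIX ACL
setfacl -R -m u:999:rwx /mnt/Poolname/datasetname/dispatcharr_data
```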

services:
  dispatcharr:
    # build:
    #   context: .
    #   dockerfile: Dockerfile
    image: ghcr.io/dispatcharr/dispatcharr:latest
    runtime: nvidia
    container_name: dispatcharr
    ports:
      - 9191:9191
    volumes:
      - /mnt/Poolname/datasetname/dispatcharr_data:/data
    environment:
      - DISPATCHARR_ENV=aio
      - REDIS_HOST=localhost
      - CELERY_BROKER_URL=redis://localhost:6379/0
      - DISPATCHARR_LOG_LEVEL=info
    # Optional for hardware acceleration
    #devices:
    #  - /dev/dri:/dev/dri  # For Intel/AMD GPU acceleration (VA-API)
    # Uncomment the following lines for NVIDIA GPU support
    # NVidia GPU support (requires NVIDIA Container Toolkit)
    #deploy:
    #  resources:
    #      reservations:
    #          devices:
    #              - driver: nvidia
    #                count: all
    #                capabilities: [gpu]


volumes:
  dispatcharr_data:


Trying again, as a custom app, and it still just stops. The funny thing is I did not even map storage yet; I was just trying to get the app to stay running and map it after. So permissions should not be my issue.

EDIT: Looks like it doesn't like the custom app. Below is the YAML that works when the volume mapping is commented out. Add the mapping back in and it fails. Likely a permissions issue, I would assume. I'll have to play with it when I can.


services:
  dispatcharr:
    container_name: dispatcharr
    environment:
      - DISPATCHARR_ENV=aio
      - REDIS_HOST=localhost
      - CELERY_BROKER_URL=redis://localhost:6379/0
      - DISPATCHARR_LOG_LEVEL=info
    image: ghcr.io/dispatcharr/dispatcharr:latest
    ports:
      - '9191:9191'
    restart: unless-stopped
    #volumes:
      #- /mnt/Data/Apps/dispatcharr/data:/data

I did give netdata full control on the dataset with no luck.

Don't know what to tell you. I used the YAML from my last post, and these are the permissions I set on the dataset:

Thanks, that helps. I’ll play around. Seems like a permissions issue (isn’t it always?).

How would I even apply this YAML file?

Click on Discover Apps; next to the big blue Custom App button in the top right corner are three dots. If you click on them, the option to "Install via YAML" appears. It's kinda hidden, I know…