[Accepted] App Request: Uptime Kuma

Problem/Justification

Kuma (クマ/熊) means bear :bear: in Japanese.
A little bear is watching your website. :bear::bear::bear:

Uptime Kuma is not available in the Electric Eel app catalog.

It is easy to install:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1
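
If you prefer Compose, here’s a minimal sketch equivalent to the one-liner above (same image, port, and named volume):

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always             # same restart policy as the docker run
    ports:
      - '3001:3001'             # web UI on host port 3001
    volumes:
      - uptime-kuma:/app/data   # persists monitors and settings

volumes:
  uptime-kuma:                  # named volume, created by Docker if missing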

Impact

Uptime Kuma has to be installed manually.

Users will miss out on the awesomeness of Uptime Kuma.

User Story

User would like to install Uptime Kuma. Uptime Kuma is not in the catalog. User is sad.

1 Like

And next we install oneko to play with Okuma?

1 Like

+1. Uptime Kuma and Scrutiny were the only reasons I used TrueCharts.

Request for this is in the queue on the Apps repo:

2 Likes

So, can the feature request be accepted then, so that we get our votes back? :slight_smile:

Now that Uptime Kuma is available in the catalog (user is happy), can we get the votes released please? :slight_smile:

1 Like

PS: Scrutiny is also available now

1 Like

Super excited. I love @joeschmuck’s multireport, but having basic disk health stats from SMART in a web app UI is amazing. Thanks for letting me know, @Stux.

Guess I need to check these out. Scrutiny looks interesting; I couldn’t get a feel for Kuma from its website. Guess I will install soon.

I haven’t been able to get it to work :-\

2024-11-13 03:15:15.012553+00:00 s6-rc: info: service s6rc-oneshot-runner: starting
2024-11-13 03:15:15.018647+00:00 s6-rc: info: service s6rc-oneshot-runner successfully started
2024-11-13 03:15:15.018961+00:00 s6-rc: info: service fix-attrs: starting
2024-11-13 03:15:15.025921+00:00 s6-rc: info: service fix-attrs successfully started
2024-11-13 03:15:15.026223+00:00 s6-rc: info: service legacy-cont-init: starting
2024-11-13 03:15:15.032061+00:00 cont-init: info: running /etc/cont-init.d/01-timezone
2024-11-13 03:15:20.015043+00:00 s6-rc: fatal: timed out
2024-11-13 03:15:20.019007+00:00 s6-sudoc: fatal: unable to get exit status from server: Operation timed out

Is that from the Applications or did you manually install? Just curious.

You’re one of the people who, I’m sure, could figure out how to get something working.

It’s from the apps, which I’m testing.

Not really sure what’s wrong, and I figured I shouldn’t have had to dig that deep.

If it were manual I’m sure I would’ve fixed it too :slight_smile:

Just curious (and I know we are hijacking this thread, we can move it if you want): did you set it up with a hostpath or an ixvolume?

I was lazy and used an ixvolume this time (which is atypical for me) and it worked. Though you have to manually add every disk one at a time.

It would be good if we could split off the Scrutiny discussion.

I tried both options.

Using the following yaml works fine:

services:
  scrutiny:
    cap_add:
      - SYS_RAWIO
    container_name: scrutiny
    devices:
      - /dev/sda
      - /dev/sdb
    image: ghcr.io/analogj/scrutiny:master-omnibus
    ports:
      - '8089:8080'
      - '8090:8086'
    volumes:
      - /run/udev:/run/udev:ro
      - /mnt/tank/docker/scrutiny/config:/opt/scrutiny/config
      - /mnt/tank/docker/scrutiny/db:/opt/scrutiny/influxdb

So far,

I’ve found it works if I create the app with just a single disk device, but if I create it with two disks (i.e. /dev/sda and /dev/sdb), then it fails with a timeout.

Also, if I add a second device after successfully creating the app… then it fails with the same timeout :-\

So,

If you examine the rendered yaml…

healthcheck:
  interval: 10s
  retries: 30
  start_period: 10s
  test: >-
    curl --silent --output /dev/null --show-error --fail
    http://127.0.0.1:8080/api/health
  timeout: 5s

Changing the timeout to 10s resolves it. So it seems the timeout on the healthcheck is a bit too aggressive… possibly at startup…
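
For illustration, the same healthcheck with the relaxed value (only the last line differs from the rendered yaml above):

healthcheck:
  interval: 10s
  retries: 30
  start_period: 10s
  test: >-
    curl --silent --output /dev/null --show-error --fail
    http://127.0.0.1:8080/api/health
  timeout: 10s   # was 5s; curl can take longer than that while the container is still starting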

1 Like

Bug report filed:

https://ixsystems.atlassian.net/browse/NAS-132480

Good luck with that… (Make a Feature Request, maybe? :thinking: )