Apps and Datasets Missing in UI - Fangtooth

Hi. I recently upgraded to Fangtooth. It was working fine, but after visiting the Apps section a few times I noticed I can no longer see any apps, and the Datasets page is empty as well. I have tried signing in and out (even though this is a server-side error). I have not tried rebooting yet because I am resilvering (a perfect storm right now).

I am happy to provide any other information. I tried to submit a bug report through the UI three times, but each time I was stuck on a page saying it was creating a ticket (which it never did).

Also, it would not let me upload images to the last post. The gods are not in my favor about reporting this bug!

“An error occurred: Sorry, you can’t embed media items in a post.”

I can’t even post links to where the images are on twitter…

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: services_catalog.label

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/ws_handler/rpc.py", line 323, in process_method_call
    result = await method.call(app, params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 40, in call
    result = await self.middleware.call_with_audit(self.name, self.serviceobj, methodobj, params, app)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 883, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 703, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 596, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/decorator.py", line 96, in wrapped
    result = func(*args)
             ^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/crud.py", line 70, in query
    available_apps_mapping = self.middleware.call_sync('catalog.train_to_apps_version_mapping')
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1013, in call_sync
    return methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/catalog/apps_details.py", line 32, in train_to_apps_version_mapping
    for train, train_data in self.apps({
                             ^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/decorator.py", line 96, in wrapped
    result = func(*args)
             ^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/catalog/apps_details.py", line 67, in apps
    catalog = self.middleware.call_sync('catalog.config')
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1002, in call_sync
    return self.run_coroutine(methodobj(*prepared_call.args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1042, in run_coroutine
    return fut.result()
           ^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/usr/lib/python3/dist-packages/middlewared/api/base/decorator.py", line 88, in wrapped
    result = await func(*args)
             ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/config_service.py", line 110, in config
    return await self._get_or_insert(self._config.datastore, options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/config_service.py", line 129, in _get_or_insert
    await self.middleware.call('datastore.insert', datastore, {})
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/connection.py", line 106, in execute_write
    result = self.connection.execute(sql, binds)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1365, in execute
    return self._exec_driver_sql(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1669, in _exec_driver_sql
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
    util.raise_(
  File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: services_catalog.label
[SQL: INSERT INTO services_catalog (label) VALUES (?)]
[parameters: ('',)]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
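For anyone else reading the traceback: the root failure is easy to reproduce in isolation. Here is a minimal sketch, using a hypothetical, simplified schema (the real `services_catalog` table has more columns), of the middleware's get-or-insert path inserting a default row whose empty label collides with a row that already exists:

```python
import sqlite3

# Hypothetical, simplified stand-in for the services_catalog table.
# The failure mode in the traceback: a "get or insert" path falls through
# to an INSERT of default values, but a row with the same (empty) label
# is already present, tripping the UNIQUE constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE services_catalog (label TEXT UNIQUE)")
conn.execute("INSERT INTO services_catalog (label) VALUES ('')")  # existing row

try:
    conn.execute("INSERT INTO services_catalog (label) VALUES ('')")  # duplicate
except sqlite3.IntegrityError as exc:
    print(exc)  # UNIQUE constraint failed: services_catalog.label
```

That matches the `[parameters: ('',)]` at the bottom of the traceback: the config row it tried to insert was all defaults, including an empty label.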


Hello sir. Love your videos.

If you go to the System tab and then click Advanced, can you get a debug to generate? It's a blue button at the top right. Sent you a PM as well.


Thank you so much! Sent! :grinning:

Yeah, that's new-user spam prevention; hopefully someone will elevate your status soon. Great vids, by the way.


@technotim
When was the first sign of trouble for you?

So first thing: we are seeing some read errors here, and I just want to make sure you are aware of them. Was this drive known to be bad before you upgraded?

  pool: storage0
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Apr 16 15:17:54 2025
	19.8T / 30.8T scanned at 1.14G/s, 2.89T / 14.5T issued at 170M/s
	2.89T resilvered, 19.98% done, 19:49:46 to go
config:

	NAME                                        STATE     READ WRITE CKSUM
	storage0                                    DEGRADED     0     0     0
	  mirror-0                                  ONLINE       0     0     0
	    9e832205-267c-4c0e-9d0e-b31a6422e471    ONLINE       0     0     0
	    d538b37b-5735-4ea1-a3f9-d097688a6240    ONLINE       0     0     0
	  mirror-1                                  ONLINE       0     0     0
	    56612791-5522-49fe-8d36-b90e2547a7b1    ONLINE       0     0     0
	    26825c51-6368-47ef-a22c-94dade53d772    ONLINE       0     0     0
	  mirror-2                                  ONLINE       0     0     0
	    7ab9dd41-6552-46b7-8e28-31be1189f8bf    ONLINE       0     0     0
	    3f805a81-387d-4b8d-89ed-b5863f62b039    ONLINE       0     0     0
	  mirror-3                                  ONLINE       0     0     0
	    764f9770-d33a-4289-9526-2f9540d9f1ab    ONLINE       0     0     0
	    a9715f92-1f37-49dc-b8da-5186c12e6ecd    ONLINE       0     0     0
	  mirror-6                                  DEGRADED     0     0     0
	    replacing-0                             DEGRADED     3     0     0
	      6f000df2-11d8-4ca8-a26e-57d564672231  REMOVED      0     0     0
	      c2c67ded-0a89-4a3e-80de-69d5ae0c2899  ONLINE       0     0  1022  (resilvering)
	    8ee71a22-479b-4a69-8cc5-70fc4f6f3436    ONLINE       0     0     0
	special	
	  mirror-10                                 ONLINE       0     0     0
	    1f320af8-e3e3-4d6b-915b-547646c7d6e7    ONLINE       0     0     0
	    8fd19ffb-197b-4f74-8a9e-a44292a85d43    ONLINE       0     0     0
	logs	
	  74610f32-2398-4013-a64a-55842c2a9203      ONLINE       0     0     0
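As a sanity check on the ETA in the status output above: resilver progress tracks the slower "issued" figure, not the "scanned" one, so the remaining time is roughly the un-issued data divided by the issue rate:

```python
# Back-of-the-envelope check of the resilver ETA from the zpool status above.
TIB = 1024**4
remaining = (14.5 - 2.89) * TIB    # data still to issue: 14.5T total, 2.89T done
rate = 170 * 1024**2               # issue rate: 170M/s
hours = remaining / rate / 3600
print(f"~{hours:.1f} hours left")  # ~19.9 hours, matching the 19:49:46 shown
```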


Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] tag#655 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=3s
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] tag#655 CDB: Read(16) 88 00 00 00 00 04 a2 ec 8b 58 00 00 07 f8 00 00
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: zio pool=storage0 vdev=/dev/disk/by-partuuid/6f000df2-11d8-4ca8-a26e-57d564672231 error=5 type=1 offset=10195598487552 size=1044480 flags=2148533416
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] tag#666 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=3s
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: zio pool=storage0 vdev=/dev/disk/by-partuuid/6f000df2-11d8-4ca8-a26e-57d564672231 error=5 type=1 offset=4491337142272 size=131072 flags=3146112
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: zio pool=storage0 vdev=/dev/disk/by-partuuid/6f000df2-11d8-4ca8-a26e-57d564672231 error=5 type=1 offset=4491337273344 size=131072 flags=3146112
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: zio pool=storage0 vdev=/dev/disk/by-partuuid/6f000df2-11d8-4ca8-a26e-57d564672231 error=5 type=1 offset=4491337404416 size=131072 flags=3146112
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] tag#640 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=3s
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] tag#666 CDB: Read(16) 88 00 00 00 00 04 a2 ec 93 50 00 00 04 c0 00 00
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] tag#640 CDB: Read(16) 88 00 00 00 00 04 a2 ec 83 68 00 00 07 f0 00 00
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: zio pool=storage0 vdev=/dev/disk/by-partuuid/6f000df2-11d8-4ca8-a26e-57d564672231 error=5 type=1 offset=10195599532032 size=622592 flags=2148533416
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: zio pool=storage0 vdev=/dev/disk/by-partuuid/6f000df2-11d8-4ca8-a26e-57d564672231 error=5 type=1 offset=10195597447168 size=1040384 flags=2148533416
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] Synchronizing SCSI cache
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: sd 0:0:0:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: mpt3sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221100000000)
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: mpt3sas_cm0: removing handle(0x0009), sas_addr(0x4433221100000000)
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: mpt3sas_cm0: enclosure logical id(0x500304801ca1ef04), slot(0)
Apr 16 20:09:05 andromeda.local.techtronic.us kernel: mpt3sas_cm0: enclosure level(0x0000), connector name(     )
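To quantify how noisy the failing disk was, a hypothetical helper like this can tally the zio error lines per vdev from kernel log text (for example, the output of `journalctl -k`). In these entries, `error=5` is EIO, and `DID_NO_CONNECT` means the HBA lost the link to the disk entirely:

```python
import re
from collections import Counter

# Hypothetical helper: count zio error entries per vdev in kernel log text.
def count_zio_errors(log_text: str) -> Counter:
    return Counter(re.findall(r"zio pool=\S+ vdev=(\S+) error=\d+", log_text))

# Demo on one line shaped like the log above:
sample = "kernel: zio pool=storage0 vdev=/dev/disk/by-partuuid/6f000df2 error=5"
print(count_zio_errors(sample))  # Counter({'/dev/disk/by-partuuid/6f000df2': 1})
```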

I can see some recent pool changes; can you help me understand a bit more about what's going on here? Were you just testing different LOG device configurations? The last entry is where the resilvering started.

2025-04-14.17:44:18 py-libzfs: zpool add storage0 log mirror /dev/disk/by-partuuid/31ac6d18-3756-4ba1-b09a-b1b1f5ebe626 /dev/disk/by-partuuid/ad884242-e1ad-4e50-8295-04e3d035a53b
2025-04-14.21:11:50 py-libzfs: zpool remove storage0None
2025-04-14.21:12:27 py-libzfs: zpool add storage0 log /dev/disk/by-partuuid/4b7dc2f0-7ddd-41d1-a2e8-3d5f034e85ba
2025-04-16.14:29:43 py-libzfs: zpool remove storage0 /dev/disk/by-partuuid/4b7dc2f0-7ddd-41d1-a2e8-3d5f034e85ba
2025-04-16.14:31:22 py-libzfs: zpool add storage0 log /dev/disk/by-partuuid/74610f32-2398-4013-a64a-55842c2a9203
2025-04-16.15:17:57 py-libzfs: zpool replace storage0 10957808227216399 /dev/disk/by-partuuid/c2c67ded-0a89-4a3e-80de-69d5ae0c2899

Between the changes you made to the log device and the replacement of the bad drive, I see zdb actually dumped core. It would be helpful if you could share any additional context you have.

Apr 16 15:06:05 andromeda.local.techtronic.us systemd-coredump[41011]: Process 37993 (zdb) of user 0 dumped core.

Module libudev.so.1 from deb systemd-252.33-1~deb12u1.amd64
Stack trace of thread 37993:
#0  0x00007fb557859ebc n/a (libc.so.6 + 0x8aebc)
#1  0x00007fb55780afb2 raise (libc.so.6 + 0x3bfb2)
#2  0x00007fb5577f5472 abort (libc.so.6 + 0x26472)
#3  0x00007fb557eb8f32 n/a (libzpool.so.6 + 0x59f32)
#4  0x000055cd33d9064e n/a (zdb + 0x1064e)
#5  0x000055cd33d98e66 n/a (zdb + 0x18e66)
#6  0x000055cd33d98ffe n/a (zdb + 0x18ffe)
#7  0x000055cd33da189d n/a (zdb + 0x2189d)
#8  0x000055cd33d8aa2a n/a (zdb + 0xaa2a)
#9  0x00007fb5577f624a n/a (libc.so.6 + 0x2724a)
#10 0x00007fb5577f6305 __libc_start_main (libc.so.6 + 0x27305)
#11 0x000055cd33d8ad41 n/a (zdb + 0xad41)

Did you have this problem before the disk problem? Or did you notice them at the same time?

I think the App issue started here?

[2025/04/16 16:43:43] (WARNING) middlewared.process_method_call():357 - Exception while calling app.latest(*[]) @cee:{"TNLOG": {"exception": "Traceback (most recent call last):\n  

In any case, I would like to start with a simple reboot; if you can send me another debug after that, it would be great. Feel free to wait until the resilver is complete if you prefer, but you should not have to.

Sorry about the lack of context. I upgraded today. I also added a few NVMe drives today; one did have a read error, but I ignored it since I wasn't going to use it right away.
Shortly after, I decided to replace a drive that had been throwing SMART errors about once a month, so that replacement was planned.
The Apps and Storage sections did work while I was resilvering (as far as I know), but once I refreshed the Apps page, that's when I started seeing the problem. My wife is watching something on Plex :joy: and something is recording. I will reboot and grab a debug soon! Thank you!

No need to be sorry. Just trying to follow the timeline to see if I can find some context clues here.

No rush. A great man once taught me:
“Even though we don’t have SLAs, all homelab adventures must respect the WAF” (Wife Acceptance Factor)


Totally! She’s fine with me rebooting anytime; she’s been putting up with this for years :sweat_smile: However, The Amazing Race is recording and I don’t want to reboot until that’s done, which should be in about 10 minutes.

OK, I rebooted and everything is fine. I DM’d you a new debug bundle. I would have rebooted as part of troubleshooting, but I was hoping to help with the bug by leaving my system in the state it was in.


Looking at it now, and can confirm middleware seems happy again.

To me it reads like a hardware issue with your NVMe drive (the SLOG) caused the middleware to end up in an unclean state. I don't think this one will be easily reproducible, but I should have what I need from these debugs.

Thanks man, sorry for the trouble.


No trouble at all and thank you so much!