I’m not sure whether this is an ‘issue’ or something I can correct myself, though I think future releases should raise the limits.
So far, the new Apps are working great. They start up and shut down faster, with noticeably lower overhead. I miss the network/CPU/memory stats from Dragonfish, but on GitHub it looks like work is already in progress to add the stats back later.
The issue is that at around 30 running apps, I’m no longer able to start more:
```
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 469, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 513, in __run_body
    rv = await self.middleware.run_in_thread(self.method, *args)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1356, in run_in_thread
    return await self.run_in_executor(io_thread_pool_executor, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1353, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 55, in nf
    res = f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/app_scale.py", line 36, in start
    compose_action(app_name, app_config['version'], 'up', force_recreate=True, remove_orphans=True)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/compose_utils.py", line 52, in compose_action
    raise CallError(f'Failed {action!r} action for {app_name!r} app: {cp.stderr}')
middlewared.service_exception.CallError: [EFAULT] Failed 'up' action for 'qb-test' app: Network ix-qb-test_default Creating
Network ix-qb-test_default Error
failed to create network ix-qb-test_default: Error response from daemon: all predefined address pools have been fully subnetted
```
`docker network inspect bridge` shows a /16, so I’m not sure where the limitation is. I’m new to Docker, so I may not be looking in the right place.
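To see where the address space is actually going, I’ve been listing every network with the subnet Docker assigned to it (this assumes the stock `docker` CLI is usable from a shell on the box):

```
# Print each Docker network and the subnet its IPAM config was given
docker network ls --format '{{.Name}}' | while read -r net; do
  docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
done
```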
What’s curious to me is that when I inspect each app’s network, each one seems to have been given either a /20 or a /16, and I’m not sure why they get such large subnets.
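From what I can tell from the Docker docs (I’m new to this, so treat this as a guess), the daemon carves each new network out of a fixed set of “default address pools”: roughly 172.17.0.0/16 through 172.31.0.0/16 handed out as whole /16s, plus 192.168.0.0/16 handed out as /20s. That would be 15 + 16 = 31 networks total, which seems to match both the mix of /16s and /20s I’m seeing and the ~30-app ceiling. On plain Docker the pools can apparently be overridden with the `default-address-pools` key in `/etc/docker/daemon.json`, something like:

```json
{
  "default-address-pools": [
    { "base": "10.10.0.0/16", "size": 24 }
  ]
}
```

A /16 base carved into /24s would allow up to 2^(24-16) = 256 networks, and a /24 is still plenty for the handful of containers a typical app runs (the base just needs to avoid overlapping your LAN or VPN ranges, and the daemon needs a restart afterwards). What I don’t know is whether the TrueNAS middleware manages `daemon.json` itself and would overwrite a manual edit, which is why I’m raising it here rather than just patching it locally.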