OK, so I'm using the Ollama app on TrueNAS SCALE.
Versions:
Ollama: App Version v0.11.10, Version v1.1.21
Open WebUI: App Version v0.6.26, Version v1.1.18
The issue: I'm trying to set up Ollama on the server so it's accessible via the API for n8n and other services to connect to.
I've successfully pointed Firefox at my local server instance for AI chat (through the WebUI on port 31028), and that connects through to the Ollama back end fine.
However, going directly to the Ollama API (I have set up an API key for n8n to use), either via the n8n Ollama nodes or just in the browser at server:11434/api/tags, shows no models installed, even though I have many (the JSON output does contain a "models" key, but it's empty).
Also, if I use /api/models instead of /api/tags, I get a page-not-found error.
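For reference, this is roughly the check I'm doing against the API (SERVER_IP is a placeholder, and I'm assuming Ollama's default port 11434):

```
# Ask Ollama directly which models it has installed
curl http://SERVER_IP:11434/api/tags

# A healthy instance should return something like:
#   {"models":[{"name":"llama3:latest", ...}, ...]}
# Mine just comes back with an empty "models" array.
```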
Running the app's CLI (a docker container, I guess), the ollama list command shows no models, and even while a query is running the ollama ps command doesn't show anything as loaded.
The nvidia-smi command, run in the container shell, shows that the GPU does have a model loaded and is processing.
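For anyone following along, these are the checks I'm running from the host; the container name below is a placeholder, since on SCALE the real name is whatever the apps system generates:

```
# Find the running Ollama container (the actual name will vary on SCALE)
docker ps | grep -i ollama

# Ask Ollama what it thinks it has, and check the GPU
docker exec -it <ollama-container> ollama list   # shows no models for me
docker exec -it <ollama-container> ollama ps     # shows nothing loaded, even mid-query
docker exec -it <ollama-container> nvidia-smi    # GPU shows a model loaded and busy
```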
I have tried spinning up a second instance of the app (the previous one used port 30068 for some reason) and it picked up all the models straight away through the WebUI, so I guess it's using the same iX application filestore automatically? Same issues again though: the API doesn't report the models and the CLI shows nothing.
So:
A. How do I see through the API which models are available?
B. If the models aren't being reported back via the API, where do I need to look to address this?
C. Is this an inherent issue with the API under TrueNAS SCALE?
D. I have Dockge as an app to manage docker stacks; is there a simple yaml file to spin up Ollama that way instead (see the sketch just below)? (Though Dockge already shows the "apps" as installed docker containers.)
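In case it helps with D, this is the sort of compose stack I had in mind for Dockge. It's only a sketch based on my own assumptions: the stack path, volume path, and GPU section aren't from the official app, so treat them as placeholders.

```
# Rough sketch: a minimal Ollama stack for Dockge (paths and ports are assumptions).
# Dockge normally keeps stacks under /opt/stacks/<name>/compose.yaml.
mkdir -p /opt/stacks/ollama
cat > /opt/stacks/ollama/compose.yaml <<'EOF'
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"                    # expose the native Ollama API directly
    volumes:
      - /mnt/tank/ollama:/root/.ollama   # model store; point at your own dataset
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
EOF
```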
Any help would be grand. I can get the n8n Ollama nodes to connect when I put in the API details and key, but they just won't return the model list. It looks like it's Ollama itself that isn't publishing the details. Is there an environment variable I need to set? I have tried setting the OLLAMA_HOST variable to 0.0.0.0:11434, but that fails, stating the variable is already set…
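Since it complains the variable is already set, this is how I've been checking what it's currently set to (container name is a placeholder again):

```
# See what OLLAMA_HOST (and friends) are already set to inside the running container
docker exec <ollama-container> env | grep -i ollama

# Or dump the full environment the container was started with
docker inspect <ollama-container> --format '{{json .Config.Env}}'
```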
Update: I've done a bit of digging on GitHub and had some help. It turns out it's the WebUI that's acting as the server, not Ollama itself, so it seems I need to connect to port 31028.
But is there any way of configuring Open WebUI to forward API requests in the form n8n expects, or to just expose the Ollama back end directly on its own port 11434?
The WebUI is seemingly just getting in the way.
If I use ipaddress:31028/ollama/api/tags in the browser, it does list all the models, but I can't find a way to present the models from Ollama out to n8n.
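For what it's worth, this is the proxied call that works for me. From what I understand, when calling it from something like n8n, Open WebUI wants its own API key in an Authorization header rather than an Ollama one; that part is my assumption from the Open WebUI docs, so double-check it.

```
# Reach Ollama's API through the Open WebUI proxy instead of port 11434.
# The Bearer token here is an Open WebUI API key, not anything from Ollama.
curl http://SERVER_IP:31028/ollama/api/tags \
  -H "Authorization: Bearer $OPEN_WEBUI_API_KEY"
```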
Do I need to pull all the models down again inside Ollama itself via the CLI?
Ta,
STu.