I'm trying to make my AMD Radeon RX 6700 XT visible to the Open WebUI application.
I understand that the ROCm driver has to be installed for it (I wasn't sure whether that should be done in TrueNAS SCALE itself or inside the application), but the GPU is still not visible and the logs show the following errors:
2025-05-22 18:28:36.352664+00:00 time=2025-05-22T21:28:36.352+03:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
2025-05-22 18:28:36.353980+00:00 time=2025-05-22T21:28:36.353+03:00 level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
2025-05-22 18:28:36.353998+00:00 time=2025-05-22T21:28:36.353+03:00 level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
How can I make the AMD GPU visible to Open WebUI?
GPU passthrough was already configured in the TrueNAS SCALE UI.
P.S. I've managed to read up a bit (not my strongest suit) and pulled the ROCm image for Ollama instead of the usual one.
Now the GPU is detected, but the following error appears:
2025-05-23 07:38:38.445616+00:00 time=2025-05-23T07:38:38.445Z level=ERROR source=amd_linux.go:407 msg="amdgpu devices detected but permission problems block access: kfd driver not loaded. If running in a container, remember to include '--device /dev/kfd --device /dev/dri'"
I have also had to use HSA_OVERRIDE_GFX_VERSION, because I have a 6700 XT and the lowest officially supported card is the 6800.
What I still need to figure out is how to include /dev/kfd and /dev/dri.
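If I'm reading the error right, in docker-compose syntax that would look something like this (just a sketch of the relevant parts, not a full config; the HSA_OVERRIDE_GFX_VERSION value 10.3.0 is the common workaround for the 6700 XT's gfx1031 chip, which makes ROCm treat it as the supported gfx1030):

services:
  ollama:
    devices:
      - /dev/kfd:/dev/kfd                  # ROCm compute (Kernel Fusion Driver) interface
      - /dev/dri:/dev/dri                  # GPU render nodes
    environment:
      HSA_OVERRIDE_GFX_VERSION: "10.3.0"   # report gfx1031 (RX 6700 XT) as gfx1030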
I'm replying since I had the same issue as the OP and figured it out, and I wanted it documented in case anyone else runs into it. I hit this error when using the Ollama app, but I'm sure it applies to Open WebUI as well. You can change the Docker image in the app's YAML and it should work.
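Concretely, that means editing the image line in the YAML to point at the ROCm build, something like:

    image: ollama/ollama:rocm   # instead of the default ollama/ollama tag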
This is awesome, thank you. I used a modified version of this to get it working, but I'm having issues with Ollama communicating with other apps like Paperless-AI and Open WebUI. The default API port is 11434, and when I use a curl command to ask a test question I get a "failed to connect to server" error.
Update: fixed. I had to add OLLAMA_HOST: under environment.
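For anyone hitting the same thing: Ollama binds to 127.0.0.1 by default, so other containers can't reach it. The entry that fixed it looks roughly like this (11434 is Ollama's default API port; adjust if you've remapped it):

    environment:
      OLLAMA_HOST: 0.0.0.0:11434   # listen on all interfaces, not just localhost

After that, a quick curl http://<server-ip>:11434/api/tags from the other app's shell should return JSON instead of a connection error.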
If it helps someone, I was able to get it running with my 6900 XT by switching to a custom app and then grabbing the latest RC ROCm image. I didn't have to configure much else; I just had to specify my ROCm device, since I also have an AMD iGPU:
services:
ollama:
cap_drop:
- ALL
deploy:
resources:
limits:
cpus: '12'
memory: 24576M
devices:
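      # /dev/kfd is ROCm's compute interface; renderD128 under /dev/dri is the dGPU's render node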
- /dev/dri/renderD128:/dev/dri/renderD128
- /dev/kfd:/dev/kfd
environment:
NVIDIA_VISIBLE_DEVICES: void
OLLAMA_HOST: 0.0.0.0:30068
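      # limit ROCm to device 0 so the AMD iGPU is ignored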
ROCR_VISIBLE_DEVICES: 0
TZ: Etc/UTC
UMASK: '002'
UMASK_SET: '002'
group_add:
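      # host GIDs; 44 is typically 'video', 107 'render', and 568 is the TrueNAS 'apps' group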
- 44
- 107
- 568
healthcheck:
interval: 30s
retries: 5
start_interval: 2s
start_period: 15s
test:
- CMD
- timeout
- '1'
- bash
- '-c'
- cat < /dev/null > /dev/tcp/127.0.0.1/30068
timeout: 5s
image: ollama/ollama:0.12.4-rc6-rocm
platform: linux/amd64
ports:
- mode: ingress
protocol: tcp
published: 30068
target: 30068
privileged: False
restart: unless-stopped
security_opt:
- no-new-privileges=true
stdin_open: False
tty: False
user: '0:0'
volumes:
- bind:
create_host_path: False
propagation: rprivate
read_only: False
source: /mnt/APPS_DRIVE/APPS/OLLAMA
target: /root/.ollama
type: bind
volumes: {}
x-notes: >
# Ollama
## Security
**Read the following security precautions to ensure that you wish to continue
using this application.**
---
### Container: [ollama]
#### Running user/group(s)
- User: root
- Group: root
- Supplementary Groups: apps
---
## Bug Reports and Feature Requests
If you find a bug in this app or have an idea for a new feature, please file
an issue at
https://github.com/truenas/apps
x-portals: []