Electric Eel NVIDIA GPU passthrough support

I have a P40 running Ollama with the qwen 32b model. I set up an Ubuntu VM and passed the Tesla through to it. I use it with Home Assistant, and response time is at least 10x faster than on CPU. I use a Quadro P620 for Emby transcoding.
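For anyone replicating this, the in-VM steps boil down to something like the sketch below (assuming passthrough is already configured; the install script URL and model tag are the standard Ollama ones, adjust to taste):

```
# Inside the Ubuntu VM, after the Tesla has been passed through:
nvidia-smi                                     # confirm the guest actually sees the GPU
curl -fsSL https://ollama.com/install.sh | sh  # official Ollama install script (also sets up the service)
ollama run qwen2.5:32b                         # pulls the model and opens an interactive prompt;
                                               # the background service listens on port 11434
```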

roberth58,

I went back and installed Ollama as an IX container, with Open WebUI support. It all installed OK and works; however, it’s very slow. In this setup, is there a way to pass through the Nvidia Tesla P4 GPU card that’s installed on the same system? Or is this a defect/feature request that must be made to IX container support?

Thanks!

Hi roberth58. I’m still trying to get my Nvidia Tesla P4 GPU card passed through to an Ollama container, with no success so far. TrueNAS sees the card if I try to isolate it, but there doesn’t seem to be a way to get the container to use the GPU resources, even after selecting the GPU in the app setup. I also tried to install the NVIDIA Container Toolkit, thinking something was missing, but apt can’t be run from the TrueNAS shell. I tried deselecting the Nvidia option in Apps > Settings; when I went back to re-select it, the checkbox was gone. Would it be possible for you to share the docker compose file you used to get the Ollama container working with an Nvidia GPU card? Thanks!
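In case it helps others searching for this, a minimal compose sketch that requests an NVIDIA GPU looks roughly like the following (untested on TrueNAS; the service and volume names are placeholders, and it assumes the host’s NVIDIA runtime is actually working):

```
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
volumes:
  ollama:
```

The `deploy.resources.reservations.devices` section is the compose equivalent of `docker run --gpus=all`; if the container still runs CPU-only with this, the problem is likely the host driver/runtime rather than the compose file.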

I am not using the Ollama app. As I said earlier, I set up an Ubuntu VM, passed my Tesla through to the VM, and installed Ollama and the NVIDIA drivers. I run the qwen2.5:32b LLM and access it from Home Assistant on the default port 11434.
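Home Assistant just talks to Ollama’s HTTP API, so a quick way to check the setup from another machine is something like this (the IP is a placeholder for the VM’s address; note that Ollama binds to localhost by default, so `OLLAMA_HOST=0.0.0.0` has to be set in the service environment for remote access):

```
# Placeholder IP; substitute the Ubuntu VM's address
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "qwen2.5:32b", "prompt": "Hello", "stream": false}'
```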

I see. Thanks!

Thank you for sharing; I tried the command as well and it worked for me under ElectricEel-24.10.0.2. Once the NVIDIA drivers were installed, my GTX 1660 6GB was detected, and I could run nvidia-smi in the TrueNAS shell to verify. Afterward, I installed the Ollama and Open WebUI docker apps from the TrueNAS community catalog, and both detected the GTX 1660. Thanks again for sharing; now I’m passing my experience on to others. 🙂
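For anyone following along, a quick sanity check once the drivers are in place (the CUDA image tag here is just an example, use whatever tag matches your driver version):

```
nvidia-smi   # should list the GTX 1660 in the TrueNAS shell
docker run --rm --gpus=all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the second command prints the same GPU table from inside a container, the Ollama and Open WebUI apps should be able to use the card too.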