Electric Eel NVIDIA GPU passthrough support

Hi Chris:

Sadly, I updated the Docker Compose file as you suggested (see below), but there is still no evident performance improvement. Thoughts?

Thanks again!

-Rodney

name: ollama-local
services:
  ollama:
    container_name: ollama
    restart: unless-stopped
    image: ollama/ollama:latest
    runtime: nvidia
    environment:
      # Only one NVIDIA_VISIBLE_DEVICES entry should be set; when both are
      # listed, the later one wins. Keep the GPU UUID to pin a specific card,
      # or uncomment "all" instead to expose every GPU.
      - NVIDIA_VISIBLE_DEVICES=GPU-7d073f23-6ec9-13d5-ea9b-52bcebf1f0a9
      # - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - "/mnt/storage/windows_share/Apps/Ollama:/root/.ollama"
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['all']
              capabilities: [gpu]
    healthcheck:
      test: ollama list || exit 1
      interval: 10s
      timeout: 30s
      retries: 5
      start_period: 10s
    networks:
      - ollama_network

  ollama-models-pull:
    container_name: ollama-models-pull
    image: curlimages/curl:latest
    command: >
      http://ollama:11434/api/pull -d '{"name":"llama3.1"}'
    depends_on:
      ollama:
        condition: service_healthy
    networks:
      - ollama_network

networks:
  ollama_network:
    driver: bridge
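P.S. In case it helps with diagnosis, here are the checks I can run once the stack is up. This is just a sketch of the usual GPU sanity checks; it assumes the container name `ollama` from the Compose file above and that the NVIDIA runtime injects `nvidia-smi` into the container.

```shell
# Confirm the container can actually see the GPU via the NVIDIA runtime.
# If this errors out, the passthrough itself is the problem, not Ollama.
docker exec ollama nvidia-smi

# After a model has been loaded (e.g. by sending a prompt), check where it
# is running. The PROCESSOR column should say "100% GPU"; "CPU" means
# Ollama fell back to CPU inference despite the GPU being visible.
docker exec ollama ollama ps
```

If `nvidia-smi` fails inside the container but works on the host, the issue is likely the runtime/device configuration rather than the model settings.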