Ollama LLMs - Upgrade via eGPU or new HW?

Team,

My question is: should I start with completely new server hardware to run Ollama LLMs,

or

or does it make sense / is it even possible to keep using my current HW (AOOSTAR WTR PRO N150, 16GB) and attach an NVIDIA card via an eGPU enclosure (Razer Core X) to speed up Ollama LLMs?
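One thing worth checking before buying a card: a GPU only gives a real speedup if the model's weights fit entirely in its VRAM, otherwise Ollama falls back to partial CPU offload. A rough sizing sketch (the 0.5 bytes/parameter figure assumes Q4-style quantization and the 20% overhead for KV cache/buffers is an approximation, not an Ollama-documented number):

```python
# Rule-of-thumb VRAM estimate for a quantized model.
# Assumptions: ~0.5 bytes per parameter at Q4 quantization,
# plus ~20% overhead for KV cache and runtime buffers.
def fits_in_vram(param_count_b: float, vram_gb: float,
                 bytes_per_param: float = 0.5, overhead: float = 1.2) -> bool:
    """Estimate whether a quantized model fits entirely in GPU memory."""
    needed_gb = param_count_b * bytes_per_param * overhead
    return needed_gb <= vram_gb

# An 8B model at Q4 needs roughly 8 * 0.5 * 1.2 = 4.8 GB,
# so it fits on a typical 8 GB card; a 32B model (~19.2 GB) does not.
print(fits_in_vram(8, 8))    # True
print(fits_in_vram(32, 8))   # False
```

So for an eGPU upgrade, the card's VRAM matters more than raw compute for picking which models become usable.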

Thank you for your feedback.

Neuro

Are you going to be running the models locally? If so, your experience is not going to be a good one even with an external GPU. I would suggest looking into OpenRouter with the hardware you listed.
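If you go the OpenRouter route, the call is just an OpenAI-style chat-completions POST against `https://openrouter.ai/api/v1/chat/completions`. A minimal sketch using only the standard library (the model slug `qwen/qwen3-8b` and the key are illustrative assumptions, not verified values):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request for OpenRouter's OpenAI-compatible API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending the request (needs a real API key):
# req = build_request("YOUR_KEY", "qwen/qwen3-8b", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The upside is that the N150 box only has to run the HTTP client; the model runs on OpenRouter's side.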

Thank you for your opinion.

Yes, I plan to host LLMs locally. qwen3 currently runs a little “slow”, but it works…

Hence my question about how to speed things up.

Does it make sense in general to host your own LLM if your suggested solution would be OpenRouter?

Depends entirely on what your goals are. I like to play around with free stuff since I’m very new to this. Ultimately you will have to decide what works best for you and what you want to accomplish, and choose the model that best fits your goals.

It will perform better than without a dedicated GPU, that is for sure.