Bringing BlueIris to TrueNAS SCALE with a Windows VM: Possible?

As part of a recent home upgrade with Emporia, I am considering consolidating the NUC that runs BlueIris into my NAS. The server is based on an embedded D-1537, i.e. it's pretty slow at 1.7 GHz across 8 cores, but the NUC doesn't set the world on fire with its i5-8259U @ 2.30 GHz either. Both have similar Passmark scores, and BlueIris consumes about 10% of the NUC's CPU and GPU while it's running.

Where the two differ is that BlueIris 5 makes good use of Intel Quick Sync to drastically reduce the CPU load of video processing. Sadly, the D-1537 lacks Quick Sync altogether. While another motherboard and CPU combination could be a solution (one based on a W-1390T Xeon, for example), I wonder if simply adding a Quick Sync-capable GPU to the existing server makes more sense and costs less?

Specifically, I am looking at the Intel Arc series, which can be had for around $100 and which could happily sit in one of my NAS's two empty PCIe 3.0 x8 slots. Power consumption is not a problem either, as the Seasonic 750W PSU has the headroom and the connectors necessary to keep an Arc happy. The next question, then, is around virtualization and passing the Arc through to the VM.

From what I remember, a VM in CORE would only allow up to 2 cores / 4 threads, or was that a Windows license restriction? In SCALE, the limit appears to be higher, and I thought I'd assign 4 cores / 8 threads to the Windows VM, along with the Arc GPU and 16 GB RAM, i.e. roughly half the NUC's resources. Does that seem like a reasonable starting point?
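Before committing to the card, it is probably worth confirming that the slot's devices land in their own IOMMU group, since PCIe passthrough needs the GPU (and its audio function) isolated from everything else. A quick loop over sysfs from the SCALE shell should show the grouping; this is generic Linux, nothing TrueNAS-specific:

```sh
# List every PCI device by IOMMU group. The Arc's VGA and audio
# functions should ideally sit in a group with nothing else in it.
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo "  $(lspci -nns "${d##*/}")"
  done
done
```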

As for Arc cards, I don't need anything super fancy, as I expect the Arc to basically loaf most of the time. However, while the A310 Elf and the A310 ECO from Sparkle look tasty, the PCIe 3.0 x8 slots on the X10SDV-7TP4F motherboard prevent either from being mounted directly: any card with a connector longer than physical x8 will hit either the HBA heat sink or a pin header.

Something like the ADT-Link R83SL graphics adapter would allow me to mount the graphics card adjacent to the motherboard. My case has plenty of slots, so that's not an issue.

The A310 series seems to be built around a tight power budget, i.e. neither card is supposed to draw more than the 75W a PCIe slot can deliver. Neither appears to have an easily-accessible power connector either. But at around $100, the card isn't that expensive, if it's even required.

That's the rub: allegedly, BlueIris 5's direct-to-disk writing of substreams should allow a lot of cameras to coexist happily even without Quick Sync. So the best path forward may be to set up a viable BlueIris VM and see if it can take over the NUC's duties, even if it lacks Quick Sync from the start.

An even better solution for NVR / AI recognition applications: ignore GPUs and go straight to a tensor processing unit (TPU). Google sells an external, USB3-powered version called Coral for about $75 that can outperform a GPU on this detection workload at minimal power / cost. CodeProject.AI (used by BlueIris) has a dedicated build that takes advantage of the TPU, so integration into BlueIris on the NUC is likely plug-and-play.
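If anyone wants to verify that the host actually sees the Coral before pointing CodeProject.AI or Frigate at it, a few lines of Python with Google's pycoral library should do; this is just a sanity check and assumes the Edge TPU runtime and pycoral are already installed:

```python
# Sanity check: enumerate attached Edge TPUs (USB or PCIe).
# Requires the Edge TPU runtime (libedgetpu) and the pycoral package.
from pycoral.utils.edgetpu import list_edge_tpus

tpus = list_edge_tpus()
if not tpus:
    print("No Edge TPU found - check cabling, drivers, and runtime.")
for tpu in tpus:
    print(f"Found Edge TPU: type={tpu['type']}, path={tpu['path']}")
```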

As a first iteration, I'll have another go at making it all work on my NAS using Frigate in Docker. It seems more polished than ZoneMinder and can also take advantage of a Coral TPU. If that doesn't work out, I'll go down the VM / BlueIris path again. Between sub-streams and the TPU, the puny CPU inside my NAS should be able to handle NVR duties.
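For reference, this is roughly the shape of the deployment I have in mind: a minimal docker-compose sketch per Frigate's documentation, with a USB Coral passed in. The dataset paths under /mnt/tank are placeholders for my pools, not anything Frigate requires:

```yaml
# Minimal Frigate compose sketch; /mnt/tank/... paths are examples.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "256mb"               # size per Frigate's camera-count guidance
    devices:
      - /dev/bus/usb:/dev/bus/usb   # expose the USB Coral to the container
    volumes:
      - /mnt/tank/apps/frigate/config:/config
      - /mnt/tank/surveillance:/media/frigate
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "5000:5000"                 # web UI
      - "8554:8554"                 # RTSP restreaming
```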

An interesting option with the TPUs…

A dual Coral M.2 PCIe card, mounted to an adapter, fitting in an M.2 2280 slot.

That's the type of thing that fits on an M.2 carrier card in a bifurcating slot on the X10SDV.

Watch out… IIRC, the Frigate Docker app claims in its installation / setup window that only the USB version of Coral is presently supported.

[Edit: I just pulled up the Frigate installer screen, so that claim comes from the installer; I do not know whether Frigate itself is limited to USB only, but further down, the installer specifically mentions opening the USB bus to the Frigate Docker container.]


That's a bummer. I've got the build @stux shared, and I was planning to use the TPU mounted via M.2-to-OCuLink for Frigate. I'll have to come up with something else. When I get to it, I'll have a play and see if I can persuade Frigate otherwise.

If anyone's interested: I'm using the dual-TPU-to-M.2 adapter from this page, and it's working insofar as both TPUs show up in the OS. GitHub - magic-blue-smoke/Dual-Edge-TPU-Adapter: Dual Edge TPU Adapter to use it on a system with single PCIe port on m.2 A/B/E/M slot
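For anyone wanting to double-check the same thing: with the gasket/apex driver loaded, each PCIe Edge TPU should show up both in lspci and as a /dev/apex_N node, something like:

```sh
# Each PCIe Edge TPU enumerates as a "Coral Edge TPU" PCI device and,
# once the apex driver is loaded, as /dev/apex_0, /dev/apex_1, ...
lspci -nn | grep -i coral
ls -l /dev/apex_*
```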


I have no doubt that it can be done. It will just require a bit more fumbling to find the right hooks in the /dev tree (or wherever they live).
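If the installer really only wires up the USB bus, the workaround is presumably to map the apex device node(s) into the container yourself and point Frigate's detector at them. A sketch, assuming the PCIe driver exposes /dev/apex_0 as usual and using the detector syntax from Frigate's documentation:

```yaml
# Frigate config.yml snippet for a PCIe Coral. The container also
# needs the device node passed through, e.g. a compose entry like:
#   devices:
#     - /dev/apex_0:/dev/apex_0
detectors:
  coral_pci:
    type: edgetpu
    device: pci:0        # first PCIe Edge TPU; pci:1 for the second
```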

I have found the Frigate UI to be quasi-unusable for setup, as a single error on the camera setup page will nuke the whole configuration. It's likely better to modify the setup from the shell, save, and then test in the Docker UI?

RTSP paths are another joy to deal with. Under BlueIris, the same camera's path works great, but there the query goes out via HTTP, etc.
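For the shell-editing route, a camera entry along these lines keeps detection on the low-res substream while recording the full-res main stream; the address, credentials, and stream paths below are placeholders for my cameras:

```yaml
# Example camera entry: detect on the substream, record the main stream.
# Host, credentials, and stream paths are placeholders.
cameras:
  driveway:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.50:554/stream2   # substream
          roles:
            - detect
        - path: rtsp://user:pass@192.168.1.50:554/stream1   # main stream
          roles:
            - record
    detect:
      width: 640
      height: 480
```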

On the other hand, in a VM you might be able to set aside / reserve hardware for Coral / BlueIris in a way that Frigate in a Docker container may struggle with.
