Please explain video streaming/H264/H265/HLS to a thicko like me

Hello all

(I have two questions, presented right at the end for anyone who wants to skip over my lengthy background explanation).

I am successfully running an official TrueNAS “app”, namely Frigate. It’s an NVR assisted by AI in the form of a Coral Edge TPU.

This block diagram illustrates my setup, and the info in my signature provides additional details regarding my NAS hardware:

It’s all working; however …

On my main PC I want to watch the video clips produced by Frigate but, amid the confusion, I think the people who have tried to help me (and who know lots about Frigate) may believe that I want to do the decoding/transcoding on my TrueNAS box using its basic GPU.

I don’t want to do that. I want to use the significant resources of my main PC to convert/process and ultimately watch the video clips.

It turns out that the video recordings created by Frigate get stored to a mount and I can watch those MP4 videos using (e.g.) VLC or ffplay etc.

It also turns out that the video clip “events”, which are video thumbnails I suppose, live in a different part of the UI. They are used to summarise movement triggering and object recognition, and are supposed to be played back in the browser via “HLS” which is an entirely different mechanism.

The idea is that you get a timeline populated with stills from the video recording and also these stream thumbnails, and you click on either for better detail.

In the webUI served out by Frigate, the JPEG thumbnails show up when I click on them, but I can’t do any in-browser playing of the event clips.

I am running a set of up-to-date browsers, and I have tested lots of H265 videos and HLS streaming from/inside test web pages. That video renders easily, so I don’t see why I cannot play back these event clips.

(1) When Frigate serves its NVR video data to me as HLS streams, the rendering I want to do - the decoding, or transcoding from H265 to H264 - can be done on my main machine using the Radeon RX580, can’t it?

(2) I don’t have to do all the H265->H264 transcoding “heavy lifting” inside my TrueNAS box at all, do I?

There are many settings in the config and I am struggling to overcome this hurdle, whilst at the same time making slow progress elsewhere. If you can help answer my two questions I will be very pleased.

Thanks!

EB

Your GPU supports encoding and decoding of both H264 and H265, so I don’t see why you shouldn’t be able to.

As far as I know, you don’t have to.
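
For example, here is a minimal sketch of doing the whole job on your main PC with FFmpeg's VAAPI support (the file names are placeholders, and /dev/dri/renderD128 is just the usual render node on a single-GPU Linux box - yours may differ):

    # Hardware H.265 decode + H.264 encode on the RX580 via VAAPI
    # (device path and file names are assumptions; adjust to suit)
    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
           -i event_clip_h265.mp4 \
           -c:v h264_vaapi -b:v 4M -c:a copy \
           event_clip_h264.mp4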

Do note I know very little about frigate and my answers are based on general knowledge… which I believe to be appropriate given that you posted in General Discussion instead of Apps and Virtualization.


You’re exactly right - other people’s general experience and knowledge can only help bolster my own and that’s what I’m after - thanks!

I’d suggest using VideoLAN Client (VLC) to play the HLS streams, i.e. an m3u8 file from an HTTP server.
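
For instance, once you’ve copied the playlist URL out of the page (the URL below is made up, purely to show the shape of it), either of these should work from the command line:

    # Both players fetch the playlist and then the segments it lists
    vlc "http://truenas.local:5000/vod/event/1234/index.m3u8"
    ffplay "http://truenas.local:5000/vod/event/1234/index.m3u8"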

Theoretically, the files can be “transmuxed” into HLS, but it sounds like Frigate is clipping media on non-keyframe boundaries, and it’s not really feasible to provide those clips in any format other than by transcoding to something.

And that something seems to be HLS.

(Although theoretically QuickTime supports that, nothing else really does.)


I don’t have any experience with Frigate, but looking at the video pipeline diagram seen in the below link:

I can say the following based on what I think I know:

  1. The “video segments (retain all)” stream is your unadulterated camera stream (raw footage from the camera, or converted on the fly with FFmpeg) and contains uncut footage of the camera over time.

  2. The “detect stream” is intended to be used for motion detection and seems to run on a few images taken at regular intervals. In motion or object detection, a convolutional neural network (CNN) is used, and the captured image usually needs to be reduced to a much lower resolution so that it can be sent through the CNN and compared against the models.
    Native camera resolution such as 1920x1080 isn’t practical for a CNN and must be scaled down first (refer to the Wikipedia link:
    Convolutional neural network - Wikipedia). There is a rough sketch of that downscaling step after this explanation.

Once the CNN has performed image recognition, the result (a drawn frame, a category label, and other extra information such as an object-recognition probability level) can be added back to the main video stream as an overlay, either by merging the content back into the “retain all” video segment, or by placing it in individual segments in their own files with time and frame indexes for matching against the original video recording. The latter is what is referenced as a “detection snapshot”, “detection clip”, “object event”…
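
As a rough illustration of that detect-path idea (this is not Frigate's actual command; the RTSP URL, 320x240 resolution and 5 fps are just example values), an NVR essentially does something like this internally - pull the camera stream, shrink it to a small low-fps feed, and hand raw frames to the detector:

    # Downscale the camera feed to a small, low-fps raw stream suitable for a detector
    # (RTSP URL, resolution and frame rate are illustrative values only)
    ffmpeg -i rtsp://camera.local:554/stream1 \
           -vf scale=320:240 -r 5 \
           -f rawvideo -pix_fmt yuv420p pipe:1 > /dev/null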

So if what you are trying to do is regenerate the snapshots or motion detection events out of the “retain all” video segments, then I don’t think it is possible through transcoding.

However, if the “retain all” video segment doesn’t contain the overlay from motion detection, then I would think it should be possible to run Frigate NVR on your main PC and pass the “retain all” video segment as if coming from a camera. I don’t think that would be a good solution as it would only perform the conversion in realtime (ie 30 or 60 fps).

On another note, what is the reason for transcoding from H265 to H264? H264 is the open source version of the proprietary H265. They should be somewhat equivalent, I would think, unless H264 lags behind in terms of capability (different versions of the protocol).

Kinda… H265 is better than H264 in that it requires roughly half as much bandwidth as H264, both for encoding and broadcasting, while retaining the same video quality; however, it requires greater processing power and it’s not as widely supported.
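
If you want to see the difference for yourself, a quick software-only test is to encode the same sample clip both ways and compare file sizes (the file name is a placeholder, and the CRF values are just the commonly quoted "roughly comparable quality" defaults):

    # Encode one source clip with x264 and x265 at roughly comparable quality
    ffmpeg -i sample.mp4 -c:v libx264 -crf 23 -c:a copy out_h264.mp4
    ffmpeg -i sample.mp4 -c:v libx265 -crf 28 -c:a copy out_h265.mp4
    ls -lh out_h264.mp4 out_h265.mp4   # the H.265 file is typically noticeably smaller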

This is excellent - all of these comments are helping. Thank you, all! I have now done some more work which I summarise below and I would once again appreciate your thoughts if you have time.

What is not helping is me forgetting to mention another critical aspect of my puzzle (sorry) which is:

  • another PC (an i7, also Linux Mint) cannot render Frigate’s HLS-streamed clips

  • two android tablets can render them

  • two android phones can render them

  • an LG smart TV running chrome can render them

and that’s why I can’t work out which bit of the overall system I need to fiddle with to get my Linux boxes to work, in particular the “main PC” shown in my earlier diagram.

@Stux - your thoughts are spot on. I did find the m3u8 being served out by, and embedded within, Frigate’s webUI, and I rendered it successfully in VLC.

Something which I might still not have grasped properly is related to one of the answers I got in the Frigate forum:

Live video and recordings playback use two different technologies (MSE, WebRTC, or jsmpeg for live and HLS for recordings playback) and thus two different player types in frigate

@Stux I think you are saying the same sort of thing: Frigate has non-linear access to the event clips and thus uses HLS as a streaming transport rather than (say) .MP4 as a lump of “after the event” static data. I think this idea also borrows from what @Apollo said.

So in summary and following on from further testing this morning thanks to your help, I find that:

All of my machines can play back .MP4s, including Frigate’s recorded video, which gets stored (in my case) in a specific TrueNAS mount.

All of my machines can play Frigate’s HLS streams, either
(a) directly in the webUI in the case of the Androids and my LG TV,
or
(b) indirectly in VLC in the case of my two Linux boxes, when I dig out the m3u8 from the Frigate-served webUI (because the in-browser playback won’t work).

On those two Linux boxes, using a browser plug-in (and I’ve successfully tried a second one) has this morning allowed me to use Chromium to play HLS video URLs (m3u8) ‘natively’ (i.e. in-the-browser) from any of the test cases listed here or here.

But … neither plug-in on either Linux box allows me to play the embedded streams presented in the Frigate webUI.

I don’t know what to conclude and I don’t know where to concentrate my efforts. Please can you offer your recommendations or best guesses?

In case it helps, here’s a URL showing the Frigate webUI


The idea is that you click on those small thumbnails to play them to let you see the event which has triggered Frigate into object detection and classification. Hovering over the thumbnail does not show you an internal URL so it isn’t simple to see where the HLS comes from (other than by ctrl+shift+inspect).
The blue “download” icon to the right saves it as an mp4 as an additional function on top of automated saving elsewhere.
(“Send to Frigate+” allows you to submit the event to a server somewhere, for better computational processing/analysis).

EB

I stand corrected.
Doing more reading on the subject, H265 is the successor to H264.
H265 is the hardware implementation of the codec while x265 is the software implementation (not sure if it is open source, though). However, according to Wikipedia, the software implementation has been royalty-free since 2017.

H.265, aka HEVC, i.e. High Efficiency Video Coding, is the name of the standard developed by the JCT-VC, the Joint Collaborative Team on Video Coding, made up of the ITU-T (telecoms people) and MPEG (media people).

The ITU uses “H.xxx” naming; MPEG likes initialisms.

It can be implemented in hardware, or software.

x264, for example, is a software implementation of H.264 (and x265 of H.265).


I am not entirely sure there is any issue here, other than that the Linux box where you cannot watch the Frigate recordings does not have an H.265 codec installed.

I had the same issue here with Fedora 40 and installing the codec solved the problem.
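
A couple of quick checks you could run on the Mint boxes (assuming the ffmpeg and vainfo command-line tools are installed; I won't guess the exact package names for your distro):

    # Does the local ffmpeg build have an H.265/HEVC decoder at all?
    ffmpeg -hide_banner -decoders | grep -i hevc

    # Does VAAPI expose hardware HEVC decode on the GPU?
    vainfo | grep -i hevc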


Likely unrelated, but Linux has issues with DRM.

It would be great if that is the solution! I shall try to find out.

(The amount of effort I have put in during these last weeks, trying to explain my problem - possibly ineloquently, because this is not an area of expertise for me, as you can probably tell! - has been quite trying. Probably for others too …)

Reading up on WebRTC, the key point of the protocol is to allow peer-to-peer communication and provide realtime (low-latency) video and audio broadcast.
This means that in order to access the live video content of the camera (which isn’t stored locally), the WebRTC protocol establishes a direct connection from your main PC to the camera. In this mode of operation, your PC isn’t communicating with your server running Frigate.

However, if you need to stream the recorded content (as opposed to downloading the recorded video files), you need to access the streams via Frigate, which encodes the recorded video file to meet the specification required to support HLS.
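
To make “HLS” a bit more concrete (this is a generic, simplified playlist for illustration, not Frigate's actual output): an HLS stream is just a plain-text .m3u8 playlist that points the player at a series of short media segments, which are themselves ordinary H.264/H.265 video:

    # Fetching a playlist (URL is a placeholder) returns something like:
    #   #EXTM3U
    #   #EXT-X-TARGETDURATION:10
    #   #EXTINF:10.0,
    #   segment0.ts
    #   #EXTINF:10.0,
    #   segment1.ts
    #   #EXT-X-ENDLIST
    # The player downloads the segments in turn and decodes whatever codec is inside them.
    curl -s "http://truenas.local:5000/vod/event/1234/index.m3u8"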

To clarify the Frigate video pipeline, as far as I understand, Frigate is extracting frames at regular intervals (at a lower fps than the camera’s original footage) and is running motion detection by comparing changes in blocks of pixels from one frame to the next.
When motion detection has triggered an event (i.e. it detected changes in the frame), the area of the frame that exhibits the change of pixels is sent to the CNN, which is implemented on the Coral TPU board, to perform visual recognition. Frigate may send the image as-is, or may perform other steps required for the CNN to process the data.
The results of the CNN are then processed by Frigate, which then updates the detected events with what it thinks was recognized (i.e. car, cat, dog, burglar…).

For the issue with playback on the i7 PC, I would think the issue could be attributed to the codecs being used or available on the PC.
Your Android phones and tablets should support H264 and H265 (especially if your devices are based on Qualcomm SoCs).
So I would look at whether your i7 PC has the proper libraries and codecs to play an HLS stream.
Your web browser/router may also block access to the stream.
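
If it does turn out to be a codec question, ffprobe can tell you directly what the event stream is encoded as, because it can read an m3u8 URL just like a file (the URL and path below are placeholders - copy the real playlist URL from the browser’s Inspect/Network tab):

    # What codec do the event clips stream as?
    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name \
            -of default=noprint_wrappers=1 "http://truenas.local:5000/vod/event/1234/index.m3u8"

    # For comparison: what codec do the stored recordings contain?
    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name \
            -of default=noprint_wrappers=1 /mnt/tank/frigate/recordings/example.mp4

If the recordings come back as h264 but the event stream comes back as hevc, that would point straight at a missing H.265 decoder on the two Linux machines.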


@Apollo - this is excellent help and will take me some time to digest. Thanks, and thanks everyone - more later!

edit - I have now had a chance to read through the whole thread, including the recent info from @Apollo.

All of your help has been of huge benefit to me, and has put into context some of the things I was struggling with, including nomenclature, how WebRTC / go2rtc works, and the idea of HLS being a transport mechanism for H264/H265.

As a result I was able to better formulate my questions to the Frigate people who have also helped. I can now appreciate their answers and I more-or-less understand how I might approach this problem (it might not be soluble by me but I will at least understand why).

Thanks all !

EB