Overcome MPEG-2 TS Link Latency Buildup - ffmpeg

I have built a TS audio/video link: I encode raw audio and video and send it from one computer to another using TS over Ethernet, using the FFMPEG libraries. Everything works fine except that latency builds up over time.
When I play the raw video content at both the encoder and decoder ends, there is initially very little visible latency (less than a second), but as time passes the difference between the two videos slowly increases. I am playing the received video at a fixed frame rate.
As I understand it, it is possible to use the PCR value to overcome this latency build-up, but it is not clear to me how to use that value.
Can someone kindly tell me how to use the PCR value of a TS stream to avoid latency build-up?
Thank you.
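For reference, the usual cure is to stop displaying at a fixed frame rate and instead pace output from the stream's own timestamps, which are driven by the same sender clock that the PCR carries; with a fixed local frame rate, any drift between the sender's and receiver's clocks accumulates without bound. Below is a minimal, hedged sketch of timestamp-paced playback with libavformat; decode_and_render() is a placeholder for an existing pipeline. For a long-running link you would additionally discipline the playout clock against PCR arrival times, or simply drop or repeat a frame whenever the gap drifts past a bound.

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <libavutil/time.h>

void play_paced(AVFormatContext *fmt, int video_idx)
{
    AVPacket pkt;
    int64_t start_wall = AV_NOPTS_VALUE;   /* wall clock at first packet */
    int64_t start_pts  = AV_NOPTS_VALUE;   /* pts of first packet */
    AVRational tb = fmt->streams[video_idx]->time_base;

    while (av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == video_idx && pkt.pts != AV_NOPTS_VALUE) {
            if (start_pts == AV_NOPTS_VALUE) {
                start_pts  = pkt.pts;
                start_wall = av_gettime();  /* microseconds */
            }
            /* Wall-clock time this packet is due, relative to the start. */
            int64_t due = start_wall +
                av_rescale_q(pkt.pts - start_pts, tb,
                             (AVRational){1, 1000000});
            int64_t now = av_gettime();
            if (due > now)
                av_usleep(due - now);       /* wait until it is due */
            /* decode_and_render(&pkt);  -- placeholder for your pipeline */
        }
        av_packet_unref(&pkt);
    }
}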

Related

libav MPEGTS demuxing - handle loop/discontinuity

I'm writing a video/audio player that uses libav/ffmpeg for demuxing and decoding MPEGTS streams over UDP. One problem I'm dealing with is that the stream sometimes loops, and when it loops my player breaks down.
The issue is that once the stream loops, the new packets have widely different dts/pts. My player relies on pts for video-audio synchronisation, so it's important that I handle pts properly.
Whenever the server loops the stream, it sends a discontinuity flag, which I can confirm is correctly received by libav's mpegts demuxer (I did some digging in the code and inspected the debug logs). However, it seems to me that the demuxer doesn't do much with the discontinuity flag. In other words, from the user's point of view I can't tell that there's a discontinuity, apart from the dramatic jump in dts/pts.
Is there a way I can reliably tell that there was a discontinuity, so I can recalculate my timestamps and continue playback smoothly?
I had similar issues with libav's TS demuxer and gave up on it. Instead I found this project, which gives you greater control over the demuxing process.
https://github.com/mmoanis/mpegts_demux
Eventually I came up with a solution. Not sure it's the best way to deal with this, but it works for me.
It's true that packets demuxed by libav don't contain any specific information about a discontinuity occurring. However, if a discontinuity occurs, there will be a sudden change in dts/pts between two consecutive packets.
In other words, a discontinuity means the content changed, and since it changed, the timestamps are going to be very different.
The timestamp can only change in 2 ways:
1. It will be lower than the previous one, which is not allowed in a continuous stream.
2. It will be much greater than the previous one.
Case 1 is pretty straightforward. Case 2 works if you pick a high enough threshold; I picked 1 minute. So if I receive a packet and the timestamp difference between it and the previous packet is more than 1 minute, I consider that a discontinuity.
This decision process is based around a "timestamp", but which timestamp am I talking about? The packets contain 2 timestamps, dts and pts. However, pts is often out of order (B-frames are decoded before they are displayed), whereas dts is always increasing, so I based my logic on dts.
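A minimal sketch of that dts-based check against libavformat, assuming the caller tracks prev_dts per stream (AV_NOPTS_VALUE before the first packet); the one-minute threshold is the value from the answer:

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <stdbool.h>

/* Returns true when the dts jump between consecutive packets of one
 * stream looks like a discontinuity: it either went backwards (case 1)
 * or leapt more than a minute ahead (case 2). */
static bool is_discontinuity(const AVFormatContext *fmt,
                             const AVPacket *pkt, int64_t prev_dts)
{
    if (prev_dts == AV_NOPTS_VALUE || pkt->dts == AV_NOPTS_VALUE)
        return false;   /* nothing to compare yet */

    AVRational tb = fmt->streams[pkt->stream_index]->time_base;
    int64_t one_minute = av_rescale_q(60, (AVRational){1, 1}, tb);

    return pkt->dts < prev_dts || pkt->dts - prev_dts > one_minute;
}

On detection, you would latch the dts offset between old and new content and subtract it from all subsequent timestamps so playback continues smoothly.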

How to use libvlc for low-frame-rate RTSP stream decoding

I am using libvlc for RTSP H.264 bitstream decoding and display on a PC. For the best experience (i.e. low latency), I use the following options:
:file-caching=0
:tcp-caching=0
:rtsp-caching=0
:network-caching=0
:clock-jitter=0
:avcodec-fast
With these parameters, the latency is acceptable in comparison to the open-source project "ONVIF Device Manager" (ODM), where FFMPEG is used for decoding.
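(For context, a hedged sketch of how those options can be attached per-media through the libVLC 3.x C API; the RTSP URL is a placeholder and error checks are omitted:)

#include <vlc/vlc.h>

int main(void)
{
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *media =
        libvlc_media_new_location(vlc, "rtsp://camera.example/stream");

    /* The low-latency options listed above, applied to this media. */
    const char *opts[] = {
        ":file-caching=0", ":tcp-caching=0", ":rtsp-caching=0",
        ":network-caching=0", ":clock-jitter=0", ":avcodec-fast",
    };
    for (unsigned i = 0; i < sizeof opts / sizeof *opts; i++)
        libvlc_media_add_option(media, opts[i]);

    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_player_play(player);
    /* ... run until done, then stop and release player, media, vlc ... */
    return 0;
}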
When the RTSP server delivers a low frame-rate stream (1 frame/second), this configuration freezes after displaying a few frames (3-5 frames).
I have tried 2 different approaches:
Disable synchronization by
:clock-synchro=0
This lets the decoding process continue; however, an accumulating lag can be observed over time.
Use network caching
My experiments show that
:network-caching=1200
makes decoding go smoothly; however, the latency is 1-2 seconds more than with ODM.
Is there a way to handle the low frame-rate issue in libvlc without introducing such a large latency?

Decoding of 4K video causing buffer overflow in a multi-process application

I am quite new to working with FFMPEG. Please excuse me if I make any mistakes while explaining my problem.
I am working on an application where PCAP captures 1316 bytes of data, splits it into 188-byte TS packets (1316 = 7 × 188), and writes them to the corresponding shared memory. The other process reads the data, decodes it using ffmpeg, and processes it.
This whole process is working fine with H264 and HEVC video.
The problem comes with 4K video: while decoding 4K video I am encountering buffer overflows.
I have observed that on a machine with a higher CPU frequency the overflow rate was lower. Hence, reducing the time ffmpeg takes to decode and process the video data might solve my problem, but I have not been able to find a good solution.
Is there any way to speed up the ffmpeg decoder? Is there any other way to suppress the overflows?
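A hedged sketch of the usual libavcodec speed knobs, set before avcodec_open2(); whether they buy enough headroom for 4K depends on the machine, and skipping the loop filter trades some quality for speed:

#include <libavcodec/avcodec.h>

void configure_fast_decode(AVCodecContext *ctx)
{
    ctx->thread_count = 0;                      /* 0 = auto-detect cores */
    ctx->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE;
    ctx->skip_loop_filter = AVDISCARD_NONREF;   /* skip deblock on non-refs */
    ctx->flags2 |= AV_CODEC_FLAG2_FAST;         /* allow non-spec speedups */
}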

Looking for the fastest video encoder with the least lag to stream a webcam to an iPad

I'm looking for the fastest way to encode a webcam stream that will be viewable in an HTML5 video tag. I'm using a Pandaboard: http://www.digikey.com/product-highlights/us/en/texas-instruments-pandaboard/686#tabs-2 for the hardware. I can use gstreamer, cvlc, or ffmpeg. I'll be using it to drive a robot, so I need the least amount of lag in the video stream. Quality doesn't have to be great and it doesn't need audio. Also, this is only for one client, so bandwidth isn't an issue. The best solution so far is ffmpeg with mpjpeg, which gives me around 1 sec of delay. Anything better?
I have been asked this many times so I will try and answer this a bit generically and not just for mjpeg. Getting very low delays in a system requires a bit of system engineering effort and also understanding of the components.
Some simple top level tweaks I can think of are:
Ensure the codec is configured for the lowest delay. Codecs (especially embedded system codecs) will have a low-delay configuration; enable it. If you are using H.264 this is most useful. Most people don't realize that, by standard requirements, H.264 decoders need to buffer frames before displaying them. This can be up to 16 frames for QCIF and up to 5 frames for 720p; that is a lot of delay in getting the first frame out. If you do not use H.264, still ensure you do not have B pictures enabled, as they add delay to getting the first picture out.
Since you are using mjpeg, I don't think much of this applies to you.
Encoders also have a rate-control delay (called init delay or VBV buffer size). Set it to the smallest value that gives you acceptable quality; that will also reduce the delay. Think of this as the bitstream buffer between encoder and decoder. If you are using x264, this is the VBV buffer size.
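As a hedged illustration of that knob through libavcodec's x264 wrapper (the bitrate figures are invented for the example, not recommendations):

#include <libavcodec/avcodec.h>

/* Small VBV: cap the rate and shrink the buffer so the decoder can start
 * sooner. Set before avcodec_open2(). */
void configure_small_vbv(AVCodecContext *ctx)
{
    ctx->bit_rate       = 2000000;  /* 2 Mb/s target (illustrative) */
    ctx->rc_max_rate    = 2000000;  /* cap at the target => CBR-like */
    ctx->rc_buffer_size = 500000;   /* small VBV buffer => low rc delay */
}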
Some other simple configurations: use as few I pictures as possible (a large intra period).
I pictures are huge and add delay when sent over the network. This may not be very visible in systems where the end-to-end delay is in the range of 1 second or more, but when you are designing systems that need an end-to-end delay of 100 ms or less, this and several other aspects come into play. Also ensure you are using a low-latency audio codec, AAC-LC (and not HE-AAC).
In your case, to get lower latencies I would suggest moving away from mjpeg and using at least MPEG-4 without B pictures (Simple Profile), or best of all H.264 Baseline Profile (x264 has a zerolatency option). The simple reason you will get lower latency is that encoding produces a lower bitrate to send out, so you can run at full framerate. If you must stick to mjpeg, you are close to what you can get with the open-source components as-is, without more advanced feature support from the codec and system.
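A hedged sketch of that suggestion through libavcodec's libx264 wrapper: Baseline Profile, no B pictures, ultrafast preset, and x264's zerolatency tune (resolution and frame rate are whatever your capture provides):

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

AVCodecContext *open_low_latency_h264(int w, int h, int fps)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("libx264");
    AVCodecContext *ctx  = avcodec_alloc_context3(codec);

    ctx->width        = w;
    ctx->height       = h;
    ctx->time_base    = (AVRational){1, fps};
    ctx->pix_fmt      = AV_PIX_FMT_YUV420P;
    ctx->max_b_frames = 0;                        /* no B pictures */

    av_opt_set(ctx->priv_data, "profile", "baseline",    0);
    av_opt_set(ctx->priv_data, "preset",  "ultrafast",   0);
    av_opt_set(ctx->priv_data, "tune",    "zerolatency", 0);

    return avcodec_open2(ctx, codec, NULL) < 0 ? NULL : ctx;
}

The zerolatency tune turns off x264's lookahead and frame-threading delay, so each input frame comes back out immediately.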
Another aspect is the transmission of the content to the display unit. If you can use UDP, it will reduce latency quite a lot compared to TCP, though it can be lossy at times depending on network conditions. You mentioned HTML5 video; I am curious how you are doing live streaming to an HTML5 video tag.
There are other aspects that can also be tweaked, which I would put in the advanced category; they require the system engineer to try various things out:
What is the network buffering in the OS? The OS buffers data before sending it out, for performance reasons. Tweak this to get a good balance between performance and speed (see the socket-buffer sketch after this list).
Are you using CBR or VBR encoding? While CBR is great for low jitter, you can also use capped VBR if the codec provides it.
Can your decoder start decoding partial frames? So you don't have to worry about framing the data before providing it to the decoder. Just keep pushing the data to the decoder as soon as possible.
Can you do field encoding? Compared to frame encoding, it halves the time before the first picture comes out.
Can you do sliced encoding with callbacks whenever a slice is available to send over the network immediately?
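For the OS network-buffering point above, a hedged POSIX sketch; the 64 KiB figure is illustrative and should be tuned against your throughput needs:

#include <sys/socket.h>
#include <sys/types.h>

/* Shrink the socket send buffer so packets leave sooner instead of
 * pooling in the kernel. Returns 0 on success, -1 on error. */
int set_small_send_buffer(int sockfd)
{
    int sndbuf = 64 * 1024;   /* bytes; illustrative value */
    return setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
                      &sndbuf, sizeof sndbuf);
}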
In the sub-100 ms latency systems that I have worked on, all of the above are used. Some of the features may not be available in open-source components, but if you really need them and are enthusiastic, you could go ahead and implement them.
EDIT:
I realize you cannot do a lot of the above for an iPad streaming solution, and HLS also limits the latency you can achieve. But I hope it will prove useful in other cases when you need a low-latency system.
We had a similar problem, in our case it was necessary to time external events and sync them with the video stream. We tried several solutions but the one described here solved the problem and is extremely low latency:
Github Link
It uses gstreamer to transcode to mjpeg, which is then sent to a small Python streaming server. This has the advantage that it uses the <img> tag instead of <video>, so it can be viewed by most modern browsers, including the iPhone.
As you want the <video> tag, a simple solution is to use http-launch. That had the lowest latency of all the solutions we tried, so it might work for you. Be warned that Ogg/Theora will not work in Safari or IE, so those wishing to target Mac or Windows will have to modify the pipe to use MP4 or WebM.
Another solution that looks promising is gst-streaming-server. We simply couldn't find enough documentation to make it worth pursuing. I'd be grateful if somebody could ask a stackoverflow question about how it should be used!

BackgroundAudioPlayer - Buffering & MediaStreamSource

I have created a MediaStreamSource to decode a live internet audio stream and pass it to the BackgroundAudioPlayer. This works very well on the device. However, I would now like to implement some form of buffering control. Currently everything works well over WLAN; however, I fear that in live situations over mobile operator networks the stream will cut in and out a lot.
What I would like to find out is if anybody has any advice on how best to implement buffering.
Does the background audio player itself build up some sort of buffer before it begins to play, and if so, can its size be increased if necessary?
Is there something I can set whilst sampling to help with buffering, or do I simply need to implement a kind of storage buffer as I retrieve the stream from the network, and build up a substantial reserve in it before sampling?
What approach have others taken to this problem?
Thanks,
Brian
One approach to this that I've seen is to have two processes managing the stream. The first gets the stream and writes it to a series of sequentially numbered files in Isolated Storage. The second reads the files and plays them.
Obviously that's a very simplified description but hopefully you get the idea.
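A hedged, platform-neutral sketch of that scheme in C (on Windows Phone the equivalent would go through the Isolated Storage APIs, so treat this purely as the shape of the idea; paths, naming, and chunk handling are invented):

#include <stdio.h>

/* Producer side: write each received network chunk to a numbered file. */
void write_chunk(const unsigned char *data, size_t len, unsigned seq)
{
    char path[64];
    snprintf(path, sizeof path, "buffer/%06u.chunk", seq);
    FILE *f = fopen(path, "wb");
    if (f) { fwrite(data, 1, len, f); fclose(f); }
}

/* Consumer side: read the next chunk if it exists, else keep buffering. */
size_t read_chunk(unsigned char *out, size_t cap, unsigned seq)
{
    char path[64];
    snprintf(path, sizeof path, "buffer/%06u.chunk", seq);
    FILE *f = fopen(path, "rb");
    if (!f) return 0;                 /* not downloaded yet: wait */
    size_t n = fread(out, 1, cap, f);
    fclose(f);
    remove(path);                     /* free space once consumed */
    return n;
}

Because the consumer only ever reads chunks that have fully arrived, the reserve of unread files on disk is the buffer, and its size is entirely under your control.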
I don't know how using a MediaStreamSource might affect this, but from experience with a simple Background Audio Player agent streaming directly from remote MP3 files or MP3 live radio streams:
The player does build up a buffer of data received from the server before it will start playing your track.
You can't control the size of this buffer or how long it takes to fill (I've seen it take over a minute of buffering in some cases).
Once playback starts, if you lose the connection, or bandwidth drops so low that your buffer empties, the player doesn't try to rebuffer the audio, so you can lose the audio completely or it can cut in and out.
You can't control that either.
Implementing the suggestion in Matt's answer solves this by letting you take control of the buffering, and it separates download and playback neatly.
