FFmpeg - Fragmented MP4 with non-fixed frame rate

I'm creating a fragmented MP4 for the use of playing in Media Source Extensions.
The command line is: ffmpeg.exe -probesize 10000000 -r 10 -i - -vcodec copy -an -f mp4 -reset_timestamps 0 -blocksize 30000 -movflags empty_moov+default_base_moof+frag_keyframe -loglevel debug -
The source of the video is an IP camera streaming H.264.
The configured and expected frame rate is 10FPS but there is no guarantee for 10FPS, for example a frame may get dropped occasionally, or the camera may just not play nice with what it declares.
I have simulated a 10% P-frame drop to emphasize the following issue:
With the above command, the output video plays faster than real-time, which is a problem because the whole pipe is a live stream.
With the 10% frame-drop simulation, the effective playback rate is 1.1x.
I don't want to commit to a fixed frame rate because a fixed rate isn't guaranteed.
If I remove the -r 10 flag entirely, the MP4 seems to play at 2x-3x speed.
Is there a way to build the MP4 timestamps in a more dynamic way? For example, by giving ffmpeg the RTP timestamp, or somehow telling it to build the MP4 with the timestamp of the "feed" time?
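One option worth testing is the -use_wallclock_as_timestamps 1 input flag (the same flag the security-camera question further down this page uses for the same reason): it stamps each frame with the wall-clock time at which ffmpeg receives it, which is essentially the "feed" time asked about, so dropped frames leave gaps instead of compressing the timeline. A minimal sketch, assuming the stream is fed on stdin as in the question (whether the raw H.264 demuxer on your build honors the flag is worth verifying):

```python
import subprocess

# Sketch: stamp each incoming frame with its arrival ("feed") time instead of
# assuming a fixed 10 fps. -use_wallclock_as_timestamps is an input option,
# so it must come before -i. The rest mirrors the command from the question.
cmd = [
    "ffmpeg",
    "-use_wallclock_as_timestamps", "1",   # PTS = wall-clock time at ingest
    "-probesize", "10000000",
    "-i", "-",                             # H.264 from the camera on stdin
    "-vcodec", "copy", "-an",
    "-f", "mp4",
    "-movflags", "empty_moov+default_base_moof+frag_keyframe",
    "-",                                   # fragmented MP4 on stdout
]
# proc = subprocess.Popen(cmd, stdin=camera_pipe, stdout=subprocess.PIPE)
```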

Related

Segmenting a video live stream into videos of a fixed length using ffmpeg

I want to use an ffmpeg command to save segments of length 10 seconds from a video being streamed via HTTP using VLC.
The command I used to carry out the task is:
ffmpeg -i http://[Local IPv4 address]:8080 -f segment -segment_time 10 -vcodec copy -acodec copy -reset_timestamps 1 video%03d.mp4
However, the lengths of the output videos I'm receiving are around 8.333 or 16.666 seconds. (The videos video000.mp4, video005.mp4, video010.mp4, video015.mp4... have durations of around 16.666 seconds, and the remaining videos have durations of around 8.333 seconds.)
I'm aware that the segmentation of input video happens based on the occurrences of keyframes in the video. It appears that the key frames in the video being streamed occur with an interval of around 8 seconds.
Is there a way to obtain video segments that are closer to 10 seconds in duration from the live stream of such a video?
Also, I occasionally get the "Non-monotonous DTS in output stream 0:0" warning while executing the above command. I tried using different flags (+genpts, +igndts, +ignidx) hoping that the warning message would not be displayed, but no luck. Is it possible that there is any correlation between this warning and the issue with lengths of the segments?
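With -c copy the segment muxer can only cut at the keyframes already present in the stream, so with a ~8.33 s keyframe interval the segment lengths will always be multiples of that. If a re-encode is acceptable, forcing a keyframe on every 10 s boundary lets the muxer cut exactly where requested. A hedged sketch (assumes libx264 is available; the URL is a placeholder for your VLC HTTP stream):

```python
import subprocess

# Sketch (assumes re-encoding is acceptable): force a keyframe exactly every
# 10 s so the segment muxer can cut on the 10 s boundaries. With -c copy the
# muxer can only split on keyframes already in the stream (~8.33 s apart).
cmd = [
    "ffmpeg",
    "-i", "http://localhost:8080",         # placeholder for the HTTP stream
    "-c:v", "libx264",
    "-force_key_frames", "expr:gte(t,n_forced*10)",  # keyframe at t=0,10,20,...
    "-c:a", "copy",
    "-f", "segment", "-segment_time", "10",
    "-reset_timestamps", "1",
    "video%03d.mp4",
]
# subprocess.run(cmd, check=True)
```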

FFMPEG bottleneck in relaying data from a dshow camera to stdout PIPE without any processing or conversion

I have a USB camera (FSCAM_CU135) that can encode the video to MJPEG internally and it supports DirectShow. My goal is to retrieve the binary stream of the encoded video as it is (without decoding or preview) and send it to my program for further processing.
I chose to use FFMPEG to read the MJPEG stream and pipe it to stdout so that I can read it using Python's subprocess.Popen.
ffmpeg -y -f dshow -vsync 2 -rtbufsize 1000M -video_size 1920x1440 -vcodec mjpeg -i video="FSCAM_CU135" -vcodec copy -f mjpeg pipe:1
At this resolution, the camera is able to capture and transmit at 60 fps.
In this case, I expect FFMPEG to pass the data as fast as possible with no calculation.
From FFMPEG's output I can tell how fast it moves the data from the rtbuffer to the output pipe.
With just one camera, FFMPEG works with no problem and moves the data at 60 fps.
However, when I run 2 cameras simultaneously, the cameras still generate data at 60 fps but FFMPEG can only move it at around 55 fps. This means I am unable to consume the video in real time, and the buffer backlog grows over time.
My guess is that FFMPEG doesn't simply move the data but does some processing, such as searching for the beginning, the end, and the timestamp of each video frame so that it can count frames and report statistics.
Is there a way to force FFMPEG to skip that work and focus only on passing the data through, to make it faster?
If I purely use directshow API without FFMPEG, can it be faster?
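One small, measurable knob is ffmpeg's own console bookkeeping: -nostats suppresses the periodic progress line, and a larger pipe buffer on the Python side reduces stalls when the reader falls briefly behind. A hedged sketch of the setup described above (the flag spellings are from ffmpeg's documentation; whether they recover the missing ~5 fps in your two-camera case has to be measured):

```python
import subprocess

# Sketch: read the copied MJPEG stream in Python while trimming ffmpeg's
# console overhead. -nostats drops the per-frame progress line; a large
# bufsize on the Popen pipe reduces reader-side stalls.
cmd = [
    "ffmpeg", "-hide_banner", "-nostats", "-loglevel", "error",
    "-f", "dshow", "-rtbufsize", "1000M",
    "-video_size", "1920x1440", "-vcodec", "mjpeg",
    "-i", "video=FSCAM_CU135",
    "-vcodec", "copy", "-f", "mjpeg", "pipe:1",
]
# proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=10**7)
# chunk = proc.stdout.read(65536)   # consume the raw MJPEG bytes
```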

FFMPEG change fps of audio and subtitles and merge 2 files

I have 30 mkv files which have multiple audio streams and multiple subtitles.
For each file I am trying to: extract the Dutch audio and subtitles from that file (25fps)
And merge it with another mkv file (23.976216fps)
With this command it seems like I extract the Dutch audio and subtitles into an mkv:
ffmpeg -y -r 23.976216 -i "S01E01 - Example.mkv" -c copy -map 0:m:language:dut S01E01.mkv
But it does not adjust the fps from 25 to 23.976216.
I think I am going to use mkvmerge to merge the two mkv's, but they need to be the same frame rate.
Anyone know how I could make this work? Thanks! :)
The frame rate of the video has nothing to do with the frame rate of audio. They are totally independent. In fact, there is really no such thing as audio frame rate (well, there is, but that's a byproduct of the codecs). If you are changing the video frame rate by dropping frames, you are not changing the video's duration, hence you should not change the audio's duration. If you are slowing down the video, you must decode the audio, slow it down (likely with pitch correction) and re-encode it.
Something like this would change the audio pitch from standard PAL to NTSC framerate (this example is valid if your audio track is the 2nd in the list; check with ffmpeg -i video.mkv to see):
ffmpeg -i video.mkv -vn -map 0:1 -filter:a atempo=0.95904 -y slowed-down-audio-to-23.976-fps.ac3
(23976/25000 = 0.95904, so this is the tempo ratio needed for NTSC film speed)
Conversely, you can figure out how to speed up NTSC-standard-frame-rate audio to the PAL system (factor 1.0427094).
This trick works, for example, should you want to add a better quality audio track obtained from a different source.
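The two factors quoted above are just the ratio of the two frame rates; a quick check:

```python
# The atempo factors in the answer come from the ratio of the frame rates.
pal_to_ntsc = 23976 / 25000       # slow PAL audio down to NTSC film speed
ntsc_to_pal = 25000 / 23976       # speed NTSC film audio up to PAL speed

print(round(pal_to_ntsc, 5))      # 0.95904, the atempo value in the command
print(round(ntsc_to_pal, 7))      # 1.0427094, the inverse mentioned above
```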

Determine if video stream is a live stream

Is there a way to use ffprobe or ffmpeg to determine if a given stream (for instance http://server/stream or rtmp://server/stream...) is an ongoing live stream or a fixed stream (i.e. recorded at the path, with no live updates)?
Check if the processing speed exceeds the stream framerate.
ffmpeg -i stream -f null -
Let it run for a minute or so.
You can also seek into the stream,
ffmpeg -ss 60 -i stream -preset superfast -t 5 test.mp4
For pre-recorded content, this should happen quicker than the seek duration, and the start should be the seek point requested. ffmpeg may start at the latest time available if the seek can't be exactly fulfilled.
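The seek test above can be wrapped in a small script: time the ffmpeg run and compare the wall-clock time against the probe duration. A hedged sketch (the 60 s seek and 5 s probe mirror the command above; function and threshold names are mine, and the heuristic can misfire on slow networks):

```python
import subprocess
import time

def faster_than_realtime(elapsed, probe):
    # Recorded files decode well under real time; a live stream cannot.
    return elapsed < probe

def looks_prerecorded(url, seek=60, probe=5):
    # Seek 60 s in, process 5 s to a null target, and time the whole run.
    start = time.monotonic()
    subprocess.run(
        ["ffmpeg", "-ss", str(seek), "-i", url,
         "-t", str(probe), "-f", "null", "-"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return faster_than_realtime(time.monotonic() - start, probe)
```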

ffmpeg drop frames on purpose to lower filesize

Our security system records and archives our IP cameras streams with ffmpeg -use_wallclock_as_timestamps 1 -i rtsp://192.168.x.x:554/mpeg4 -c copy -t 60 my_input_video.avi
I run it with crontab every minute so it creates videos of 60 seconds (~15Mb) for each camera every minute. When an intrusion occurs, the camera sends a picture through FTP and a script called by incrontab:
1- forwards immediately the picture by email
2- selects the video covering the minute the intrusion occurred, compresses it with h264 (to ~2.6Mb) and sends it by email
It is working really well but if a thief crosses the path of various cameras, the connection to the SMTP server is not fast enough so video emails are delayed. I'd like to compress the videos even more to avoid that. I could lower the resolution (640x480 to 320x240 for example) but sometimes 640x480 is handy to zoom on something which looks to be moving...
So my idea is to drop frames in the video in order to lower the filesize. I don't care if the thief is walking like a "stop motion Lego" on the video, the most important is I know there is someone so I can act.
mediainfo my_input_video.avi says Frame rate = 600.000 fps, but that is of course wrong. The FPS reported by IP cameras is always unreliable because it varies with network quality; this is why I use "-use_wallclock_as_timestamps 1" in my command to record the streams.
with ffmpeg -i my_input_video.avi -vcodec h264 -preset ultrafast -crf 28 -acodec mp3 -q:a 5 -r 8 output.avi the video is OK but the filesize is higher (3Mb)
with ffmpeg -i my_input_video.avi -vcodec h264 -preset ultrafast -crf 28 -acodec mp3 -q:a 5 -r 2 output.avi the filesize is lower (2.2Mb) but the video doesn't work (it is blocked at the first frame).
Creating an mjpeg video (mjpeg = intra-only frames, no inter-frame prediction) in the middle of the process (first exporting to mjpeg with fewer frames and then exporting to h264) produces the same results.
Do you know how I can get my thief to walk like a "stop motion Lego" to lower the filesize to a minimum?
Thanks for any help
What are your constraints, file-size-wise? 2.6MB for 60 seconds of video seems pretty reasonable to me; that's about 350kbps, which is pretty low for video quality.
You need to specify the video bitrate, e.g. -b:v 125000 (125kbps, which should drop you to about 900kb), to control the bitrate you want the video encoded at. You're not giving FFMpeg enough hints as to how you want the video handled, so it's picking arbitrary values you don't like. As you drop the frame rate, it's just using up the buffers, allocating more bits to each frame. (One big thing to keep in mind: the more you stretch the video out over a longer time period, the more likely the scene will change significantly, requiring an I-frame (a fully encoded frame, vs. a frame based on the previous frame), so reducing the frame rate will help some, but may not help as much as you'd think.)
Your "(it is blocked at the first frame)" is most likely an issue with the player trying to start decoding the stream when it is not at an I-frame, and not an issue with your settings.
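The figures quoted in the answer are easy to sanity-check from the file size and duration:

```python
# Sanity-check the numbers above: 2.6MB over 60 s, and what -b:v 125000 buys.
size_bytes = 2.6e6                 # the ~2.6Mb emailed clip
duration_s = 60
kbps = size_bytes * 8 / duration_s / 1000
print(round(kbps))                 # 347 -> i.e. "about 350kbps"

target_bps = 125_000               # the suggested -b:v 125000
approx_kb = target_bps * duration_s / 8 / 1000
print(round(approx_kb))            # 938 -> in the "about 900kb" ballpark
```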
