Detect video frame offset to sync videos the smart way - ffmpeg

When recording video, we know the nominal frame rate, for instance 60 fps, so the time difference between frames is 1/60 s. But do we also know the video's offset, i.e. the absolute time at which the first frame was captured? The problem comes into play when trying to sync two videos of the same scene, for instance a stereo pair shot with two cameras that were not frame-locked. My idea was to interpolate two audio-synced 60 fps videos to, say, 180 fps and then reduce them to 60 fps again, hoping to end up with matching frames. According to my tests, this does not work. A better solution would be to measure the time offset between the two frame grids and interpolate by exactly that difference, for instance using a deep learning algorithm.
How can ffmpeg help me here?
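If the offset between the two frame grids can be measured (for instance by cross-correlating the audio tracks), one option is to shift one video's timestamps by that amount and let ffmpeg's motion-compensated minterpolate filter synthesize frames on the shifted 60 fps grid, instead of round-tripping through 180 fps. A minimal sketch, not a verified recipe; the 0.0083 s offset and the file names are made-up placeholders:

    # Shift the second camera's timestamps by the measured offset, then let
    # minterpolate (motion-compensated mode) resample onto a clean 60 fps grid.
    ffmpeg -i right.mp4 \
      -vf "setpts=PTS+0.0083/TB,minterpolate=fps=60:mi_mode=mci" \
      -c:a copy right_aligned.mp4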

Related

Is there any way to "squeeze" all of a video's frames to a constant frame rate using ffmpeg?

I have a video that is very jumpy. It freezes every couple of seconds, and running MediaInfo on it reveals a frame rate of 23.701 fps. The original video has 23.976 fps and is shorter than the faulty version by a couple of seconds; that one plays fine in VLC. Both files are the same size, so it looks like the faulty one has some frames stretched out to fill the additional seconds. Is there an ffmpeg command I can use to repair this video?
Thanks.
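If the goal is just to squeeze everything back onto a constant 23.976 fps grid, ffmpeg's fps filter drops or duplicates frames as needed; note that this only retimes and cannot recover frames that are truly missing. A minimal sketch with placeholder file names:

    # Force constant 23.976 fps (24000/1001); audio is copied unchanged.
    ffmpeg -i faulty.mp4 -vf "fps=24000/1001" -c:a copy fixed.mp4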

How to calculate the number of frames extracted at certain FPS?

I have a bunch of videos in a folder, at different frame rates. I need to calculate the number of frames that can be extracted from each at 0.5 fps.
It's a simple mathematical problem; I just need help with the formula.
Thanks
If the duration of a video is t seconds, and assuming that's when ffmpeg terminates, extracting at 0.5 fps should ideally return t/2 frames, i.e. one frame every two seconds.
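To apply that formula across a folder, you can read each duration with ffprobe and multiply by 0.5. A minimal sketch, assuming a hypothetical videos/ directory of mp4 files:

    for f in videos/*.mp4; do
      # duration in seconds, printed as a plain number
      d=$(ffprobe -v error -show_entries format=duration \
            -of default=noprint_wrappers=1:nokey=1 "$f")
      # expected frame count at 0.5 fps is duration * 0.5
      awk -v d="$d" -v name="$f" 'BEGIN { printf "%s: %d frames\n", name, d * 0.5 }'
    done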

Raspberry Pi 3 PiCamera Still Frame Rate

I'm working with a Raspberry Pi 3 that has a ribbon PiCamera. My problem is that I cannot get the still (not video) frame rate to be workable. In my application, the camera acts like a scanner, using only a single row of each frame to watch things go by. While the concept is fine, what's killing me is the frame rate, which I cannot get above 30 FPS.
A perfect solution would be for someone out there to take the raspicam source, strip it down and tune it for speed, and bolt it up to OpenCV. Has anyone done this? Did it work?
The Ava Group in Spain (https://www.uco.es/investiga/grupos/ava/node/40) took an initial stab at this, but their still frame rate is also limited to 30 FPS.

In ffmpeg, can I specify time in frames rather than seconds?

I am programmatically extracting multiple audio clips from single video files using ffmpeg.
My input data (start and end points) are specified in frames rather than seconds, and the audio clip will be used by a frame-centric user (an animator). So, I'd prefer to work in frames throughout.
In addition, the frame rate is 30 fps, which means I'd be working in steps of 0.033333 seconds, and I'm not sure it's reasonable to expect ffmpeg to trim correctly given such values.
Is it possible to specify a frame number instead of an ffmpeg time duration for start point (-ss) and duration (-t)? Or are there frame-centric ffmpeg commands that I've missed?
Audio frame or sample numbers don't correspond to video frame numbers, and I don't see a way to specify audio trim points by referencing video frame indices. Nevertheless, see this answer for more details.
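A common workaround is to do the frames-to-seconds conversion yourself and pass fractional seconds to -ss and -t; ffmpeg accepts them with far more precision than the 1/30 s steps require. A minimal sketch; the frame numbers and file names are made up:

    start_frame=450
    end_frame=930
    fps=30
    # convert frame indices to seconds (6 decimal places is plenty at 30 fps)
    ss=$(awk -v n="$start_frame" -v f="$fps" 'BEGIN { printf "%.6f", n / f }')
    t=$(awk -v a="$start_frame" -v b="$end_frame" -v f="$fps" \
          'BEGIN { printf "%.6f", (b - a) / f }')
    # -vn drops the video stream; the clip is written out as WAV
    ffmpeg -ss "$ss" -i input.mp4 -t "$t" -vn -c:a pcm_s16le clip.wav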

Video Slideshow from png files + mp3 audio

I have a bunch of .png frames and a .mp3 audio file which I would like to convert into a video. Unfortunately, the frames do not correspond to a constant frame rate. For instance, one frame may need to be displayed for 1 second, whereas another may need to be displayed for 3 seconds.
Is there any open-source software (something like ffmpeg) which would help me accomplish this? Any feedback would be greatly appreciated.
Many thanks!
This is not an elegant solution, but it will do the trick: duplicate frames as necessary so that you end up with some fairly high constant frame rate, 30 or 60 fps (or higher if you need finer time resolution). You simply switch to the next source image at the output frame closest to the exact timestamp you want. Frames that are exact duplicates are encoded to a tiny size (a few bytes) by any decent codec, so the result stays fairly compact. Then just encode with ffmpeg as usual; see the sketch below.
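A minimal sketch of this duplication approach, assuming a hypothetical durations.txt that lists each PNG and its display time in seconds (all file names here are made up):

    fps=30
    i=0
    # durations.txt lines look like: frame1.png 1.0
    while read -r png dur; do
      copies=$(awk -v d="$dur" -v f="$fps" 'BEGIN { printf "%d", d * f }')
      for _ in $(seq 1 "$copies"); do
        i=$((i + 1))
        ln -s "$PWD/$png" "$(printf 'seq_%05d.png' "$i")"
      done
    done < durations.txt
    # encode the duplicated sequence together with the audio track
    ffmpeg -framerate 30 -i seq_%05d.png -i audio.mp3 \
      -c:v libx264 -pix_fmt yuv420p -shortest slideshow.mp4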
If you have a whole lot of these and need to do it the "right" way: you can indicate the timing either in the container (mp4, mkv, etc.) or in the codec. For example, in an H.264 stream you would insert SEI messages of type pic_timing to specify the timing of each frame. Alternatively, you would write your own muxer on top of a container library, such as libmatroska (mkv) or GPAC (mp4), to indicate the timing in the container. Note that not all codecs/containers support an arbitrarily variable frame rate, and only a few codecs support timing in the codec. Also, if timing is specified in both the container and the codec, the container timing is used (though when muxing a stream into a container, the muxer should pick up the individual frame timestamps from the codec).
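As a middle ground between the two answers above, ffmpeg's concat demuxer can write per-image durations straight into the container, with no duplicated frames and no custom muxer. A minimal sketch; the file names and durations are placeholders:

    # slides.txt -- concat demuxer input, one duration line per file:
    #   file 'frame1.png'
    #   duration 1
    #   file 'frame2.png'
    #   duration 3
    #   file 'frame2.png'   <- last file repeated so its duration is honored
    ffmpeg -f concat -safe 0 -i slides.txt -i audio.mp3 \
      -c:v libx264 -pix_fmt yuv420p -shortest slideshow_vfr.mp4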
