ffmpeg - make a seamless loop with a crossfade

I want to apply a crossfade to the last x frames of a video with the first x frames in order to obtain a seamless loop.
How can I do that?

Let's say your video is 30 seconds long and your fade is 1 second long. Your command would be
ffmpeg -i video.mp4 -filter_complex
"[0]split[body][pre];
[pre]trim=duration=1,format=yuva420p,fade=d=1:alpha=1,setpts=PTS+(28/TB)[jt];
[body]trim=1,setpts=PTS-STARTPTS[main];
[main][jt]overlay" output.mp4
The video is split into two identical streams. The first is trimmed to just the first second, given an alpha channel, and faded out. The final setpts filter delays it by 28 seconds: the output has the original clip's first second trimmed off, so the faded head has to land on its last second. The second stream is trimmed to start at t=1, and the processed first stream is overlaid on it. Since the first stream's alpha channel fades, the head crossfades in over the tail.
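To adapt the command to other clip lengths, the delay works out to duration − 2×fade (30 − 2×1 = 28 here). A small Python sketch that builds the filter_complex string; the helper name `loop_filtergraph` is mine:

```python
def loop_filtergraph(duration, fade):
    """Build the crossfade-loop filter_complex string.

    duration: total clip length in seconds; fade: crossfade length.
    The trimmed body runs from 0 to duration - fade, so the faded
    head is delayed to cover its last `fade` seconds.
    """
    delay = duration - 2 * fade
    return (
        f"[0]split[body][pre];"
        f"[pre]trim=duration={fade},format=yuva420p,"
        f"fade=d={fade}:alpha=1,setpts=PTS+({delay}/TB)[jt];"
        f"[body]trim={fade},setpts=PTS-STARTPTS[main];"
        f"[main][jt]overlay"
    )

print(loop_filtergraph(30, 1))
```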

Related

Segmenting a video live stream into videos of a fixed length using ffmpeg

I want to use an ffmpeg command to save segments of length 10 seconds from a video being streamed via HTTP using VLC.
The command I used to carry out the task is:
ffmpeg -i http://[Local IPv4 address]:8080 -f segment -segment_time 10 -vcodec copy -acodec copy -reset_timestamps 1 video%03d.mp4
However, the lengths of the output videos I'm receiving are around 8.333 or 16.666 seconds in duration. (The videos video000.mp4, video005.mp4, video010.mp4, video015.mp4... have duration of around 16.666 seconds and the remaining videos have duration of around 8.333 seconds).
I'm aware that the segmentation of input video happens based on the occurrences of keyframes in the video. It appears that the key frames in the video being streamed occur with an interval of around 8 seconds.
Is there a way to obtain video segments that are closer to 10 seconds in duration from the live stream of such a video?
Also, I occasionally get the "Non-monotonous DTS in output stream 0:0" warning while executing the above command. I tried using different flags (+genpts, +igndts, +ignidx) hoping that the warning message would not be displayed, but no luck. Is it possible that there is any correlation between this warning and the issue with lengths of the segments?
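The ~8.333 and ~16.666-second durations are consistent with stream copy only being able to cut at keyframes. A rough simulation, under the assumption that the segment muxer cuts at the first keyframe at or after each multiple of segment_time (the helper name is mine):

```python
def segment_durations(total, seg_time, kf_interval):
    """Simulate stream-copy segmenting: a segment can only end at a
    keyframe, so each cut lands on the first keyframe at or after
    the next multiple of seg_time."""
    keyframes = [i * kf_interval for i in range(int(total / kf_interval) + 1)]
    cuts, n = [0.0], 1
    for kf in keyframes:
        if kf >= n * seg_time:      # target grid: 10, 20, 30, ...
            cuts.append(kf)
            n = int(kf // seg_time) + 1
    return [round(b - a, 3) for a, b in zip(cuts, cuts[1:])]

# keyframes every 25/3 s (~8.333 s), segment_time 10
print(segment_durations(total=60, seg_time=10, kf_interval=25 / 3))
```

With those numbers the simulation yields one ~16.666 s segment followed by runs of ~8.333 s segments, matching the observed pattern. Getting segments of exactly 10 seconds would require re-encoding with forced keyframes (e.g. `-force_key_frames expr:gte(t,n_forced*10)`) instead of `-vcodec copy`.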

How to specify the exact number of output image frames with ffmpeg?

I have N input animation frames as images in a folder and I want to create interpolated inbetween frames to create a smoother animation of length N * M, i.e. for every input frame I want to create M output frames that gradually morph to the next frame, e.g. with the minterpolate filter.
In other words, I want to increase the FPS M times, but I am not working with time as I am not working with any video formats, both input and output are image sequences stored as image files.
I was trying to combine the -r option and the fps filter, but without success, as I don't know how they work together. For example:
I have 12 input frames.
I want to use the minterpolate filter to achieve 120 frames.
I use the command ffmpeg -i frames/f%04d.png -vf "fps=10, minterpolate" -r 100 interpolated_frames/f%04d.png
The result I get is 31 output frames.
Is there a specific combination of -r and the fps filter I should use? Or is there another way I can achieve what I need?
Thank you!
FFmpeg assigns a framerate of 25 to formats which don't have an inherent frame rate, like image sequences.
The image sequence demuxer has an option (-framerate) to set the input framerate, and the minterpolate filter has an fps option for the target rate.
ffmpeg -framerate 12 -i frames/f%04d.png -vf "minterpolate=fps=120" interpolated_frames/f%04d.png
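The arithmetic behind this: the output frame count is the clip's duration times the target fps, so declaring the true input rate (12 fps for 12 frames, i.e. a 1-second clip) makes minterpolate emit exactly N × M frames. A quick check (the function name is mine):

```python
def output_frames(n_in, in_fps, out_fps):
    """Expected frame count when resampling an image sequence:
    duration = n_in / in_fps, resampled at out_fps."""
    duration = n_in / in_fps
    return round(duration * out_fps)

print(output_frames(12, 12, 120))  # 12 frames declared at 12 fps -> 120 out
```

This also hints at why the original command fell short: without -framerate, the 12 images are read as a 25 fps clip only 0.48 s long, so any target fps is applied to that much shorter duration.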

Filter useless white frames at the beginning and duplicated frames at the end of captured video

I'm capturing HTML animation with use of Puppeteer and MediaRecorder API.
Before starting the capture I wait for the network-idle event (I tried networkidle0 through networkidle2, but the result is identical):
await page.goto(url, { waitUntil: 'networkidle0' })
For some reason, the animation starts to play 2-3 seconds after the capturing starts, so white frames are captured.
Similarly, at the end of the video there are identical frames, because the capture runs a bit longer than the animation plays.
Thus I want to detect and cut off those repeating white frames at the beginning and repeating non-white frames at the end of the video (mp4/webm).
I tried some solutions, like the one described here, for instance
ffmpeg -i input.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB out.mp4
It does remove duplicates, but the problem is that it removes them in the middle as well.
So, with a 15-second animation, it removes all dups at the beginning, all dups at the end, and all dups in the middle, and what is left is just several identical frames packed into less than 1 second of video.
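One possible approach, rather than letting mpdecimate drop frames everywhere: parse its log (or freezedetect's) into per-frame duplicate flags, then trim only the leading and trailing runs with -ss/-to. A sketch of just the trimming logic; the flag list and helper name are hypothetical, and producing the flags from ffmpeg's log output is left out:

```python
def keep_range(dup_flags, fps):
    """Given per-frame duplicate flags (True = frame repeats its
    predecessor), return (start, end) times in seconds that drop
    only the leading and trailing runs of duplicates."""
    first = next(i for i, d in enumerate(dup_flags) if not d)
    last = len(dup_flags) - 1 - next(
        i for i, d in enumerate(reversed(dup_flags)) if not d)
    return first / fps, (last + 1) / fps

# 2 white frames, 4 animation frames, 3 frozen tail frames, at 1 fps
flags = [True, True, False, False, False, False, True, True, True]
print(keep_range(flags, fps=1))  # (2.0, 6.0)
```

The returned times can then be fed to something like `ffmpeg -ss 2.0 -to 6.0 -i input.mp4 ...`, which leaves mid-animation duplicates untouched.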

FFmpeg how to extract time fragments of exact duration at set intervals?

I have a video file, and I know how to extract segments with ffmpeg, forcing keyframes so the cuts are exact.
However, I would like to extract a segment of a certain duration, say 1 minute, then wait 50 seconds, again segment 1 minute, wait 50 seconds, again segment 1 minute, etc. until the end of the video file.
How can I accomplish this?
Is it possible to use a list.txt as cut input?
Let's call your segment duration X and interval between end of one segment and start of another Y. Both in seconds. Use
ffmpeg -i in.mp4 -vf select='lt(mod(t,X+Y),X)',setpts=N/FRAME_RATE/TB -force_key_frames expr:gte(t,n_forced*X) -f segment -segment_time X out%d.ts
You may want to add -reset_timestamps 1 for zero-start timestamps for each segment. Audio is ignored and will be out of sync if present. Add a corresponding audio filter with aselect/asetpts to cut audio in sync.
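The select expression keeps a frame when its timestamp falls within the first X seconds of every (X+Y)-second cycle. The same predicate in Python, for checking which timestamps survive (names are mine):

```python
def selected(t, x, y):
    """Python equivalent of ffmpeg's select='lt(mod(t,X+Y),X)':
    keep frames in the first x seconds of every x+y cycle."""
    return (t % (x + y)) < x

# 60 s segments, 50 s gaps: keep 0-60, 110-170, 220-280, ...
picks = [t for t in range(0, 400, 10) if selected(t, 60, 50)]
print(picks)
```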

In ffmpeg, can I specify time in frames rather than seconds?

I am programmatically extracting multiple audio clips from single video files using ffmpeg.
My input data (start and end points) are specified in frames rather than seconds, and the audio clip will be used by a frame-centric user (an animator). So, I'd prefer to work in frames throughout.
In addition, the framerate is 30fps, which means I'd be working in steps of 0.033333 seconds, and I'm not sure it's reasonable to expect ffmpeg to trim correctly given such values.
Is it possible to specify a frame number instead of an ffmpeg time duration for start point (-ss) and duration (-t)? Or are there frame-centric ffmpeg commands that I've missed?
Audio frame or sample numbers don't correspond to video frame numbers, and I don't see a way to specify audio trim points by referencing video frame indices. Nevertheless, see this answer for more details.
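Converting frame numbers to -ss/-t values is straightforward, since -ss and -t accept decimal seconds: start_frame / fps and frame_count / fps. Using exact fractions avoids 0.033333-style rounding drift at 30 fps; a sketch (the helper name is mine):

```python
from fractions import Fraction

def frames_to_ss_t(start_frame, end_frame, fps=30):
    """Convert a [start_frame, end_frame) range into -ss and -t
    values in seconds, computed as exact fractions first."""
    ss = Fraction(start_frame, fps)
    t = Fraction(end_frame - start_frame, fps)
    return float(ss), float(t)

ss, t = frames_to_ss_t(90, 180)   # frames 90-179 at 30 fps
print(ss, t)                      # 3.0 3.0
```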
