How to set a video's duration in FFMPEG? - ffmpeg

How can I limit the duration of a given video? For example, if we are uploading a video that should not exceed 5 minutes, I need a command in FFmpeg.

Use the -t option to specify a time limit:
`-t duration`
Restrict the transcoded/captured video sequence to the duration specified in seconds. hh:mm:ss[.xxx] syntax is also supported.
http://www.ffmpeg.org/ffmpeg.html
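For example, a minimal sketch for the 5-minute cap from the question (input.mp4 and output.mp4 are placeholder names; -c copy avoids re-encoding but cuts on packet boundaries):
ffmpeg -i input.mp4 -t 300 -c copy output.mp4
ffmpeg -i input.mp4 -t 00:05:00 -c copy output.mp4
Both forms are equivalent: writing stops once the output duration reaches 5 minutes.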

Just to elaborate a bit further, with more detailed usage and examples.
As specified in the FFmpeg docs:
-t duration (input/output)
When used as an input option (before -i), limit the duration of data read from the input file.
e.g. ffmpeg -t 5 -i input.mp3 testAsInput.mp3
will stop writing automatically after 5 seconds.
When used as an output option (before an output url), stop writing the output after its duration reaches duration.
e.g. ffmpeg -i input.mp3 -t 5 testAsOutput.mp3
will stop writing automatically after 5 seconds.
Effectively, in this use case the result is the same. See below for a more extended use case.
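The placement does matter once a filter changes the duration. A hypothetical sketch using the atempo filter (atempo=2 doubles playback speed; input.mp3 is a placeholder):
ffmpeg -t 5 -i input.mp3 -filter:a atempo=2 fastInputSide.mp3
reads only 5 seconds of input, so the sped-up output lasts about 2.5 seconds, whereas
ffmpeg -i input.mp3 -filter:a atempo=2 -t 5 fastOutputSide.mp3
writes 5 seconds of output, consuming about 10 seconds of input.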
-to position (input/output)
Stop writing the output or reading the input at position.
e.g. the same as above, but with -to instead of -t.
duration or position must be a time duration specification, as specified in the ffmpeg-utils(1) manual:
[-][HH:]MM:SS[.m...] or [-]S+[.m...][s|ms|us]
-to and -t are mutually exclusive and -t has priority.
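For instance (testTo.mp3 is a placeholder name):
ffmpeg -i input.mp3 -to 00:00:05 testTo.mp3
stops writing at position 5 seconds, which matches -t 5 here because the output starts at 0. Combined with seeking they diverge:
ffmpeg -i input.mp3 -ss 2 -to 5 testTo.mp3
keeps roughly positions 2 to 5 (about 3 seconds of audio), whereas -t 5 in the same place would keep about 5 seconds.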
Example use as an input option with multiple inputs
Note: -f pulse -i 1 is my system audio, -f pulse -i 2 is my microphone input.
Let's imagine I want to record both my microphone and my speakers at the same time, indefinitely (until I force a stop with Ctrl+C).
I could use the amix filter for example.
ffmpeg \
-f pulse -i 1 \
-f pulse -i 2 \
-filter_complex "amix=inputs=2" \
testmix.mp3
Now let's imagine I only want to record the first 5 seconds of my system audio but all of my microphone (again, until I kill the process with Ctrl+C).
ffmpeg \
-t 5 -f pulse -i 1 \
-f pulse -i 2 \
-filter_complex "amix=inputs=2:duration=longest" \
testmix.mp3
Note: duration=longest is the default for amix anyway, so it doesn't really need to be specified explicitly.
Now let's assume I want the same as above, but with the recording limited to 10 seconds. Either of the following examples would satisfy that requirement:
ffmpeg \
-t 5 -f pulse -i 1 \
-t 10 -f pulse -i 2 \
-filter_complex "amix=inputs=2:duration=longest" \
testmix.mp3
ffmpeg \
-t 5 -f pulse -i 1 \
-f pulse -i 2 \
-filter_complex "amix=inputs=2:duration=longest" \
-t 10 testmix.mp3
Note: with regard to start-position searching/seeking, this answer, along with a bit of investigation I did, may also be of interest.

An example:
ffmpeg -f lavfi -i color=s=1920x1080 -loop 1 -i "input.png" -filter_complex "[1:v]scale=1920:-2[fg]; [0:v][fg]overlay=y=-'t*h*0.02'[v]" -map "[v]" -t 00:00:03 output.mp4
This sets the maximum output time to 3 seconds. Note that -t has to sit just before the output file; if you set it at the start of this command (i.e. ffmpeg -t ...), it is treated as an input option on the first input instead and will NOT work here.

Related

FFMPEG trimming video (using select between) doesn't affect overall duration

I'm completely new to working with FFMPEG. What I'm trying to achieve is applying overlay graphics at certain positions and times, and cutting out sections of a single input video.
I've worked out the graphics overlays, so this command works:
ffmpeg -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][1:v] overlay=192:108:enable='between(t, 0, 5)'[ov0];
[ov0] overlay=192:108:enable='between(t, 5, 10)'" \
-pix_fmt yuv420p output_overlayed.mp4
But when I try to cut out sections using this code:
ffmpeg -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][1:v] overlay=192:108:enable='between(t, 0, 5)'[ov0]; \
[ov0] overlay=192:108:enable='between(t, 5, 10)', \
select='between(t,0,5)+between(t,10,15)', \
setpts='N/FRAME_RATE/TB'" \
-pix_fmt yuv420p output_overlayed_trimmed.mp4
It seems to cut correctly: the original video plays from 0 seconds to 5 seconds, then from 10 seconds to 15 seconds, and cuts out. But after the point where the video cuts out, there is just a black screen for the rest of the original duration. I can't get the trimming to affect the overall duration of the video.
(The values being passed in are just examples, by the way; e.g. I've got an overlay starting 5 seconds in, but also a cut 5 seconds in.)
I have the timestamps for when the overlays should appear on the non-trimmed video, so the overlaying should happen first and then the trimming. If the video is trimmed first then the overlays will appear at the wrong times.
An alternative way of achieving this, which currently works, is to run the first command (producing a new video file with the overlay) and then trim that new file in a separate step:
ffmpeg -ss 0 -to 5 -i /Users/username/projectdir/static/output_overlayed.mp4 \
-ss 15 -to 20 -i /Users/username/projectdir/static/output_overlayed.mp4 \
-filter_complex "[0][1]concat=n=2:v=1:a=1" output_trimmed.mp4
But this means working with 2 separate files and then having to remove the first after the 2nd execution is complete. Ideally I'd combine them into one command which doesn't produce multiple files.
Would appreciate any help - thanks!
How about getting the input twice with different trims (so both video and audio are cut in sync), then concatenating after overlaying? Like this:
ffmpeg -t 5 -i /Users/username/projectdir/static/video.mp4 \
-ss 10 -to 15 -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][2:v] overlay=192:108[ov0]; \
[1:v][3:v] overlay=192:108[ov1]; \
[ov0][0:a][ov1][1:a] concat=n=2:v=1:a=1[vout][aout]" \
-map "[vout]" -map "[aout]" -pix_fmt yuv420p output_overlayed_trimmed.mp4
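Note: in the concat filter, n is the number of segments to join and v/a are the counts of output video and audio streams, so n=2:v=1:a=1 joins the two trimmed, overlaid segments into a single video stream and a single audio stream.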

Concatenating video clip with static image causes buffer errors

I'm trying to concatenate a 15 second clip of a video (MOVIE.mp4) with 5 seconds (no audio) of an image (IMAGE.jpg) using FFmpeg.
Something seems to be wrong with my filtergraph, although I'm unable to determine what. The command I've put together is the following:
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v]scale=480:640[1_v];anullsrc[1_a];[1:v][1:a][1_v][1_a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Unfortunately, this seems to be creating some strange results:
On my personal computer (FFmpeg 4.2.1) it correctly concatenates the movie with the static image; however, the static image lasts for an unbounded length of time. (After pressing Ctrl+C, the movie is still viewable, but is extremely long, e.g. 35 min, depending on when I interrupt the process.)
On a remote machine where I need to do the ultimate video processing (FFmpeg 2.8.15-0ubuntu0.16.04.1), the command does not terminate, and instead, I get cascading errors of the following form:
Past duration 0.611458 too large
...
[output stream 0:0 @ 0x21135a0] 100 buffers queued in output stream 0:0, something may be wrong.
...
[output stream 0:0 @ 0x21135a0] 100000 buffers queued in output stream 0:0, something may be wrong.
I haven't been able to find much documentation that elucidates what these errors mean, so I don't know what's going wrong.
As Gyan pointed out, you only have to add atrim to your audio:
anullsrc,atrim=0:5[silent-audio]
Instead of scale you could use scale2ref and setsar to automatically make your image the same size and aspect ratio as the video.
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v][1:v]scale2ref[img][v];[img]setsar=1[img]; \
anullsrc,atrim=0:5[silent-audio]; \
[v][1:a][img][silent-audio]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Alternatively you could use anullsrc as a 3rd input:
ffmpeg \
-t 15 -i MOVIE.mp4 \
-loop 1 -t 5 -i IMAGE.jpg \
-f lavfi -t 5 -i anullsrc \
-filter_complex "[1:v][0:v]scale2ref[img][v];\
[img]setsar=1[img];[v][0:a][img][2:a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4

FFMPEG: are trim and -t ignored for the PSNR filter?

I wanted to run a PSNR check on an encoded segment, but avoid extracting the segment with a lossless codec first for comparison. I just wanted to trim the input; however, it looks like this is disabled.
My command:
ffmpeg -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=10:20,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[0v][1v]psnr" -f null -
This runs through the whole original input file and does not trim the video in the filter.
If I try to trim the input with -ss and -t, only the input -ss flag works. It sets the input start correctly but ignores the -t timestamp.
ffmpeg -ss 10 -i original.mp4 -t 10 -i segment.mp4 -filter_complex [0:v][1:v]psnr -f null -
Different placement of the -t will have no effect.
I also tried setting the duration in trim while keeping the -ss input option, which works:
ffmpeg -ss 10 -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=duration=10,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[0v][1v]psnr" -f null -
I did try this with end and end_frame but neither one worked.
The same applies if I use -lavfi instead of -filter_complex.
I had a brief look at the source code of the PSNR filter but could not find any references to trim or -t.
Is this function blocked or am I doing something wrong?
Would there be an alternative way to doing this without encoding a lossless version of the same segment to compare?
The original command is almost fine. However, the order of inputs should be swapped, and if there's any audio, that should be disabled.
ffmpeg -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=10:20,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[1v][0v]psnr" -an -f null -
Also, in the snippet below
ffmpeg -ss 10 -i original.mp4 -t 10 -i segment.mp4
if you meant to limit the duration of original.mp4, then -t 10 should be placed before -i original.mp4.
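For instance, a sketch of the corrected placement (keeping the swapped psnr input order from above):
ffmpeg -ss 10 -t 10 -i original.mp4 -i segment.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[1v][0v]psnr" -an -f null -
Here -ss 10 -t 10 both act on original.mp4 as input options, so roughly seconds 10 to 20 of the original are compared against the segment.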

Differences in outcome between positioning of -ss and -t?

What is the difference between
ffmpeg -y -ss 1 -i "test.MP4" -t 2 -c copy "test2.MP4"
and
ffmpeg -y -t 2 -i "test.MP4" -ss 1 -c copy "test2.MP4"
and
ffmpeg -y -ss 1 -t 2 -i "test.MP4" -c copy "test2.MP4"
and
ffmpeg -y -i "test.MP4" -ss 1 -t 2 -c copy "test2.MP4"
From the documentation on https://ffmpeg.org/ffmpeg.html:
-ss position (input/output)
When used as an input option (before -i), seeks in this input file to
position. [...] When used as an output option (before an output url),
decodes but discards input until the timestamps reach position.
and
-t duration (input/output)
When used as an input option (before -i), limit the duration of data
read from the input file. When used as an output option (before an
output url), stop writing the output after its duration reaches
duration.
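Applied to the four commands above, roughly (with -c copy, cuts snap to keyframe/packet boundaries, so exact lengths vary):
ffmpeg -y -ss 1 -i "test.MP4" -t 2 -c copy "test2.MP4" seeks the input to 1s, then writes 2s: roughly seconds 1 to 3.
ffmpeg -y -t 2 -i "test.MP4" -ss 1 -c copy "test2.MP4" reads only the first 2s of input, then discards output before 1s: roughly seconds 1 to 2.
ffmpeg -y -ss 1 -t 2 -i "test.MP4" -c copy "test2.MP4" seeks to 1s and reads 2s from there: roughly seconds 1 to 3.
ffmpeg -y -i "test.MP4" -ss 1 -t 2 -c copy "test2.MP4" reads from the start, discards output before 1s, then writes 2s: roughly seconds 1 to 3, but slower, since everything before the cut is still processed.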

Fast seeking ffmpeg multiple times for screenshots

I came across https://askubuntu.com/questions/377579/ffmpeg-output-screenshot-gallery/377630#377630, and it's perfect; it does exactly what I wanted.
However, I'm using remote URLs to generate the screenshot timeline. I know it's possible to fast-seek in remote files using https://trac.ffmpeg.org/wiki/Seeking%20with%20FFmpeg (putting -ss before the -i), but this only runs once.
I'm looking for a way to use the
./ffmpeg -i input -vf "select=gt(scene\,0.4),scale=160:-1,tile,scale=600:-1" \
-frames:v 1 -qscale:v 3 preview.jpg
command, but using the fast-seek method, as it's currently very slow with a remote file. I use PHP, but I'm aware a C method exists using av_seek_frame; I barely know C, so I'm unable to implement it in the PHP script I'm writing. Hopefully it is possible to do this directly with ffmpeg in PHP's system() function.
Currently, I run separate ffmpeg commands (with the -ss method) and then combine the screenshots in PHP. However, with this method the metadata is refetched each time; it would be more efficient to do everything in a single command, because I want to reduce the number of requests made to the remote URL so I can run more scripts in sequence.
Thank you for your help.
Yes, it's because -ss is not before -i; you need to add it before each input.
So here's a working example that extracts them super fast:
ffmpeg -ss 10 -i test.avi -frames:v 1 -f image2 -map 0:v:0 thumbnails/output_0.png \
-ss 800 -i test.avi -frames:v 1 -f image2 -map 1:v:0 thumbnails/output_1.png \
-ss 2400 -i test.avi -frames:v 1 -f image2 -map 2:v:0 thumbnails/output_2.png
In the -map specifiers: 0:v:0 means first input, video streams, first video stream; 1:v:0 means second input, video streams, first video stream; 2:v:0 means third input, video streams, first video stream.
The main reason why this is slow is because "select=gt(scene\,0.4)" requires every frame to be decoded and compared to the next so that scene changes can be detected.
I don't believe it is possible to do this any faster than the scene-change detector allows. You could instead take n screenshots at duration/n steps through the video (see the sketch below); additionally, you could check that each frame isn't black by checking that the image intensity is above a threshold.
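A minimal sketch of that evenly spaced approach, assuming a bash shell, a placeholder input.mp4, and n=4 thumbnails (ffprobe reports the duration; each ffmpeg call fast-seeks with input-side -ss):
n=4
# total duration in seconds, as reported by ffprobe
dur=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4)
for i in $(seq 0 $((n - 1))); do
  # midpoint of the i-th of n equal steps through the video
  ts=$(awk -v d="$dur" -v i="$i" -v n="$n" 'BEGIN { printf "%.2f", d * (i + 0.5) / n }')
  ffmpeg -ss "$ts" -i input.mp4 -frames:v 1 "thumb_$i.jpg"   # -ss before -i: fast seek
done
Each iteration opens the input again, so a remote URL is still fetched once per thumbnail; the single-command -map approach above avoids that at the cost of one long command line.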
$ffmpeg = "ffmpeg.exe";
$cmd = "$ffmpeg -ss 20 -i $Filename -frames:v 1 -f mjpeg -map 0:v:0 $Thumbnail";
$Return = `$cmd`;
Makes extremely fast thumbnails of videos. $Filename is the path and filename of your video, e.g. C:\videos\video_1.mp4,
and $Thumbnail is the path AND filename where you want your thumbnail stored, e.g. C:\Thumbnails\Thumbnail_1.jpg.
