ffmpeg command to extract first two minutes of a video

What would be the simplest ffmpeg command to truncate a video to its first two minutes (or do nothing if the video is shorter than two minutes)? It can do a pass-through on any of the initial video settings.

This is an adapted version of David542's post, since my edit was rejected as "completely superfluous or actively harm[ing] readability".
For extractions, it is essential to add the -c copy flag:
ffmpeg -i in.mov -ss 0 -t 120 -c copy out.mov
which is shorthand for -vcodec copy -acodec copy.
Otherwise ffmpeg re-encodes the selection, which is about 1000 times slower on my machine and may also alter the quality, since default encoder settings are applied.
Edit
As ack-inc indicated, it makes a difference whether -ss is placed before the -i (input option) or after it (output option).
In conjunction with stream copying (-c copy), ffmpeg will seek to the closest seek point before the position (e.g. the closest keyframe), if used as an input option. This is very fast but may be inaccurate depending on the codec and codec settings used.
If used as an output option, ffmpeg first decodes the complete stream up to the position. This uses more CPU resources and therefore takes longer, but the cut is now precise.
The latter takes about 16 times longer on my computer, but even so it only takes about 1.5 seconds to seek through 1.5 hours of FHD stream (in conjunction with -c copy). In other words, computers are now fast enough that this is the method of choice for the vast majority of end users.
Without stream copying -- when transcoding -- the difference was a factor of about 500 (or 6 min at 100 % CPU).

$ ffmpeg -i in.mov -ss 0 -t 120 out.mov
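The two placements can be sketched side by side; a minimal sketch with placeholder file names (in.mov, out.mov), with the commands kept in variables here only so the placement is easy to compare:

```shell
# Input-side -ss (before -i): keyframe seek; fast, but with -c copy the cut
# may land at the nearest keyframe rather than the exact position.
fast_cut="ffmpeg -ss 0 -t 120 -i in.mov -c copy out.mov"

# Output-side -ss (after -i): decodes up to the position; slower, frame-accurate.
accurate_cut="ffmpeg -i in.mov -ss 0 -t 120 -c copy out.mov"
```

With -ss 0 the two behave the same; the distinction matters once the start point is somewhere in the middle of the file.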

Related

Can I trigger a command after ffmpeg writes a segment?

I am reading from an RTSP (camera) stream and writing segments, using ffmpeg. My command to do so is:
ffmpeg -rtsp_transport tcp -i rtsp://$camera_creds#$camera_ip/video/1 -map 0 -c:v h264 -preset:v ultrafast \
  -reset_timestamps 1 -f segment -segment_time 300 -strftime 1 \
  -segment_list ${monitor_dir}/segments$camera.txt $monitor_dir/cam${camera}_out%Y%m%d_%H%M%S.mp4
It works fine to a point. My problem is that I want to do something with each segment once it has been written.
To accomplish this, I monitor the segmentsN.txt file for lines being added to it; I then read the contents, do stuff (process, upload), and then remove the lines that I've already processed (so that I don't reprocess them).
The problem with this is that periodically, ffmpeg will start writing to a new segment, but apparently won't update the segments list file. Initially I thought this was because my "remove the lines" operation was writing a brand new file and replacing it in place (which it was), while ffmpeg was probably continuing to append to the inode it started out with (which it maybe was). Having fixed that, I think I now just have a race condition.
What I'd really like is to have ffmpeg change the filename once it has completed a segment, and/or move the completed segment file into a different folder. However, this doesn't seem like an option. Is there a more robust way to accomplish what I'm doing here? I could just poll for multiple files with a given filename pattern, and process the earliest, circumventing the segments list file...but something more robust would be nice.
Thanks
At loglevel 40 / verbose or higher, you can grep ffmpeg's log for the word ended and see lines like these:
[segment # 000002d338781120] segment:'Mymon/Cam1_out20221228_093412.mp4' count:23 ended
Get the log by reading stderr or by adding -report to generate a logfile.
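A sketch of that approach, assuming the log format of the sample line above (the helper name segment_from_log_line is made up):

```shell
# Print the completed segment's path from one verbose-log line,
# or print nothing if the line is not a "segment ... ended" line.
segment_from_log_line() {
  printf '%s\n' "$1" | sed -n "s/.*segment:'\([^']*\)'.*ended\$/\1/p"
}

# Usage sketch: tail the -report logfile and handle each finished segment.
#   tail -F ffmpeg-*.log | while read -r line; do
#     f=$(segment_from_log_line "$line")
#     [ -n "$f" ] && process_segment "$f"   # process_segment is your own handler
#   done
```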

Remove a section from the middle of a video without concat

How do I cut a section out of a video with ffmpeg?
Imagine I have a 60 second mp4 A.
I want to remove all the stuff from 0:15 to 0:45.
The result should be a 30-second mp4, which is composed of the first 15 seconds of A directly followed by the last 15 seconds of A.
How can I do this without using concat?
I know how I could do it by creating two intermediary files and then using ffmpeg to concat them. I don't want to have to perform so much manual work for this (simple?) operation.
I have also seen the trim filter used for removing multiple parts from a video. All the usages I've found are very verbose, and I haven't found an example for a case as simple as mine (just a single section removed).
Do I have to use trim for this operation? Or are there other less verbose solutions?
The ideal would of course be something at least as simple as -ss 0:15 -to 0:45, which removes the ends of a video (-cut 0:15-0:45, for example).
I started from
https://stackoverflow.com/a/54192662/3499840 (currently the only answer to "FFmpeg remove 2 sec from middle of video and concat the parts. Single line solution").
Working from that example, the following works for me:
# In order to keep <start-15s> and <45s-end>, you need to
# keep all the frames which are "not between 15s and 45s":
ffmpeg -i input.mp4 \
-vf "select='not(between(t,15,45))', setpts=N/FRAME_RATE/TB" \
-af "aselect='not(between(t,15,45))', asetpts=N/SR/TB" \
output.mp4
This is a one-line linux command, but I've used the bash line-continuation character ('\') so that I can vertically align the equals-signs as this helps me to understand what is going on.
I had never seen ffmpeg's not and between operators before, but I found their documentation here.
Regarding the usual ffmpeg "copy vs re-encode" dichotomy, I was hoping to use ffmpeg's "copy" "codec" (yeah, I know that it's not really a codec) so that ffmpeg would not re-encode my video. But if I specify "copy", ffmpeg starts and stops at the nearest keyframes, which are not sufficiently close to my desired start and stop points (I want to remove a piece of video that is approximately 20 seconds long, but my source video only has one keyframe every 45 seconds!), so I am obliged to re-encode. See https://trac.ffmpeg.org/wiki/Seeking#Seekingwhiledoingacodeccopy for more info.
The setpts/asetpts filters set the timestamps on each frame to the correct values so that your media player will play each frame at the correct time.
HTH.
If you want to use the copy "codec", consider the following approach:
ffmpeg -i input.mp4 -t "$start_cut_section" -c copy part1.mp4&
ffmpeg -i input.mp4 -ss "$end_cut_section" -c copy part2.mp4&
echo "file 'part1.mp4'" > filelist;
echo "file 'part2.mp4'" >> filelist;
wait;
ffmpeg -f concat -i filelist -c copy output.mp4;
rm filelist;
This creates two files, from before and after the cut, then combines them into a new trimmed final video. Obviously, this can be extended to as many cuts as you like. It may seem like a longer approach than the accepted answer, but it will likely execute much faster because it uses the copy codec.

FFMpeg Copy Live Stream (Limit to 60s file)

I currently have a working way to get a live stream and start downloading it locally while it is still live.
ffmpeg -i source_hls.m3u8 -c copy output.mkv -y
The problem is I do not actually want to save the entire thing; I just periodically run another command on the output.mkv file to create a clip of part of the live stream.
I was wondering if it was possible to limit the output.mkv file to be only 60s long so once the stream goes over 1 minute it will just cut off the old video and be replaced by the new rolling video.
Is this possible or no?
You can come close, using the segment muxer.
ffmpeg -i source_hls.m3u8 -c copy -f segment -segment_time 60 -segment_wrap 2 -reset_timestamps 1 out%02d.mkv -y
This will write to out00.mkv, then out01.mkv, then overwrite out00.mkv, next overwrite out01.mkv and so on.
The segment time is set at 60 seconds, so each segment will be around 60 seconds long. The targets for splitting are 60, 120, 180, 240... seconds of the input. However, video streams will only be split at keyframes at or after the split target. So, if the first keyframe after t=59 is at 66, then the first segment will be 66s long. The next target is 120s. Let's say there's a KF at 121s, so the 2nd segment will run from 66 to 121s, i.e. 55s long. Something to keep in mind when checking the segments.
Check the file modification times to see which segment contains the earlier data.
If you want to reduce the surplus duration, decrease segment_time and increase segment_wrap correspondingly. segment_time × segment_wrap should be about the target saved duration plus segment_time.
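That sizing rule can be sketched as a small calculation (the numbers are illustrative, not recommendations):

```shell
target=60        # seconds of stream you want to keep on disk
segment_time=15  # shorter segments leave less surplus but create more files

# Enough segments to cover the target, plus the one currently being written:
segment_wrap=$(( target / segment_time + 1 ))

# Rolling window on disk, per the rule above: target + segment_time.
window=$(( segment_time * segment_wrap ))
```

These values would translate to -segment_time 15 -segment_wrap 5 in the command above.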
Late answer, but you can use -t duration, i.e.:
ffmpeg -y -t 60 -i source_hls.m3u8 -c copy output.mkv
From ffmpeg docs:
-t duration (input/output)
When used as an input option (before -i), limit the duration of data
read from the input file.
When used as an output option (before an output url), stop writing the
output after its duration reaches duration.
duration must be a time duration specification, see (ffmpeg-utils)the
Time duration section in the ffmpeg-utils(1)
manual.
-to and -t are mutually exclusive and -t has priority.
-t argument examples:
11 - 11 seconds
11.111 - 11.111 seconds
1:11:11 - 1 hour, 11 minutes and 11 seconds
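As a sketch, the forms above can be converted to plain seconds with a small helper (to_seconds is a made-up name; awk handles the fractional case):

```shell
# Accepts SS, SS.mmm, MM:SS or HH:MM:SS and prints the value in seconds.
to_seconds() {
  printf '%s\n' "$1" | awk -F: '{ s = 0; for (i = 1; i <= NF; i++) s = s * 60 + $i; print s }'
}
```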

How to accurately match an image to a video frame and exit with ffmpeg

In Bash,
I am trying to match an image to a frame in ffmpeg. I also want to exit the ffmpeg process when the match is found. Here is a (simplified version) of the code currently:
ffmpeg -hide_banner -ss 0 -to 60 \
-i "video.mp4" -i "image.jpg" -filter_complex \
"blend=difference, blackframe" -f null - </dev/null 2>log.txt &
pid=$!
trap "kill $pid 2>/dev/null" EXIT
while kill -0 $pid 2>/dev/null; do
# (grep command to monitor log file)
# if grep finds blackframe match, return blackframe time
done
To my understanding, if the video actually contains a blackframe I will get a false-positive. How can I effectively mitigate this?
While this is unnecessary to answer the question, I would like to exit the ffmpeg process without having to use grep to constantly monitor the log file, instead using pure ffmpeg
Edit: I say this because while I understand the blend filter is computing the difference, I am getting a false positive on a blackframe in my video and I don't know why.
Edit: A possible solution to this issue is to not use blackframe at all, but psnr (Peak Signal-to-Noise Ratio); however, its normal usage is comparing two videos frame by frame, and I don't know how to effectively use it with an image as input.
Use
ffmpeg -ss 0 -t 60 -copyts -i video.mp4 -i image.jpg -filter_complex "[0]extractplanes=y[v];[1]extractplanes=y[i];[v][i]blend=difference,blackframe=0,metadata=select:key=lavfi.blackframe.pblack:value=100:function=equal,trim=duration=0.0001,metadata=print:file=-" -an -v 0 -vsync 0 -f null -
If a match is found, it will print to stdout a line of the form,
frame:179 pts:2316800 pts_time:6.03333
lavfi.blackframe.pblack=100
else no lines will be printed. It will exit after the first match, if found, or else run until the whole input is processed.
Since blackframe only looks at luma, I use extractplanes both to speed up blend and also avoid any unexpected format conversions blend may request.
The blackframe threshold is set to 0, so all frames get the blackframe metadata tagged. False positives are not possible, since blend computes the difference: the difference between a black input frame and the reference frame equals the reference frame, unless the reference itself is a black frame, in which case it is not a false positive.
The first metadata filter only passes through frames with blackframe value of 100. The trim filter stops a 2nd frame from passing through (except if your video's fps is greater than 10000). The 2nd metadata filter prints the selected frame's metadata.
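The two-line output shown above can be reduced to just the timestamp with a small filter (match_time is a made-up name):

```shell
# Read ffmpeg's stdout and print only the pts_time value of the matched frame.
match_time() {
  sed -n 's/.*pts_time:\([0-9.]*\).*/\1/p'
}

# Usage sketch: t=$(ffmpeg ... -f null - | match_time)
```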

ffmpeg - Converting series of images to video (Ubuntu)

I got pictures named as
pic_0_new.jpg
pic_10_new.jpg
pic_20_new.jpg
...
pic_1050_new.jpg
which I want to turn into a video (Ubuntu ffmpeg). I tried the following
ffmpeg -start_number 0 -i pic_%d_new.jpg -vcodec mpeg4 test.avi
but I don't know how to set the step size and the end number. How to do this?
Thanks for help :)
If your files are named with leading zeroes then you can use the built-in globbing functionality. If not (like yours), you can create a file list and supply that as the input, just like explained here.
The other thing you need to set is the input framerate, which tells FFmpeg what framerate it should assume for the images (note that the option goes ahead of the -i input option).
So the command should look like this:
ffmpeg -framerate 25 -i pic_%04d_new.jpg <encoding_settings> test.avi
Also note that you can use the filename expansion on files with or without leading zeroes (thanks to Gyan for pointing this out):
match regularly numbered files: %d (this is what you're using)
img_%d.png // This will match files with any number as a postfix starting from img_0.png
match leading zeroes: %0<number_of_digits>d
img_%03d.png // This will match files ranging from img_000.png to img_999.png
In addition, mpeg4/avi is not the most convenient encoder/container to use...
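For the step-10 numbering in the question specifically, one sketch is to generate a concat list and feed it to the concat demuxer (filelist.txt and the 25 fps rate are assumptions, not requirements):

```shell
# List every 10th picture from pic_0_new.jpg to pic_1050_new.jpg.
for i in $(seq 0 10 1050); do
  printf "file 'pic_%d_new.jpg'\n" "$i"
done > filelist.txt

# Then something along the lines of:
#   ffmpeg -r 25 -f concat -safe 0 -i filelist.txt -vcodec mpeg4 test.avi
```

Changing the seq arguments changes the step size and end number, which is exactly what the %d pattern cannot express.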