Reporting duplicated frames with FFmpeg

I am looking for a method to report (not just detect and remove) duplicated frames of video detected by FFmpeg - similar to how you can print out blackdetect, cropdetect, silencedetect, etc.
For example:
ffmpeg -i input.mp4 -vf blackdetect -an -f null - 2>&1 | grep blackdetect > output.txt
Outputs something like:
[blackdetect @ 0x7f8032f03680] black_start:5.00501 black_end:7.00701 black_duration:2.002
But there's no "dupedetect" filter as far as I know, so I'm looking for any ideas/workarounds to get a read of where frames are duplicated.

Try -vf mpdecimate on the command line.
ffmpeg -i input.mp4 -vf mpdecimate -loglevel debug -an -f null - 2>&1 | grep 'drop_count:[0-9]' > output.txt
Sample line of output:
[Parsed_mpdecimate_0 @ 0x7fbbfa210380] lo:0<2653 lo:0<1326 lo:0<1326 drop pts:101101 pts_time:4.21254 drop_count:1
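If you only want the timestamps of the duplicated frames, the drop lines can be reduced further. A minimal sketch, assuming Bash and GNU grep (the output filename dupe_timestamps.txt is just an example):
ffmpeg -i input.mp4 -vf mpdecimate -loglevel debug -an -f null - 2>&1 \
  | grep 'drop pts' \
  | grep -o 'pts_time:[0-9.]*' \
  | cut -d: -f2 > dupe_timestamps.txt
Each line of dupe_timestamps.txt is then the pts_time (in seconds) of one dropped, i.e. duplicated, frame.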

Related

Extract LUFS only from multiple files

ffmpeg -i input.mp4 -af ebur128=framelog=verbose -f null - 2>&1 | awk '/I:/{print $2}'
The above command extracts only the LUFS value from input.mp4. But if there are a number of mp4 files, how can I apply a similar command to extract only the LUFS value from each of them?
Please help.
Adapting this answer from "How do you convert an entire directory with ffmpeg?":
for i in *.mp4; do echo "$i:"; ffmpeg -i "$i" -map 0:a -af ebur128=framelog=verbose -f null - 2>&1 | awk '/I:/{print $2}'; done
Example output:
video1.mp4:
-21.8
video2.mp4:
-21.1
video3.mp4:
-21.8
-8.3
Note that video3.mp4 contains 2 separate audio streams.
This assumes you can use the Bash shell.
-map 0:a was added to only process the audio so the video is ignored and therefore the command is faster. See FFmpeg Wiki: Map.
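If you would rather collect the results in one file than read them off the terminal, the whole loop can be redirected. A minimal sketch, still assuming Bash (loudness.csv is an arbitrary name):
for i in *.mp4; do
  ffmpeg -i "$i" -map 0:a -af ebur128=framelog=verbose -f null - 2>&1 \
    | awk -v f="$i" '/I:/{print f "," $2}'
done > loudness.csv
Each line of loudness.csv is filename,integrated-LUFS, with one line per audio stream as in the video3.mp4 example above.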

FFmpeg: are trim and -t ignored for the PSNR filter?

I wanted to run a PSNR check on an encoded segment while avoiding extracting the segment in a lossless codec first for comparison. I just wanted to trim the input; however, it looks like this is disabled.
My command:
ffmpeg -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=10:20,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[0v][1v]psnr" -f null -
This will run through the whole original input file and not trim the video in the filter.
If I try to trim the input with -ss and -t, only the input -ss flag works: it sets the input start correctly but ignores the -t duration.
ffmpeg -ss 10 -i original.mp4 -t 10 -i segment.mp4 -filter_complex [0:v][1:v]psnr -f null -
Different placement of the -t will have no effect.
I also tried setting the duration in trim while keeping the input -ss (which does work):
ffmpeg -ss 10 -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=duration=10,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[0v][1v]psnr" -f null -
I did try this with end and end_frame but neither one worked.
The same applies if I use -lavfi instead of -filter_complex.
I did have a brief look at the source code of the PSNR filter but could not find any references to trim or -t.
Is this function blocked or am I doing something wrong?
Would there be an alternative way to doing this without encoding a lossless version of the same segment to compare?
The original command is almost fine. However, the order of the inputs fed to the psnr filter should be swapped (the distorted stream comes first, the reference second), and if there's any audio, it should be disabled.
ffmpeg -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=10:20,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[1v][0v]psnr" -an -f null -
Also, in the snippet below
ffmpeg -ss 10 -i original.mp4 -t 10 -i segment.mp4
if you meant to limit the duration of original.mp4, then -t 10 should be placed before -i original.mp4.
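For completeness, a sketch of the input-trimming variant with both fixes applied (assuming original.mp4 is the reference and segment.mp4 the 10-second encoded segment; exact frame alignment with input seeking can still depend on keyframe placement):
ffmpeg -ss 10 -t 10 -i original.mp4 -i segment.mp4 -filter_complex "[1:v][0:v]psnr" -an -f null -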

ffmpeg silencedetect -ss argument

I need to detect silence in a part of a sound file
ffmpeg -i 3.mp3 -ss 00:22:00 -to 00:23:30 -af "silencedetect=noise=-18dB:d=0.15,ametadata=mode=print:file=vol.txt" -f null -
This code detects silence from the beginning of the file to the value of -to.
Is it possible to print only lavfi.silence_start and lavfi.silence_end? And how do I pass the time range to
"silencedetect=noise=-18dB:d=0.15,ametadata=mode=print:file=vol.txt"
Use the aselect filter instead:
ffmpeg -i 3.mp3 -af "aselect='between(t,1320,1410)',silencedetect=noise=-18dB:d=0.15,ametadata=mode=print:file=vol.txt" -f null -
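If you then only want the lavfi.silence_start / lavfi.silence_end lines out of vol.txt, the simplest approach is probably to filter the file afterwards; a sketch, assuming a shell with grep available:
grep -E 'lavfi\.silence_(start|end)' vol.txt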

how to get the timestamp of the image extracted using ffmpeg

How can I get the timestamp of each image extracted using ffmpeg? What option has to be passed to the ffmpeg command?
The current command that I am using is:
ffmpeg -i video.mp4 -vf select='gt(scene\,0.3)' -vsync 0 -an keyframes%03d.jpg
To extract frames on scene changes and get the time of each particular frame, the following line might help:
ffmpeg -i image.mp4 -filter:v "select='gt(scene,0.1)',showinfo" -vsync 0 frames%05d.jpg >& output.txt
You will get output like: [Parsed_showinfo_1 @ 0x25bf900] n: 0 pts: 119357 pts_time:9.95637 pos: 676702..... You need to extract pts_time; to do that, run the following command.
grep showinfo output.txt | grep pts_time:[0-9.]* -o | grep '[0-9]*\.[0-9]*' -o > timestamps
Using the above command you will get something like the following:
9.95637
9.98974
15.0281
21.8016
28.208
28.4082
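The same extraction can also be done in one pass; a sketch, assuming GNU grep and that the ffmpeg output was saved to output.txt as above:
grep showinfo output.txt | grep -o 'pts_time:[0-9.]*' | cut -d: -f2 > timestamps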
An option could be to write the timestamps directly onto each frame, using the drawtext video filter.
On a Windows machine, using Zeranoe ffmpeg package you can type:
ffmpeg -i video.mp4 -vf "drawtext=fontfile=/Windows/Fonts/Arial.ttf: timecode='00\:00\:00\:00': r=25: x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000#1: fontsize=30" tmp/frame%05d.jpg" -vsync 0 -an keyframes%03d.jpg
The command will dump frame timestamps in the lower part of each frame, with seconds resolution.
Please have a look here for information on how to set up the font environment variables for ffmpeg.
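If you would rather burn each frame's actual presentation timestamp than a running timecode, drawtext's text expansion can be used instead; a sketch (the font path and output pattern are only examples):
ffmpeg -i video.mp4 -vf "drawtext=fontfile=/Windows/Fonts/Arial.ttf: text='%{pts\:hms}': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=black@0.5: fontsize=30" -vsync 0 -an tmp/frame%05d.jpg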
As this question is tagged in python, I would like to add the following solution for Python developers:
from subprocess import check_output
import re
pts = str(check_output('ffmpeg -i video.mp4 -vf select="eq(pict_type\,I)" -an -vsync 0 keyframes%03d.jpg -loglevel debug 2>&1 |findstr select:1 ',shell=True),'utf-8') #replace findstr with grep on Linux
pts = [float(i) for i in re.findall(r"\bpts:(\d+\.\d)", pts)] # Find pattern that starts with "pts:"
print(pts)
A small update to DeWil's answer (sorry, I could not comment as I do not have enough reputation).
The answer below gives the appropriate timestamp by matching the regular expression against the timestamp field "t:" rather than "pts:".
from subprocess import check_output
import re
pts = str(check_output('ffmpeg -i input.mp4 -vf select="eq(pict_type\,I)" -an -vsync 0 keyframes%03d.jpg -loglevel debug 2>&1 |findstr select:1 ',shell=True),'utf-8') #replace findstr with grep on Linux
pts = [float(i) for i in re.findall(r"\bt:(\d+\.\d)", pts)] # Find pattern that starts with "t:"
print(pts)

Dump last frame of video file using ffmpeg/mencoder/transcode et. al

I'd like to grab the last frame in a video (.mpg, .avi, whatever) and dump it into an image file (.jpg, .png, whatever). Toolchain is a modern Linux command-line, so things like mencoder, transcode, ffmpeg &c.
Cheers,
Bob.
This isn't a complete solution, but it'll point you along the right path.
Use ffprobe -show_streams IN.AVI to get the number of frames in the video input. Then
ffmpeg -i IN.AVI -vf "select='eq(n,LAST_FRAME_INDEX)'" -vframes 1 LAST_FRAME.PNG
where LAST_FRAME_INDEX is the number of frames less one (frames are zero-indexed), will output the last frame.
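A sketch of how those two steps can be glued together in a shell, assuming the container actually reports nb_frames (for files where it does not, see the count_frames variant further down):
last=$(ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 IN.AVI)
ffmpeg -i IN.AVI -vf "select='eq(n,$((last-1)))'" -vframes 1 LAST_FRAME.PNG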
I have an mp4 / h264 Matroska input video file, and none of the above solutions worked for me (although I'm sure they work for other file formats).
Combining samrad's answer above with this great answer, I came up with this working code:
input_fn='output.mp4'
image_fn='output.png'
rm -f "$image_fn"
frame_count=$(ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 "$input_fn")
ffmpeg -i "$input_fn" -vf "select='eq(n,$frame_count-1)'" -vframes 1 "$image_fn" 2> /dev/null
I couldn't get Nelson's solution to work. This worked for me.
https://gist.github.com/samelie/32ecbdd99e07b9d8806f
EDIT (just in case the link disappears, here is the shell script—bobbogo):
#!/bin/bash
fn="$1"
of=$(echo "$fn" | sed 's/\.mp4$/.jpg/')
lf=$(ffprobe -show_streams "$fn" 2> /dev/null | grep nb_frames | head -1 | cut -d= -f2)
rm -f "$of"
lf=$((lf - 1))
ffmpeg -i "$fn" -vf "select='eq(n,$lf)'" -vframes 1 "$of"
One thing I have not seen mentioned is that the expected frame count can be off if the file contains dupes. If your method of counting frames is causing your image extraction command to come back empty, this might be what is mucking it up.
I have developed a short script to work around this problem. It is posted here.
I found that I had to add -vsync 2 to get it to work reliably. Here's the full command I use:
ffmpeg -y -sseof -3 -i $file -vsync 2 -update 1 -vf scale=640:480 -q:v 1 /tmp/test.jpg
If you write to an NFS folder the results will be very inconsistent, so I write to /tmp and then copy the file once it is done. If it is not done, I do a kill -9 on the process ID.
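A rough sketch of that wrapper logic, assuming Bash, coreutils timeout in place of the manual kill -9, and placeholder paths:
tmp=/tmp/test.jpg
dest=/mnt/nfs/frames/$(basename "$file" .mp4).jpg   # hypothetical NFS destination
timeout -s KILL 10 ffmpeg -y -sseof -3 -i "$file" -vsync 2 -update 1 -vf scale=640:480 -q:v 1 "$tmp" \
  && cp "$tmp" "$dest"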
