How to get the timestamp of an image extracted using ffmpeg

How do I get the timestamp of an image extracted using ffmpeg? What option needs to be passed to the ffmpeg command?
The current command that I am using is:
ffmpeg -i video.mp4 -vf select='gt(scene\,0.3)' -vsync 0 -an keyframes%03d.jpg

To extract frames on scene changes and get the time of each frame, the following line might help:
ffmpeg -i image.mp4 -filter:v "select='gt(scene,0.1)',showinfo" -vsync 0 frames%05d.jpg >& output.txt
You will get output like: [Parsed_showinfo_1 @ 0x25bf900] n: 0 pts: 119357 pts_time:9.95637 pos: 676702..... You need to extract pts_time; to do that, run the following command.
grep showinfo output.txt | grep pts_time:[0-9.]* -o | grep '[0-9]*\.[0-9]*' -o > timestamps
Using the above command you will get the following:
9.95637
9.98974
15.0281
21.8016
28.208
28.4082
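If you need to pair each extracted frame with its timestamp, the frame files and the timestamps file line up one-to-one. A minimal sketch (assuming bash, and the frames%05d.jpg files and timestamps file produced by the commands above):
paste <(ls frames*.jpg | sort) timestamps
This prints one frame name and its timestamp per line, e.g. frames00001.jpg 9.95637.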

An option could be to write timestamps directly over each frame, using drawtext video filter.
On a Windows machine, using the Zeranoe ffmpeg package, you can type:
ffmpeg -i video.mp4 -vf "drawtext=fontfile=/Windows/Fonts/Arial.ttf: timecode='00\:00\:00\:00': r=25: x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1: fontsize=30" -vsync 0 -an keyframes%03d.jpg
The command will dump frame timestamps in the lower part of each frame, with seconds resolution.
Please have a look here for information on how to set up the font environment variables for ffmpeg.
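If you need sub-second resolution rather than a running timecode, drawtext can also expand each frame's presentation timestamp directly. A sketch, assuming the same Windows font path as above:
ffmpeg -i video.mp4 -vf "drawtext=fontfile=/Windows/Fonts/Arial.ttf: text='%{pts\:hms}': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1: fontsize=30" -vsync 0 -an keyframes%03d.jpg
Here %{pts\:hms} prints the timestamp as hours:minutes:seconds.milliseconds.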

As this question is tagged in python, I would like to add the following solution for Python developers:
from subprocess import check_output
import re
pts = str(check_output('ffmpeg -i video.mp4 -vf select="eq(pict_type\,I)" -an -vsync 0 keyframes%03d.jpg -loglevel debug 2>&1 |findstr select:1 ',shell=True),'utf-8') #replace findstr with grep on Linux
pts = [float(i) for i in re.findall(r"\bpts:(\d+\.\d)", pts)] # Find pattern that starts with "pts:"
print(pts)

A small update to @DeWil's answer (sorry, I could not comment as I do not have enough reputation).
The answer below gives the appropriate timestamp by matching the regular expression for the timestamp "t:", not "pts:".
from subprocess import check_output
import re
pts = str(check_output('ffmpeg -i input.mp4 -vf select="eq(pict_type\,I)" -an -vsync 0 keyframes%03d.jpg -loglevel debug 2>&1 |findstr select:1 ',shell=True),'utf-8') #replace findstr with grep on Linux
pts = [float(i) for i in re.findall(r"\bt:(\d+\.\d+)", pts)] # Find pattern that starts with "t:" (capture the full fractional part)
print(pts)

Related

Calculate VMAF score while encoding a video with FFmpeg

I have an ffmpeg version built with the VMAF library. I can use it to calculate the VMAF scores of a distorted video against a reference video using commands like this:
ffmpeg -i distorted.mp4 -i original.mp4 -filter_complex "[0:v]scale=640:480:flags=bicubic[main];[main][1:v]libvmaf=model_path=model/vmaf_v0.6.1.json:log_path=log.json" -f null -
Now, I remember there was a way to get VMAF scores while performing regular ffmpeg encoding. How can I do that at the same time?
I want to encode a video like this, while also calculating the VMAF of the output file:
ffmpeg -i original.mp4 -crf 27 -s 640x480 out.mp4
[edited]
Alright, scratch what I said earlier...
You should be able to use [the `tee` muxer](http://ffmpeg.org/ffmpeg-formats.html#tee-1) to save the file and pipe the encoded frames to another ffmpeg process. Something like this should work for you:
ffmpeg -i original.mp4 -crf 27 -s 640x480 -f tee "out.mp4 | [f=mp4]-" \
| ffmpeg -i - -i original.mp4 -filter_complex ...
(make them into 2 lines and remove \ for Windows)
Here is what works on my Windows PC (thanks to @Rotem for his help):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: | ffmpeg -i in.mp4 -f nut -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4
The main issue that @Rotem and I missed is that we need to terminate libvmaf's output. Also, the raw h264 format does not carry header info, and using `nut` alleviates that issue.
There are a couple of caveats:
Testing with the testsrc example that @Rotem suggested in the comment below does not produce any libvmaf log, at least as far as I can see, but in debug mode you can see the filter getting initialized.
You'll likely see a [nut @ 0000026b123afb80] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8) message in the log. This just means that the frames are piped in faster than the second ffmpeg is processing them. FFmpeg does block on both ends, so no info should be lost.
For the full disclosure, I posted my Python test script on GitHub. It just runs the shell command, so it should be easy to follow even if you don't do Python.
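For reference, the same two-process pipeline written for a POSIX shell with line continuations (a sketch; in.mp4, log.json and out.mp4 are the same placeholders as above):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: | \
ffmpeg -i in.mp4 -f nut -i pipe: \
  -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" \
  -map 1 -c copy out.mp4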

ffmpeg strftime no effect on windows

I'm trying to auto mark the output file with timestamps with ffmpeg. Here's my test cmd:
.\ffmpeg.exe -y -loglevel 99 -i test.mp3 -strftime 1 %Y.ogg
I expected a file named 2020.ogg, however I only got %Y.ogg. In other words, the output filename is not processed by strftime(). I'm using PowerShell, so there should be no relation to cmd's %% escaping.
Here's the output: https://pastebin.com/LUVh2kFA I'm using static builds from https://ffmpeg.zeranoe.com (Thanks Zeranoe!) I confirmed that the problem exists in v4.2.2 and git-20200515. Is there any chance to fix it or am I doing it wrong?
-strftime is not a general option and is only supported by certain muxers. A workaround is to use the segment muxer and to provide a -segment_time longer than the expected output duration:
ffmpeg -i input.mp3 -f segment -strftime 1 -segment_time 10:00:00 %Y.ogg
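Any strftime pattern should work in the output name. For example, a sketch of the same workaround with a full date-and-time stamp:
ffmpeg -i input.mp3 -f segment -strftime 1 -segment_time 10:00:00 %Y-%m-%d_%H-%M-%S.ogg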

FFmpeg: are trim and -t ignored for the PSNR filter?

I wanted to run a PSNR check on an encoded segment but avoid extracting the segment in a lossless codec first for comparison. I just wanted to trim the input; however, it looks like this is disabled.
My command:
ffmpeg -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=10:20,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[0v][1v]psnr" -f null -
This will run through the whole original input file and not trim the video in the filter.
If I try to trim the input with -ss and -t, only the input -ss flag works. It will set the input start correctly but ignore the -t duration.
ffmpeg -ss 10 -i original.mp4 -t 10 -i segment.mp4 -filter_complex [0:v][1:v]psnr -f null -
Different placement of the -t will have no effect.
I also tried to set the duration in trim while keeping the -ss input option, which works.
ffmpeg -ss 10 -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=duration=10,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[0v][1v]psnr" -f null -
I did try this with end and end_frame but neither one worked.
The same applies if I use -lavfi instead of -filter_complex.
I did have a brief look at the source code of the PSNR filter but could not find any references to trim or -t.
Is this function blocked or am I doing something wrong?
Would there be an alternative way to doing this without encoding a lossless version of the same segment to compare?
The original command is almost fine. However, the order of inputs should be swapped, and if there's any audio, that should be disabled.
ffmpeg -i original.mp4 -i segment.mp4 -filter_complex "[0:v]trim=10:20,setpts=PTS-STARTPTS[0v];[1:v]setpts=PTS-STARTPTS[1v];[1v][0v]psnr" -an -f null -
Also, in the snippet below
ffmpeg -ss 10 -i original.mp4 -t 10 -i segment.mp4
if you meant to limit the duration of original.mp4, then -t 10 should be placed before -i original.mp4.
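Putting both fixes together, a sketch like this should trim the reference and keep the PSNR inputs in the right order (segment.mp4 as the distorted input, original.mp4 as the reference):
ffmpeg -ss 10 -t 10 -i original.mp4 -i segment.mp4 -filter_complex "[1:v][0:v]psnr" -an -f null -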

Cannot add images after adding a background to audio

I use the command below to add a background image to an audio file:
"ffmpeg" -i /path/to/image.png -i /path/to/audio.mp3 -vsync vfr -pix_fmt yuv420p /path/to/video-1.mp4 2>&1
Next, I use this command to add other images to the created video:
"ffmpeg" -f concat -safe 0 -i /path/to/text.txt -i /path/to/video-1.mp4 /path/to/video-2.mp4 2>&1
Content of my text.txt file:
/path/to/img1.jpg
duration 6
/path/to/img2.jpg
duration 6
/path/to/img3.jpg
duration 6
/path/to/img4.jpg
duration 6
The obtained video displays only the background image; the other images are not shown.
What is wrong with these commands? And how can I set the display position of the other images in the video? To position the images, I use the command below, but after it ran, my machine crashed:
"ffmpeg" -f concat -safe 0 -i /path/to/text.txt -i /path/to/video-1.mp4 -filter_complex "overlay=0:0, scale=640:640" /path/to/video-2.mp4 2>&1
Update:
I followed the steps below in order and got a video with text and images, but the position is incorrect. I can't find any way to customize it:
"ffmpeg" -f concat -safe 0 -i /path/to/text.txt -i /path/to/audio.mp3 -vsync vfr -pix_fmt yuv420p /path/to/tmp-video-1.mp4 2>&1
"ffmpeg" -i /path/to/background.png -i /path/to/tmp-video-1.mp4 -filter_complex "overlay=0:0" /path/to/tmp-video-2.mp4 2>&1
"ffmpeg" -i /path/to/tmp-video-2.mp4 -vf "[in]<define texts and duration>[out]" -codec:a copy /path/to/endvideo.mp4 2>&1
Please take a look at my video: http://184.171.170.45/cron/tmp/funny-1489648654.mp4
I tried setting the overlay position in the second command to -640:0 to put the images in the horizontal center of the background, but it's not working.
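For what it's worth, the overlay filter can center one input on the other using its built-in size variables, instead of hand-tuned negative offsets. A sketch based on the second command above:
"ffmpeg" -i /path/to/background.png -i /path/to/tmp-video-1.mp4 -filter_complex "overlay=(W-w)/2:(H-h)/2" /path/to/tmp-video-2.mp4 2>&1
Here W and H are the background's dimensions and w and h are the overlaid video's.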

encode video in reverse?

Does anyone know if it is possible to encode a video using ffmpeg in reverse? (So the resulting video plays in reverse?)
I think I can do it by generating images for each frame (so a folder of images labelled 1.jpg, 2.jpg, etc.), then writing a script to change the image names, and then re-encoding the video from these files.
Does anyone know of a quicker way?
This is an FLV video.
Thank you
No, it isn't possible using ffmpeg to encode a video in reverse without dumping it to images and then back again. There are a number of guides available online to show you how to do it, notably:
http://ubuntuforums.org/showthread.php?t=1353893
and
https://sites.google.com/site/linuxencoding/ffmpeg-tips
The latter of which follows:
Dump all video frames
$ ffmpeg -i input.mkv -an -qscale 1 %06d.jpg
Dump audio
$ ffmpeg -i input.mkv -vn -ac 2 audio.wav
Reverse audio
$ sox -V audio.wav backwards.wav reverse
Cat video frames in reverse order to FFmpeg as input
$ cat $(ls -r *jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -i backwards.wav -vcodec libx264 -vpre slow -crf 20 -threads 0 -acodec flac output.mkv
Use mencoder to deinterlace PAL dv and double the frame rate from 25 to 50, then pipe to FFmpeg.
$ mencoder input.dv -of rawvideo -ofps 50 -ovc raw -vf yadif=3,format=i420 -nosound -really-quiet -o - | ffmpeg -vsync 0 -f rawvideo -s 720x576 -r 50 -pix_fmt yuv420p -i - -vcodec libx264 -vpre slow -crf 20 -threads 0 video.mkv
I've created a script for this based on Andrew Stubbs' answer
https://gist.github.com/hfossli/6003302
Can be used like so
./ffmpeg_sox_reverse.sh -i Desktop/input.dv -o test.mp4
New Solution
A much simpler method exists now, simply use the command (adjusting input.mkv and reversed.mkv accordingly):
ffmpeg -i input.mkv -af areverse -vf reverse reversed.mkv
The -af areverse will reverse audio, and -vf reverse will reverse video. The video and audio will be in sync automatically in the output file reversed.mkv, no need to worry about the input frame rate or anything else.
On one video if I only specified the -vf reverse to reverse video (but not audio), the output file didn't play correctly in mkv format but did work if I changed it to mp4 output format (I don't think this use case of reversing video only but not audio is common, but if you do run into this issue you can try changing the output format). On large input videos that exceed the RAM available in your computer, this method may not work and you may need to chop up the input file or use the old solution below.
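For such large inputs, a rough sketch of the chop-and-reverse approach (assuming bash; the file names and the 10-second piece length are arbitrary choices):
ffmpeg -i input.mkv -c copy -map 0 -f segment -segment_time 10 part%03d.mkv
for f in part*.mkv; do ffmpeg -i "$f" -af areverse -vf reverse "rev_$f"; done
ls -1 rev_part*.mkv | sort -r | sed "s/.*/file '&'/" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy reversed.mkv
Each piece is reversed on its own, then the reversed pieces are concatenated in reverse order. Note that -c copy segmentation cuts at keyframes, so the piece lengths are only approximate.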
Old Solution
One issue is that the frame rate can vary depending on the video; many answers depend on a specific frame rate (like "-r 25" for 25 frames per second). If the frame rate in the video is different, this will cause the reversed audio and video to go out of sync.
You can of course manually adjust the frame rate each time (you can get the frame rate by running ffmpeg -i video.mkv and looking for the number in front of "fps"; this is sometimes a decimal number, like 23.98). But with some bash code you can easily extract the fps, store it in a variable, and automatically pass it to the programs.
Based on this I've created the following bash script to do that. Simply chmod +x it and run it: ./make-reversed-video.sh input.mkv output.mkv. The code is as follows:
#!/bin/bash
#Partially based on https://nhs.io/reverse/, but with some modifications, including automatic extraction of the frame rate.
#Get parameters.
VIDEO_FILE=$1
OUTPUT_FILE=$2
TEMP_FOLDER=$3
echo Using input file: $VIDEO_FILE
echo Using output file: $OUTPUT_FILE
mkdir -p /tmp/create_reversed_video
#Get frame rate.
FRAME_RATE=$(ffmpeg -i "$VIDEO_FILE" 2>&1 | grep -o -P '[0-9. ]+fps' | grep -o -P '[0-9.]+')
echo The frame rate is: $FRAME_RATE
#Extract audio from video.
ffmpeg -i "$VIDEO_FILE" -vn -ac 2 /tmp/create_reversed_video/audio.wav
#Reverse the audio.
sox -V /tmp/create_reversed_video/audio.wav /tmp/create_reversed_video/backwards.wav reverse
#Extract each video frame as an image.
ffmpeg -i "$VIDEO_FILE" -an -qscale 1 /tmp/create_reversed_video/%06d.jpg
#Recombine into reversed video.
ls -1 /tmp/create_reversed_video/*.jpg | sort -r | xargs cat | ffmpeg -framerate $FRAME_RATE -f image2pipe -i - -i /tmp/create_reversed_video/backwards.wav "$OUTPUT_FILE"
#Delete temporary files.
rm -rf /tmp/create_reversed_video
I've tested it and it works well on my Ubuntu 18.04 machine on lots of videos (after installing the dependencies like sox). Please let me know if it works on other Linux distributions and versions.
