match an image to a specific frame within a video with ffmpeg

I have some images that were taken from a video via screen capture. I would like to know when in the video these images appear (timestamps). Is there a way to programmatically match an image with a specific frame in a video using ffmpeg or some other tool?
I am very open to different technologies, as I'm eager to automate this. It would be extremely time-consuming to do this manually.

You can get the PSNR between that image and each frame in the video; the match is the frame with the highest PSNR. ffmpeg has a tool to calculate PSNR in tests/tiny_psnr, which you can use to script this together, or there's also a psnr filter in the libavfilter module in ffmpeg if you prefer to code rather than script.
Scripting, you'd basically decode the video to a FIFO, decode the image to a file, and then repeatedly match the FIFO frames against the image file using tiny_psnr, selecting the frame number of the frame with the highest PSNR. The output will be a frame number, which (using the fps reported on the command line) you can approximately convert to a timestamp. A sketch of this scripting route is shown below.
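If you'd rather not build tiny_psnr, the psnr filter alone can also drive the scripting route. A minimal, untested sketch; video.mp4, image.png, and psnr.log are hypothetical names, and -loop 1 repeats the still image so it is compared against every video frame:
ffmpeg -i video.mp4 -loop 1 -i image.png \
  -lavfi "[0:v][1:v]psnr=stats_file=psnr.log" -f null -
# psnr.log gets one line per frame, e.g. "n:1 ... psnr_avg:17.58 ..."
# print the best match last (GNU sort -g also orders "inf", a perfect match, last):
awk '{for(i=1;i<=NF;i++) if($i ~ /^psnr_avg:/){sub(/^psnr_avg:/,"",$i); print $i, $1}}' psnr.log | sort -g | tail -n 1
Dividing the winning frame number by the video's fps gives the approximate timestamp.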
Programming-wise, you'd decode the video and the image each to an AVFrame, use the psnr filter to compare the two, then look at the output frame metadata to record the PSNR value in your program, search for the frame with the highest PSNR metadata value, and for that frame AVFrame->pkt_pts would be the timestamp.

Related

Pass image rectangles containing only the changes to the ffmpeg libavcodec encoder

I am getting a list of small rectangle images which contain the parts of the image that have changed from the previous image. This comes from desktop capture with DirectX 11, which reports which parts of the desktop image have changed and provides the rectangles for them.
I am trying to figure out whether I can pass them to ffmpeg's libavcodec encoder for H.264. I looked into AVFrame and didn't see a way to specify the actual parts that have changed from the previous image.
Is there a way, when passing an image to the ffmpeg codec context to encode it into the video, to pass just the parts that changed from the previous frame? Doing this might reduce CPU usage, which matters because this is for a live stream.
I use the standard avcodec_send_frame() to send a frame to the codec for encoding; it only takes an AVFrame and a codec context as parameters.

Using blurdetect to filter blurred keyframes in FFmpeg 5

I want to extract the keyframes of a video with ffmpeg and determine whether each keyframe is blurred using a predefined threshold. I noticed the new blurdetect filter in FFmpeg 5, so I tried the following command:
ffmpeg -i test.mp4 -filter_complex "select=eq(pict_type,I),blurdetect=block_width=32:block_height=32:block_pct=80" -vsync vfr -qscale:v 2 -f image2 ./I_frames_ffmpeg/image%08d.jpg
Using this command I can get the keyframes, and at the end ffmpeg prints the average blur value of those frames in the terminal (a final "blur mean" log line).
My question is, can I use the blurdetect filter to get the blur value for each frame? Can I use this blur value as a precondition for keyframe selection, e.g. only select this frame as a keyframe if the blur value is less than 5?
Yes, the blurdetect filter pushes the blur value of each frame into frame metadata, which you can capture with the metadata filter. Try the following filtergraph:
select=eq(pict_type,I),\
blurdetect=block_width=32:block_height=32:block_pct=80,\
metadata=print:file=-
The metadata filter outputs to stdout, so you'll see two lines for each frame, like:
frame:1295 pts:1296295 pts_time:43.2098
lavfi.blur=4.823009
Note that the terminal may get cluttered with other logs, but these lines should be the only ones actually on stdout (regular ffmpeg logs go to stderr), so you should be able to capture them easily. From there a simple regex will retrieve the blur values.
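Putting it together with the original command, a minimal untested sketch; the awk pairing of each pts_time with the following lavfi.blur value is an assumption based on the two-line format shown above:
ffmpeg -i test.mp4 \
  -filter_complex "select=eq(pict_type,I),blurdetect=block_width=32:block_height=32:block_pct=80,metadata=print:file=-" \
  -vsync vfr -qscale:v 2 -f image2 ./I_frames_ffmpeg/image%08d.jpg \
  2>/dev/null | awk '/^frame:/ {ts=$3} /lavfi.blur/ {sub(/^lavfi.blur=/,""); print ts, $0}'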
Can I use this blur value as a precondition for keyframe selection, e.g. only select this frame as a keyframe if the blur value is less than 5?
I believe (not verified) that the metadata filter can do exactly this:
metadata=select:key=lavfi.blur:value=5:function=less
Not the best documentation, but it's all there
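Appended to the filtergraph above in place of metadata=print, that should keep only the keyframes whose blur value is below 5; an untested sketch (the ./sharp_I_frames output directory is a hypothetical name):
ffmpeg -i test.mp4 \
  -filter_complex "select=eq(pict_type,I),blurdetect=block_width=32:block_height=32:block_pct=80,metadata=select:key=lavfi.blur:value=5:function=less" \
  -vsync vfr -qscale:v 2 -f image2 ./sharp_I_frames/image%08d.jpg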

Video frame extraction using ffmpeg

I am stuck on a frame-extraction problem with ffmpeg. I pick out a given timestamp in video editors like Filmora and Shotcut (both agree down to two decimal places of a second) and then use that timestamp in ffmpeg to extract all the frames at the native framerate. However, I don't get perfect coherence: the first frame extracted and the corresponding image in the editors (VLC, Wondershare, and Filmora all show the same) are different.
Please find an example of the command below:
ffmpeg -i "/mnt/sda1/Downloaded_Videos/25mm_Videos/24-12_21/0840.mp4" -ss 00:09:50.18 -to 00:10:49.22 /mnt/sda1/ExtractedFrames/25mm/24Dec_test/frame%5d_0r_0840_00095018_00095018.png
The extracted frame frame_0_0r_0840_00095018_00095018.png is different from the image or frame shown in the editors and in VLC when seeking to timestamp 00:09:50.18.
Thanks for the valuable comment, @Gyan. The frame count does the trick. For example:
00:00:10:20 -> signifies the 20th frame of the 10th second of the video,
whereas
00:00:10.20 -> signifies the frame at 200 milliseconds past the 10th second of the video.
Conversion (you need to know the video's fps): 00:00:10:20 = 00:00:10 plus (20 * 1000) / fps milliseconds.
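As a concrete illustration of that conversion, a small untested bash sketch; the 25 fps value and the file names are assumptions, so substitute your video's actual frame rate:
fps=25
tc="00:09:50:18"                      # frame-based timecode (HH:MM:SS:FF) from the editor
IFS=: read -r h m s f <<< "$tc"
ms=$(( 10#$f * 1000 / fps ))          # frame number -> milliseconds (18 * 1000 / 25 = 720)
printf -v ts "%s:%s:%s.%03d" "$h" "$m" "$s" "$ms"
echo "$ts"                            # 00:09:50.720
ffmpeg -ss "$ts" -i 0840.mp4 -frames:v 1 frame.png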

Record from camera, save to file, and access last recorded frame

I want to record video from a camera, save it to file, and at the same time have access to the last frame recorded.
One idea would be to use ffmpeg's multiple-outputs functionality to split the stream in two: one output gets saved to file, the other spits out the last recorded frame (ideally, the frames wouldn't be written to disk but piped onwards for processing).
What I don't know is how to get ffmpeg to spit out "the last frame" of a stream.
Any ideas?
Output a video and continuously update an image every second
ffmpeg -i input.foo outputvideo.mp4 -r 1 -update 1 image.jpg
Output a video and output a new image every second
ffmpeg -i input.foo outputvideo.mp4 -r 1 images_%04d.jpg
Output will be named images_0001.jpg, images_0002.jpg, etc.
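For the question's wish to pipe frames onwards instead of writing them to disk, the image output can also go to stdout with the image2pipe muxer; a minimal, untested sketch, where your_processor is a hypothetical stand-in for whatever consumes the frames:
ffmpeg -i input.foo outputvideo.mp4 \
  -r 1 -f image2pipe -c:v mjpeg - | your_processor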
Also see the FFmpeg image muxer documentation for more info and options.
How can I extract a good quality JPEG image from a video file with ffmpeg?

extract a high-definition image from 1080i video with ffmpeg in Linux

I am currently using the ffmpeg library to extract images from a 1080i YUV 4:2:2 raw file. Because the data is interlaced, some lines are dropped when I extract an image from one frame of the video. Is it possible to merge two or three frames to make a single high-definition image? Please guide me on how to move forward.
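One common way to get a full-height picture from interlaced material is to deinterlace, so that both fields of a frame contribute to one progressive image, rather than merging separate frames. A minimal, untested sketch using ffmpeg's yadif filter; the rawvideo input parameters (pixel format, frame size, frame rate) and file names are assumptions that must be adjusted to match your file:
ffmpeg -f rawvideo -pixel_format uyvy422 -video_size 1920x1080 -framerate 25 \
  -i input.raw -vf "yadif=mode=send_frame" -frames:v 1 out.png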
