Extract first non-black keyframe with FFmpeg

I'm trying to extract a thumbnail image from a video's keyframes using ffmpeg; my command line is:
ffmpeg -i video.mp4 -vframes 1 -s 200x200 -vf select="eq(pict_type\,PICT_TYPE_I)" -vsync 0 -f image2 video.jpg
But the keyframe it extracts is totally black (the video starts with a black frame, I guess). Is there a way to automatically extract the first non-black keyframe from a video, without seeking to a specific time? (I have to manage multiple videos of varying durations.)
Thanks

I cannot think of a solution using ffmpeg alone. But if you extract the first few keyframes (by turning -vframes up to 20, for example) they can then be analyzed with ImageMagick. Reducing the image to one grayscale color picks the average gray value of the picture. A command line like
convert avatar.jpeg -colors 1 -type grayscale -format '%c' histogram:info:
which will produce an output like
16384: ( 80, 80, 80) #505050 gray(80)
(I used Simone's avatar picture as an example.) The last number is the most interesting for your case. It expresses how dark the image is, with 0 for ideal black and 255 for pure white. A sed script can easily extract it:
convert ... | sed 's/^.*(\(.*\))$/\1/'
Combine this with some shell scripting to find the first image that has a gray value higher than a given threshold and use it as the thumbnail.
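Putting those pieces together, here is a minimal shell sketch of the whole approach. The frame_%02d.jpg naming and the THRESHOLD value are my own choices, and it assumes the histogram line ends in gray(NN) as shown above:
#!/bin/sh
# 1) Dump the first 20 keyframes of the video as numbered JPEGs.
ffmpeg -i video.mp4 -vf "select=eq(pict_type\,PICT_TYPE_I)" -vsync 0 -vframes 20 frame_%02d.jpg
# 2) Pick the first frame whose average gray value exceeds a threshold.
THRESHOLD=16   # anything at or below studio black counts as "black"
for f in frame_*.jpg; do
  gray=$(convert "$f" -colors 1 -type grayscale -format '%c' histogram:info: \
         | sed 's/^.*(\(.*\))$/\1/')
  if [ "$gray" -gt "$THRESHOLD" ]; then
    cp "$f" video.jpg
    break
  fi
done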

With the option thumbnail=num_frames you can influence which frame gets extracted, but I don't know if it is possible to extract the first non-black keyframe. http://ffmpeg.org/ffmpeg.html#thumbnail
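For reference, a typical thumbnail filter invocation looks like this; it picks the most "representative" frame from the first 100 frames, which often avoids leading black frames, although it is not restricted to keyframes:
ffmpeg -i video.mp4 -vf thumbnail=100 -frames:v 1 video.jpg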

Related

How to compare images to find the least blurry image?
I want to automatically generate an image/thumbnail from a video.
I use ffmpeg for that.
However, once in a while the image is totally blurred, and I want to get rid of the blurry images.
My idea was to create multiple images per video and then compare the images to each other.
Now the question:
Is there a way to compare the blurryness of images?
I had a similar problem when choosing a thumbnail for a video.
My solution was to take 10 screenshots, each a second apart, and choose the one with the highest file size.
Making screenshots with ffmpeg:
ffmpeg -y -hide_banner -loglevel panic -ss *secondsinmovie* -i movie.mp4 -frames:v 1 -q:v 2 screenshot.jpg
Then, to fine-tune it, take that second, iterate over its frames, and again choose the highest file size.
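A small shell sketch of that idea (movie.mp4 and the shot_*.jpg names are placeholders):
#!/bin/sh
for i in 1 2 3 4 5 6 7 8 9 10; do
  ffmpeg -y -hide_banner -loglevel panic -ss "$i" -i movie.mp4 -frames:v 1 -q:v 2 "shot_$i.jpg"
done
# JPEG file size is a rough proxy for detail: keep the largest screenshot.
ls -S shot_*.jpg | head -n 1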

ffmpeg - create a video from a sprite

I have an image that represents a short animation as 40 frames in 5 rows and 8 columns. How can I use ffmpeg to generate a video from this?
I've read that answer to generate a video from a list of images but I'm unsure about how to tell ffmpeg to read parts of a single image in sequence.
As far as I know, there is no built-in way of doing this using ffmpeg. But you could first extract all the frames using two nested for loops and an ImageMagick crop, and then use ffmpeg to generate the video from the extracted files, as sketched below.
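In fact ImageMagick can do the tiling in a single call with its WxH@ crop syntax, so the nested loops are not even needed. A sketch, assuming the frames run left to right, top to bottom (sprite.png and the frame rate are placeholders):
convert sprite.png -crop 8x5@ +repage frame_%02d.png
ffmpeg -framerate 25 -i frame_%02d.png out.mp4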
You can use an animated crop to do this. The basic template is:
ffmpeg -loop 1 -i image -vf "crop=iw/8:ih/5:mod(n,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4
Basically, the crop extracts a window of iw/8 x ih/5 on each frame, and the coordinates of the top-left corner of the crop window are animated by the 3rd and 4th arguments, where n is the frame index (starting from 0).
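The same template generalizes to any grid; for example, a 4-column by 4-row sheet (sheet.png is a placeholder name) would be:
ffmpeg -loop 1 -i sheet.png -vf "crop=iw/4:ih/4:mod(n,4)*iw/4:trunc(n/4)*ih/4" -vframes 16 out.mp4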

JPEG colors worse than PNG when extracting frames with ffmpeg?

When extracting still frames from a video at a specific time mark, like this:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 example.png
I noticed that using PNG or JPG results in different colors. (Note that -q:v 1 indicates maximum image quality.)
Here are some examples:
[three example screenshot pairs, JPG vs PNG]
In general, the JPG shots seem to be slightly darker and less saturated than the PNGs.
When checking with exiftool or imagemagick's identify, both images use sRGB color space and no ICC profile.
Any idea what's causing this? Or which of these two would be 'correct'?
I also tried saving screenshots with my video player (MPlayerX), in both JPG and PNG. In that case, the frame dumps in either format look exactly the same, and they look mostly like ffmpeg's JPG stills.
This is related to video range, or levels. Video stores color as luma and chroma, i.e. brightness and color difference, and for legacy reasons dating back to analogue signals, black and white are not represented as 0 and 255 in an 8-bit encoding but as 16 and 235 respectively. A video stream should normally be flagged when this is the case, since one can also store video where 0 and 255 are black and white respectively. If the file isn't flagged, or is flagged wrongly, then some rendering or conversion functions can produce the wrong results. But we can force FFmpeg to interpret the input one way or the other.
Use
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 -src_range 0 -dst_range 1 example.png/jpg
This tells FFmpeg to assume studio (limited) range and to output full range. The colours still won't be identical due to the color encoding conversion, but the major difference should disappear.
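If you prefer to do the range expansion in the filter graph instead, the scale filter exposes the same controls; an equivalent sketch, assuming the input really is limited range:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -vf "scale=in_range=tv:out_range=pc" example.png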
I don't know about ffmpeg specifically. But, in general, JPEG images use compression that lowers the quality slightly in exchange for a large reduction in file size. Most programs that can write JPEG files have a switch (or however they take options) that sets the "quality" or "compression" or something like that. I don't seem to have ffmpeg on any of the half dozen machines I have open, or I'd tell you what I thought the right one was.

What is the variable "a" in ffmpeg?

In using the scale filter with ffmpeg, I see many examples similar to this:
ffmpeg -i input.mov -vf scale="'if(gt(a,4/3),320,-2)':'if(gt(a,4/3),-2,240)'" output.mov
What does the variable a signify?
From the ffmpeg scale options docs.
a: the same as iw / ih
where
iw: input width
ih: input height
My guess after reading https://trac.ffmpeg.org/wiki/Scaling%20(resizing)%20with%20ffmpeg is that a is the aspect ratio of the input file.
The example given on the webpage gives you an idea how to use it:
Sometimes there is a need to scale the input image in such a way that it fits into a specified rectangle, i.e. if you have a placeholder (empty rectangle) in which you want to scale any given image. This is a little bit tricky, since you need to check the original aspect ratio in order to decide which component to specify, and to set the other component to -1 (to keep the aspect ratio). For example, if we would like to scale our input image into a rectangle with dimensions of 320x240, we could use something like this:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'" output_320x240_boxed.png
In the ffmpeg wiki "Scaling (resizing) with ffmpeg", they use this example:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'" output.png
The purpose of the gt(a,4/3) is, as far as I can tell, to determine the orientation (portrait or landscape) of the video (or image, in this case).
This wouldn't work for some unusual aspect ratios (7:6, for example), where gt(a,4/3) would incorrectly evaluate to false.
It seems to me better to use the height and width of the video, so the above line would instead be:
ffmpeg -i input.jpg -vf scale="'if(gt(iw,ih),320,-1)':'if(gt(iw,ih),-1,240)'" output.png
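If you want to check which branch fires for a given input, you can generate a test frame of a known aspect ratio with the lavfi color source and inspect the scaled output with ffprobe (a throwaway sketch; file names are arbitrary):
ffmpeg -y -f lavfi -i color=c=red:s=700x600 -vframes 1 in.png
ffmpeg -y -i in.png -vf scale="'if(gt(iw,ih),320,-1)':'if(gt(iw,ih),-1,240)'" out.png
ffprobe -v error -show_entries stream=width,height -of csv=p=0 out.png
Here 700x600 is 7:6, so gt(a,4/3) would have been false, but gt(iw,ih) correctly treats it as landscape and scales the width to 320.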

Create single image of multiple video frames using FFMPEG

I would like to create an image from a couple of frames in a video, i.e. either create multiple images and blend them, or create an image from multiple frames directly. I have been using FFMPEG to get a single frame image using the command below, but cannot find a way to blend a couple of these images and save them as a single image, or to make FFMPEG create the image from multiple frames.
ffmpeg -ss 00:05:40 -i video.AVI -vframes 1 image.jpg
Consider using ImageMagick: it is free and available from imagemagick.org. It has C, C++, Perl and PHP bindings and can be used from the command line.
Depending on how you wish to merge your images, you could use one of these:
convert a.jpg b.jpg c.jpg -evaluate-sequence mean d.jpg
which takes 3 images (a.jpg, b.jpg, c.jpg) to which I artificially added noise, and averages them to create d.jpg, which comes out less noisy. Or maybe you prefer to take the median of the pixels at each point:
convert a.jpg b.jpg c.jpg -evaluate-sequence median d.jpg
Or you could use:
composite -blend 30 foreground.jpg background.jpg blended.jpg
for a 30% blend.
EDITED
If your fish are dark in the image and you want to remove them, you can just choose the lighter of the two pixels at every point in your images, like this:
convert a.jpg b.jpg -compose lighten -composite result.jpg
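Putting the two tools together, a minimal end-to-end sketch (the timestamp, frame count and file names are arbitrary): extract a handful of consecutive frames with ffmpeg, then average them with convert:
ffmpeg -ss 00:05:40 -i video.AVI -vframes 5 frame_%d.jpg
convert frame_*.jpg -evaluate-sequence mean blended.jpg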
