I would like to create an image from a couple of frames in a video, i.e. either create multiple images and blend them, or create an image directly from multiple frames. I have been using FFmpeg to extract a single frame with the command below, but I cannot find a way to blend several such images into a single image, or to make FFmpeg create the image from multiple frames.
ffmpeg -ss 00:05:40 -i video.AVI -vframes 1 image.jpg
Consider using ImageMagick - it is free and available from imagemagick.org. It has C, C++, Perl and PHP bindings and can be used from the command line.
Depending on how you wish to merge your images, you could use one of these:
convert a.jpg b.jpg c.jpg -evaluate-sequence mean d.jpg
which takes three images (a.jpg, b.jpg, c.jpg) to which I artificially added noise, and averages them to create d.jpg, which is less noisy. Or maybe you prefer to take the median of the pixels at each point:
convert a.jpg b.jpg c.jpg -evaluate-sequence median d.jpg
Or you could use:
composite -blend 30 foreground.jpg background.jpg blended.jpg
for a 30% blend.
EDITED
If your fish are dark in the image and you want to remove them, you can just choose the lighter of the two pixels in your images at every point like this:
convert a.jpg b.jpg -compose lighten -composite result.jpg
Related
I have an image that represents a short animation as 40 frames in 5 rows and 8 columns. How can I use ffmpeg to generate a video from this?
I have read the answer about generating a video from a list of images, but I am unsure how to tell ffmpeg to read parts of a single image in sequence.
As far as I know, there is no built-in way of doing this with ffmpeg. But you could first extract all the frames using two nested for loops and an ImageMagick crop, and then use ffmpeg to generate the video from the extracted files.
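The nested-loop idea could look something like this (a sketch, assuming the tiled image is tiles.png and each tile is 320x240; adjust to your frame size):

```shell
# Cut the 5x8 contact sheet into 40 numbered frames, then encode them.
W=320; H=240                       # size of one tile (assumption)
i=0
for row in 0 1 2 3 4; do           # 5 rows
  for col in 0 1 2 3 4 5 6 7; do   # 8 columns
    convert tiles.png -crop ${W}x${H}+$((col*W))+$((row*H)) +repage \
            frame-$(printf '%03d' "$i").png
    i=$((i+1))
  done
done
ffmpeg -framerate 25 -i frame-%03d.png out.mp4
```

The +repage drops the page offset that ImageMagick would otherwise remember from the crop.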
You can use an animated crop to do this. Basic template is,
ffmpeg -loop 1 -i image -vf "crop=iw/8:ih/5:mod(n,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4
Basically, the crop extracts a window of iw/8 x ih/5 each frame, and the coordinates of the top-left corner of the crop window are animated by the 3rd and 4th arguments, where n is the frame index (starting from 0).
I am using the ImageMagick montage command-line tool to merge two JPEG images. The output JPEG contains the strip that is common to both inputs. Below is the command to merge two JPEGs:
montage -geometry 500 input1.jpg input2.jpg output.jpg
How can I avoid the common area in the output file?
Is there any other tool available to auto-merge the two images?
In ImageMagick, you can simply append the two images side by side or top/bottom.
convert image1.jpg image2.jpg -append result.jpg
will do top/bottom
convert image1.jpg image2.jpg +append result.jpg
will do left/right.
You can append as many images as you want, of different sizes. You can use the -gravity setting to align them as desired. If the sizes differ, you will get background regions, whose color you can control with -background somecolor. If desired, you can resize the images by adding -resize 500 after reading the inputs and before the append.
See http://www.imagemagick.org/Usage/layers/#append
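Combining those options, a sketch (the filenames and the 500-pixel width are assumptions):

```shell
# Resize both inputs to 500px wide, centre-align them, fill any
# leftover area with black, and stack them top/bottom.
convert image1.jpg image2.jpg -resize 500 \
        -background black -gravity center -append result.jpg
```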
I suspect you are trying to make a panoramic by stitching two images with an area of common overlap.
So, if we start with left.png:
and right.png:
You probably want this:
convert left.png -page +200+0 right.png -mosaic result.png
Just so you can see what happens if I change the x-offset and also how to add a y-offset:
convert left.png -page +280+30 right.png -mosaic result.png
If you want to do what Mark Setchell is suggesting, then using -page is probably the best method if you have more than one image to merge and the offsets are different. If you only have one pair of images, you can overlap them using +smush in ImageMagick. It is like +append, but allows either an overlap or a gap according to the sign of the argument. Unlike -page, it only shifts in one direction, according to +/- smush. Using Mark's images,
convert left.jpg right.jpg +smush -400 result.jpg
With ffmpeg, you can:
create a video from a list of images
create an image with tiles representing frames of a video
But how is it possible to create a video from tiles, in a picture, representing the frames of a video?
If I have this command line:
ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" out.png
I will get an image (out.png) made of 12x25 tiles of 320x240 pixels each.
I am trying to reverse the process and, from that image, generate a video.
Is it possible?
Edit with more details:
What I am really trying to achieve is to convert a video into a GIF preview. In order to make an acceptable GIF, I need to build a common palette. So either I scan the movie twice, which would take very long since I have to do it for a large batch, or I make a tiled image with all the frames in a single image, then make a GIF with a palette computed from all the frames, which would be significantly faster... if possible.
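For the GIF-palette use case specifically, the tiled image could feed ffmpeg's palettegen filter directly, so the video only needs one more pass for paletteuse (a sketch, assuming the contact sheet produced by the tile command above is out.png):

```shell
# 1. Compute a single 256-colour palette from the contact sheet,
#    which contains (scaled copies of) all the frames at once.
ffmpeg -i out.png -vf palettegen palette.png

# 2. Convert the video to GIF using that shared palette.
ffmpeg -i test.mp4 -i palette.png \
       -filter_complex "[0:v]scale=320:240[v];[v][1:v]paletteuse" out.gif
```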
I'm playing with this creative script: http://www.fmwconcepts.com/imagemagick/transitions/. The plan is to mimic what the script does with ffmpeg and generate video with transition effects between pictures. My current understanding is this:
I have two pictures A and B.
I need a couple of in-between pictures (say 15) that are partially A and partially B.
To do that I use the composite -compose src-over A.jpg B.jpg mask-n.jpg out.jpg command.
During the process, mask-n.jpg is generated automatically, gradually changing from all black to all white.
Depending on the mathematical equation used, the transition effect looks different.
In one of the examples, Fred the author gives this:
convert -size 128x128 gradient: maskfile.jpg
This will generate an image like this:
This is partially black and partially white. For the transition to work, I'll need an all white one and an all black one and a couple of others in between. What's the magical command to do that?
I have re-read your question and I am still not sure I understand, but maybe you want a dark grey to light grey gradient:
convert -size 128x128 gradient:"rgb(40,40,40)-rgb(200,200,200)" greygrad.png
Not sure I understand what you are trying to achieve, but if you want an all black one, use:
convert -size 128x128 xc:black black.jpg
and an all white one:
convert -size 128x128 xc:white white.jpg
and a grey one:
convert -size 128x128 xc:gray40 gray40.jpg
If you want to join them for transitions, use
convert im1.jpg im2.jpg -append result.jpg
or use +append to join side by side instead of above and below.
Consider using PNG instead of JPEG throughout.
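The in-between masks asked about above could be produced as a sequence of flat grey frames, e.g. (a sketch; the 15-step count and the mask-NN naming are assumptions):

```shell
# Generate mask-00.png (all black) through mask-15.png (all white).
steps=15
i=0
while [ "$i" -le "$steps" ]; do
  pct=$((100 * i / steps))   # 0% black up to 100% white
  convert -size 128x128 xc:"gray($pct%)" mask-$(printf '%02d' "$i").png
  i=$((i+1))
done
```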
Fred explains how the script works at the bottom of the page you linked to, with some example code.
According to his explanation there is only the one mask image, as:
The mask image is gradually made lighter
I'm trying to extract a thumbnail image from a video keyframes using ffmpeg, my command line is:
ffmpeg -i video.mp4 -vframes 1 -s 200x200 -vf select="eq(pict_type\,PICT_TYPE_I)" -vsync 0 -f image2 video.jpg
But the keyframe it extracts is totally black (the video starts with a black frame, I guess)... Is there a way to automatically extract the first non-black keyframe from a video, without seeking to a specific time (I have to manage multiple videos of varying durations)?
Thanks
I cannot think of a solution using ffmpeg alone. But if you extract the first few keyframes (by turning up -vframes, to 20 for example), they can then be analyzed with ImageMagick. Reducing the image to a single grayscale color picks the average gray value of the picture. A command line like
convert avatar.jpeg -colors 1 -type grayscale -format '%c' histogram:info:
which will produce an output like
16384: ( 80, 80, 80) #505050 gray(80)
(I used Simone's avatar picture as an example.) The last number is the most interesting for your case. It expresses how dark the image is, with 0 for ideal black and 255 for pure white. A sed script can easily extract it:
convert ... | sed 's/^.*(\(.*\))$/\1/'
Mix it up with some shell scripting to find the first image that has a gray value higher than a given threshold and use it as the thumbnail.
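Putting those pieces together, a sketch (the threshold of 16 and the file names are assumptions):

```shell
# Dump the first 20 keyframes, then keep the first one whose mean
# grey value is above the threshold.
ffmpeg -i video.mp4 -vf select="eq(pict_type\,PICT_TYPE_I)" \
       -vsync 0 -vframes 20 kf-%02d.jpg
threshold=16
for f in kf-*.jpg; do
  gray=$(convert "$f" -colors 1 -type grayscale -format '%c' histogram:info: \
         | sed 's/^.*(\(.*\))$/\1/')
  if [ "$gray" -gt "$threshold" ]; then
    cp "$f" thumbnail.jpg    # first keyframe that is not (nearly) black
    break
  fi
done
```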
With the thumbnail filter (thumbnail=n) you can influence which frame is extracted, but I don't know if it is possible to extract the first non-black keyframe. http://ffmpeg.org/ffmpeg.html#thumbnail