I have an image that represents a short animation as 40 frames laid out in 5 rows and 8 columns. How can I use ffmpeg to generate a video from this?
I've read that answer about generating a video from a list of images, but I'm unsure how to tell ffmpeg to read parts of a single image in sequence.
As far as I know, there is no built-in way of doing this with ffmpeg. But you could first extract all the frames using two nested for loops and an ImageMagick crop, and then use ffmpeg to generate the video from the extracted files, as sketched below.
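A minimal sketch of that two-step approach, assuming the sheet is named sheet.png, the target frame rate is 12 fps, and the sheet dimensions divide evenly by 8 and 5 (all of these are assumptions to adjust):

W=$(identify -format '%w' sheet.png)   # sheet width
H=$(identify -format '%h' sheet.png)   # sheet height
TW=$((W / 8))                          # tile width  (8 columns)
TH=$((H / 5))                          # tile height (5 rows)
i=0
for row in $(seq 0 4); do
  for col in $(seq 0 7); do
    # cut one tile out of the sheet and save it as a numbered frame
    convert sheet.png -crop "${TW}x${TH}+$((col * TW))+$((row * TH))" +repage "$(printf 'frame_%02d.png' "$i")"
    i=$((i + 1))
  done
done
ffmpeg -framerate 12 -i frame_%02d.png -pix_fmt yuv420p out.mp4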
You can use an animated crop to do this. The basic template is:
ffmpeg -loop 1 -i image -vf "crop=iw/8:ih/5:mod(n,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4
Basically, the crop extracts a window of iw/8 x ih/5 each frame, and the coordinates of the top-left corner of the crop window are animated by the 3rd and 4th arguments, where n is the frame index (starting from 0).
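If the animation should play back at a specific rate, say 12 fps (an assumed value, adjust to your animation), set it on the looped image input rather than converting the output rate afterwards:

ffmpeg -loop 1 -framerate 12 -i image -vf "crop=iw/8:ih/5:mod(n,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4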
I have N input animation frames as images in a folder and I want to create interpolated in-between frames to produce a smoother animation of length N * M, i.e. for every input frame I want to create M output frames that gradually morph into the next frame, e.g. with the minterpolate filter.
In other words, I want to increase the FPS M times, but I am not working with time as such, since neither the input nor the output is a video format; both are image sequences stored as image files.
I was trying to combine the -r option and the fps filter, but without success, as I don't know how they work together. For example:
I have 12 input frames.
I want to use the minterpolate filter to achieve 120 frames.
I use the command ffmpeg -i frames/f%04d.png -vf "fps=10, minterpolate" -r 100 interpolated_frames/f%04d.png
The result I get is 31 output frames.
Is there a specific combination of -r and FPS I should use? Or is there another way I can achieve what I need?
Thank you!
FFmpeg assigns a framerate of 25 to formats which don't have an inherent frame rate, like image sequences.
The image sequence demuxer has an option to set the frame rate, and the minterpolate filter has an option for the target fps:
ffmpeg -framerate 12 -i frames/f%04d.png -vf "minterpolate=fps=120" interpolated_frames/f%04d.png
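Since 12 input frames read at 12 fps make a one-second clip, minterpolate=fps=120 yields roughly the 120 frames you asked for. A quick sanity check, assuming the output directory was empty beforehand:

ls interpolated_frames | wc -l   # should print 120 (or very close to it)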
How to compare images to find the least blurry image?
I want to automatically generate an image/thumbnail from a video.
I use ffmpeg for that.
However, once in a while the image is totally blurred and I want to get rid of the blurry images.
My idea was to create multiple images per video and then compare the images to each other.
Now the question:
Is there a way to compare the blurriness of images?
I had a similar problem when choosing a thumbnail for a video.
My solution was to take 10 screenshots, one second apart, and choose the one with the largest file size.
Making screenshots with ffmpeg:
ffmpeg -y -hide_banner -loglevel panic -ss *secondsinmovie* -i movie.mp4 -frames:v 1 -q:v 2 screenshot.jpg
Then, to fine-tune it, take that second, iterate over the frames around it, and again choose the largest file size.
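A rough shell sketch of that first pass, assuming the file is movie.mp4 and is at least 10 seconds long (the stat invocation is GNU; use stat -f %z on macOS/BSD):

best=""
best_size=0
for s in $(seq 0 9); do
  # grab one frame at second $s
  ffmpeg -y -hide_banner -loglevel panic -ss "$s" -i movie.mp4 -frames:v 1 -q:v 2 "shot_$s.jpg"
  size=$(stat -c %s "shot_$s.jpg")        # file size in bytes
  if [ "$size" -gt "$best_size" ]; then
    best_size=$size
    best="shot_$s.jpg"
  fi
done
echo "Best candidate by file size: $best"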
With ffmpeg, you can:
create a video from a list of images
create an image with tiles representing frames of a video
But how is it possible to create a video from the tiles in a picture that represent the frames of a video?
If I have this command line:
ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" out.png
I will get an image (out.png) made of 12x25 tiles of 320x240 pixels each.
I am trying to reverse the process and, from that image, generate a video.
Is it possible?
Edit with more details:
What I am really trying to achieve is to convert a video into a GIF preview. But in order to make an acceptable GIF, I need to build a common palette. So either I scan the movie twice, which would take very long since I have to do it for a large batch, or I make a tiled image with all the frames in a single picture and then make a GIF with a palette computed from all the frames, which would be significantly faster... if possible.
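One way to attempt this, building on the animated-crop answer earlier on this page: the tiled image produced above holds 12x25 = 300 frames of 320x240 each, so an animated crop can walk through them in order (the 25 fps playback rate is an assumption):

ffmpeg -loop 1 -framerate 25 -i out.png -vf "crop=320:240:mod(n,12)*320:trunc(n/12)*240" -vframes 300 video_from_tiles.mp4

From there, the shared GIF palette could be computed from the single tiled image with the palettegen filter, so the original movie only has to be read once.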
I'm trying to use FFmpeg to create a video with one video overlaid on top of another.
I have 2 MP4s. I need to make all BLACK pixels in the overlay video transparent so that I can see the main video underneath it.
I found two ways to overlay one video on another:
First, the following positions the overlay in the center, and therefore, hides that portion of the main video beneath it:
ffmpeg -i 1.mp4 -vf "movie=2.mp4 [a]; [in][a] overlay=352:0 [b]" combined.mp4 -y
And this one places the overlay video on the left, but its opacity is set to 50% so at least the one beneath it is visible:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS, format=yuva420p,colorchannelmixer=aa=0.5[bottom]; [top][bottom]overlay=shortest=0" -acodec libvo_aacenc -vcodec libx264 out.mp4 -y
My goal is simply to make all black pixels in the overlay (2.mp4) completely transparent. How can this be done?
The notional way to do this is to chroma-key the black out and then overlay, but as @MoDJ said, this likely won't produce satisfactory results. Neither will the method I suggest below, but it's worth a try.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex
"[1]split[m][a];
[a]geq='if(gt(lum(X,Y),16),255,0)',hue=s=0[al];
[m][al]alphamerge[ovr];
[0][ovr]overlay"
output.mp4
Above, I duplicate the overlay video stream, then use the geq filter to manipulate the luma values so that any pixel with luma greater than 16 (i.e. not pure black) has its luma set to white, else zero. Since I haven't provided expressions for the two color channels, geq falls back on the luma expression. We don't want that, so I use the hue filter to nullify those channels. Then I use the alphamerge filter to merge this as an alpha channel with the first copy of the overlay video. Then, the overlay. Like I said, this may not produce satisfactory results. You can tweak the value 16 in the geq filter to change the black threshold. Suggested range is 16-24 for limited-range (Y: 16-235) video files.
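For comparison, the chroma-key route mentioned at the top can be sketched with the colorkey filter; the similarity and blend values (0.1 each) are guesses you would need to tune, and dark-but-not-black pixels will still punch through:

ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[1:v]colorkey=black:0.1:0.1[ovr];[0:v][ovr]overlay" -c:v libx264 out.mp4 -y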
You will not be able to get a "replace black pixels" approach to work properly. What you actually want is a foreground video with a real alpha channel that can be manipulated and tested before doing an overlay on a background. For an extended example that describes the problems, please take a look at my blog post on the subject. When using FFmpeg, an easy way to import alpha-channel video is to use QuickTime with the Animation codec at 32 BPP.
I'm trying to extract a thumbnail image from a video's keyframes using ffmpeg; my command line is:
ffmpeg -i video.mp4 -vframes 1 -s 200x200 -vf select="eq(pict_type\,PICT_TYPE_I)" -vsync 0 -f image2 video.jpg
But the keyframe it extracts is totally black (the video starts with a black frame, I guess)... Is there a way to automatically extract the first non-black keyframe from a video, without seeking to a specific time (I have to manage multiple videos of varying durations)?
Thanks
I cannot think of a solution using ffmpeg alone. But if you extract the first few keyframes (by turning -vframes up to 20, for example) they could then be analyzed with ImageMagick. Reducing the image to one grayscale color will pick the average gray value from the picture. A command line like
convert avatar.jpeg -colors 1 -type grayscale -format '%c' histogram:info:
which will produce an output like
16384: ( 80, 80, 80) #505050 gray(80)
(I used Simone's avatar picture for an example.) The last number is the most interesting for your case. It expresses how dark the image is, with 0 for ideal black and 255 for pure white. A sed script can easily extract it
convert ... | sed 's/^.*(\(.*\))$/\1/'
Mix it up with some shell scripting to find the first image that has a gray value higher than a given threshold and use it as the thumbnail.
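Putting those pieces together, a rough sketch (the keyframe count of 20 and the gray threshold of 40 are arbitrary values to adjust):

ffmpeg -i video.mp4 -vf select="eq(pict_type\,PICT_TYPE_I)" -vsync 0 -vframes 20 -f image2 kf_%02d.jpg
for f in kf_*.jpg; do
  # average gray value of the frame, 0 = black, 255 = white
  gray=$(convert "$f" -colors 1 -type grayscale -format '%c' histogram:info: | sed 's/^.*(\(.*\))$/\1/')
  if [ "$gray" -gt 40 ]; then
    cp "$f" thumbnail.jpg   # first keyframe brighter than the threshold
    break
  fi
done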
With the option thumbnail=num_frames you can choose which frame is extracted, but I don't know if it is possible to extract the first non-black keyframe. http://ffmpeg.org/ffmpeg.html#thumbnail
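For reference, a minimal use of that filter; it picks the most representative frame out of each batch of n frames (n=100 here is an arbitrary choice), which tends to avoid atypical frames such as all-black ones but does not guarantee it:

ffmpeg -i video.mp4 -vf thumbnail=100 -frames:v 1 thumb.jpg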