I am generating a series of images for scrolling text, and I need to overlay those images on a video during a specific interval (for example, from 10 to 15 seconds). How can I do that using ffmpeg?
According to FFmpeg developer Stefano Sabatini (writing in 2012), this could not be done directly at the time:
Right now you can't do that using overlay alone; you need a multi-step
process, where you split the file, overlay just in the segments of
interest and merge the modified segments again, or create an ad-hoc video to
overlay which shows the image only in the intervals of interest.
ffmpeg.org/pipermail/ffmpeg-user/2012-January/004062.html
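That answer dates from 2012, and FFmpeg has since gained timeline editing: most filters, including overlay, accept an enable option whose expression is evaluated against the timestamp t. A sketch with a recent build, assuming the scrolling text has been rendered to overlay.png (file names here are placeholders):

```shell
# Show overlay.png on top of input.mp4 only between t=10s and t=15s
ffmpeg -i input.mp4 -i overlay.png \
  -filter_complex "[0:v][1:v]overlay=x=0:y=0:enable='between(t,10,15)'" \
  -c:a copy output.mp4
```

If the text is a numbered image sequence rather than a single image, replace `-i overlay.png` with something like `-framerate 25 -i text%03d.png` so the frames play back as an animation.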
I'm trying to automate the process of adding a video (with a drop shadow) on top of a background image. The video is slightly smaller than the image; both sizes are known and fixed.
I have a video and a few images. I know two places in the video where I want to paste these images. But they shouldn't have a fixed position and size; instead, the images should move, tilt, and scale. For example, imagine a closed book: you want to overlay its title as the book slowly opens.
I read the FFmpeg documentation but didn't find anything about this. Can FFmpeg do this? If not, which libraries or techniques can?
The FFmpeg overlay filter can overlay one stream atop another.
It takes an expression which is evaluated per frame to determine the position.
https://ffmpeg.org/ffmpeg-filters.html#overlay-1
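For example, the x and y expressions can reference the timestamp t and the main/overlay dimensions W, H, w, h, so the overlay position can change every frame. A sketch (file names hypothetical):

```shell
# Slide logo.png in from the left edge, pinned near the bottom of the frame
ffmpeg -i input.mp4 -i logo.png \
  -filter_complex "[0:v][1:v]overlay=x='-w+(t*100)':y=H-h-10" \
  output.mp4
```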
You may consider creating a filter chain to do the following.
1) Create a transparent image with the title of your book.
2) Use a 3D rotate filter to convert the single image into an animated sequence.
3) Use the overlay filter to apply the animated stream atop your book video.
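FFmpeg has no built-in 3D rotate, so a true tilt would need frei0r or the perspective filter, but steps 2 and 3 can be sketched with the 2D rotate filter, whose angle expression is re-evaluated each frame (file names hypothetical; title.png must have an alpha channel):

```shell
# Spin the transparent title image over time and overlay it centered on the video
ffmpeg -i book.mp4 -loop 1 -i title.png -filter_complex \
  "[1:v]rotate=a='t*PI/6':c=none:ow='hypot(iw,ih)':oh=ow[ttl]; \
   [0:v][ttl]overlay=x=(W-w)/2:y=(H-h)/2:shortest=1" out.mp4
```

`c=none` keeps the area outside the rotated image transparent, and `shortest=1` ends the output when the book video ends rather than looping the still image forever.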
With ffmpeg, you can:
create a video from a list of images
create an image with tiles representing frames of a video
But how is it possible to create a video from tiles in a picture representing frames of a video?
If I have this command line:
ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" out.png
I will get an image (out.png) made of 12x25 tiles of 320x240 pixels each.
I am trying to reverse the process and, from that image, generate a video.
Is it possible?
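Yes, on recent builds: FFmpeg 4.3 added an untile filter that reverses tile. A sketch, assuming the 12x25 layout from the command above (you may need to adjust the input/output rates for your case):

```shell
# Decompose the mosaic back into 300 frames of 320x240 and encode at 25 fps
ffmpeg -r 1 -i out.png -vf "untile=12x25" -r 25 video.mp4
```

On older builds you would have to crop each tile out individually and reassemble.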
Edit with more details:
What I am really trying to achieve is converting a video into a GIF preview. To make an acceptable GIF, I need to build a common palette. So either I scan the movie twice, which would take very long since I have to do this for a large batch, or I make a tiled image with all the frames in a single image, then make a GIF with a palette computed from that single image, which would be significantly faster... if possible.
Is it possible to do something like this purely with ffmpeg?
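For the underlying goal, FFmpeg's palettegen and paletteuse filters do exactly this without the tiling detour: palettegen reads the whole stream once and emits a single 256-color palette image, and paletteuse then maps the frames through it. It still decodes the input twice, but decoding is usually far cheaper than re-encoding a giant mosaic. A sketch (sizes and fps are illustrative):

```shell
# Pass 1: compute one global palette from all frames
ffmpeg -i input.mp4 -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png

# Pass 2: produce the GIF using that palette
ffmpeg -i input.mp4 -i palette.png \
  -filter_complex "fps=10,scale=320:-1:flags=lanczos[v];[v][1:v]paletteuse" output.gif
```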
Let's say we have a text file with frame-by-frame coordinates for the four corners where the image should go. ffmpeg has a perspective filter, but how would one get ffmpeg to pull the frame coordinates from the text file? I'm guessing with a pipe of some sort?
The perspective filter corrects the input's perspective, it doesn't apply a perspective effect. Applied to an overlay it results in a rectangular overlay with a corrected perspective.
The closest you can get with the already implemented filters is via the frei0r perspective module.
You can write your own filter for ffmpeg or a frei0r module.
Update: using @Mulvya's tip, you can use timeline editing with perspective:
perspective=enable='eq(n,0)':x0=...,perspective=enable='eq(n,1)':x0=...
where n is the current frame number.
This will result in an impossibly long command line which may exceed the system limit. You're still better off writing your own filter.
You can alternatively do one frame at a time with a different command, save the output as an image and re-assemble the video at the end.
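The per-frame filtergraph above can be generated from the coordinates file with a small script. A sketch, assuming a hypothetical coords.txt with one line per frame containing "x0 y0 x1 y1 x2 y2 x3 y3" (top-left, top-right, bottom-left, bottom-right):

```shell
#!/bin/sh
# Create a tiny sample coordinates file (two frames, made-up values),
# standing in for the real per-frame data.
printf '0 0 640 0 0 480 640 480\n10 5 630 0 0 480 640 480\n' > coords.txt

# Chain one perspective filter per frame, each enabled only on its frame n.
FILTER=""
n=0
while read -r x0 y0 x1 y1 x2 y2 x3 y3; do
  [ -n "$FILTER" ] && FILTER="$FILTER,"
  FILTER="${FILTER}perspective=enable='eq(n,$n)':x0=$x0:y0=$y0:x1=$x1:y1=$y1:x2=$x2:y2=$y2:x3=$x3:y3=$y3"
  n=$((n+1))
done < coords.txt

# Print the command instead of running it; drop the echo to execute.
echo ffmpeg -i in.mp4 -vf "$FILTER" out.mp4
```

For more than a handful of frames, write the filtergraph to a file and pass it with -filter_complex_script to dodge the command-line length limit.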
Using ffmpeg I can take a number of still images and turn them into a video. I would like to do this to decrease the total size of all my timelapse photos. But I would also like to extract the still images for use at a later date.
In order to use this method:
- I will need to correlate the original still image against a frame number in the video.
- And I will need to extract a thumbnail of a given frame number in the video.
But before I go down this rabbit hole, I want to know if the requirements are possible using ffmpeg, and if so any hints on how to accomplish the task.
note: The still images are a timelapse from a single camera over one day, so temporal compression should yield measurable savings compared to a stack of JPEGs.
When you use ffmpeg to create a video from a sequence of images, the images aren't affected in any way. You should still be able to use them for what you're trying to do, unless I'm misunderstanding your question.
Edit: You can use ffmpeg to create images from an existing video. I'm not sure how well it will work for your purposes, but the images are pretty high quality, if not identical to the originals. You'd have to experiment to make sure the extracted images match the inputs in sequential order and naming, but if you take the fps into account, it should work.
The command to do this (from the ffmpeg documentation) is as follows:
ffmpeg -i movie.mpg movie%d.jpg
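That extracts every frame. For the second requirement above, pulling a single frame by number, the select filter can pick it out directly (a sketch; file name and frame number are placeholders, and n is 0-based):

```shell
# Extract only frame number 123 of the video as a thumbnail
ffmpeg -i movie.mp4 -vf "select='eq(n,123)'" -frames:v 1 thumb123.jpg
```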