How to use timeline editing with a single image input in ffmpeg?

A small image should be animated over a background video in a simple way:
change position - move along a straight line, no easing, starting at frame A and ending at frame B (e.g. frames 11 to 31);
zoom in - between frames C and D (e.g. 45 and 55).
Filters I intend to use:
the overlay filter has x and y parameters for the image position;
the zoompan filter allows zooming (preceded by a static scale-up to avoid jitter).
My filtergraph:
video.avi >----------------------------------->|-------|
                                               |overlay|-> out.mp4
image.png >-> scale >-> zoompan >-> zoompan >->|-------|
The problem is with timeline editing. Both filters support the enable option. I thought I could add instructions like enable='between(n, 11, 31)' to "place" the animations at the right times.
It appears that the image input has only two values of n: 0 and 1. I checked that by wrapping n in print(n) inside the zoompan filter so it is printed during rendering.
Inside the overlay filter, by contrast, n runs through a sequence of numbers as expected.
Question: how can I make the single image input "look" like a normal video stream to ffmpeg filters, so that every generated frame has its own unique number?
One of my latest tests: the video is hd720, the image is a 1000x200 transparent png with the logo occupying roughly a 150x50 area in the center, so it is not cropped out when zoomed in.
ffmpeg -i $FOOTAGE -loop 1 -i $IMAGE -filter_complex \
"
[1:v]
scale=10*iw:-2
,zoompan=
z='1'
:x='iw/2-(iw/zoom/2)+80'
:y='ih/2-(ih/zoom/2)'
:d=26
:s=500x100
:enable='lt(print(n),24)'
,zoompan=
z='min(zoom+1.3/18,2.3)'
:x='iw/2-(iw/zoom/2)'
:y='ih/2-(ih/zoom/2)'
:d=20
:s=500x100
:enable='between(n,24,42)'
[name];
[0:v][name]
overlay=
x=1005-250
:y=406-50
:enable='lte(n,173)'
" -t 7 -y -hide_banner out.mp4

It appears the zoompan filter does not support timeline editing. As of commit aa26258f (August 27, 2017), the ffmpeg documentation no longer lists zoompan as a timeline-enabled filter.
The workaround is to write expressions that depend on the in ("input frame count") variable and output the desired zoom factor.
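A minimal sketch of that workaround (untested; it reuses the frame numbers 24 and 42 and the zoom step 1.3/18 from the test command above): replace the two zoompan instances with a single one, drop the enable options, set d=1 so each looped input frame yields one output frame, and let the z expression itself switch on in, carrying the previous zoom forward with pzoom. The zoom then stays at 1 until frame 24, ramps up to 2.3, and holds:
[1:v]
scale=10*iw:-2
,zoompan=
z='if(lt(in,24), 1, min(pzoom+1.3/18,2.3))'
:x='iw/2-(iw/zoom/2)'
:y='ih/2-(ih/zoom/2)'
:d=1
:s=500x100
[name]
The position animation can stay in the overlay x/y expressions, where n counts frames normally (as observed in the question).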

Related

Using blurdetect to filter blurred keyframes in ffmpeg5

I want to extract the keyframes of a video with ffmpeg and determine whether each keyframe is blurred, using a predefined threshold. I noticed the new blurdetect filter in ffmpeg5, so I tried the following command:
ffmpeg -i test.mp4 -filter_complex "select=eq(pict_type,I),blurdetect=block_width=32:block_height=32:block_pct=80" -vsync vfr -qscale:v 2 -f image2 ./I_frames_ffmpeg/image%08d.jpg
Using this command I can get the keyframes, and at the end the terminal prints the average blur value of those frames.
My question is, can I use the blurdetect filter to get the blur value for each frame? Can I use this blur value as a precondition for keyframe selection, e.g. only select this frame as a keyframe if the blur value is less than 5?
Yes, the blurdetect filter pushes the blur value of each frame to frame metadata, which you can capture with the metadata filter. Try the following filtergraph:
select=eq(pict_type,I),\
blurdetect=block_width=32:block_height=32:block_pct=80,\
metadata=print:file=-
The metadata filter outputs to stdout, so you'll see 2 lines for each frame like:
frame:1295 pts:1296295 pts_time:43.2098
lavfi.blur=4.823009
Note that the terminal may get cluttered with other logs, but these lines should be the only ones actually on stdout (the standard logs go to stderr), so you should be able to capture them easily. From there, a simple regex should let you retrieve the blur values.
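If you want to collect those values programmatically, a rough sketch (untested; it relies on the metadata lines being the only thing on stdout, as described above, and reuses the file names from the question) is to discard stderr and grep for the key:
ffmpeg -i test.mp4 -filter_complex "select='eq(pict_type,I)',blurdetect=block_width=32:block_height=32:block_pct=80,metadata=print:file=-" -vsync vfr -qscale:v 2 -f image2 ./I_frames_ffmpeg/image%08d.jpg 2>/dev/null | grep 'lavfi.blur'
Note the quotes around eq(pict_type,I) here, which keep the comma inside the expression from being treated as a filter separator.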
Can I use this blur value as a precondition for keyframe selection, e.g. only select this frame as a keyframe if the blur value is less than 5?
I believe (not verified) that metadata filter can do exactly this:
metadata=select:key=lavfi.blur:value=5:function=less
Not the best documentation, but it's all there
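Putting it together, a rough end-to-end sketch (not verified; the threshold 5 and the paths are taken from the question) would be:
ffmpeg -i test.mp4 -filter_complex "select='eq(pict_type,I)',blurdetect=block_width=32:block_height=32:block_pct=80,metadata=select:key=lavfi.blur:value=5:function=less" -vsync vfr -qscale:v 2 -f image2 ./I_frames_ffmpeg/image%08d.jpg
This should write only the keyframes whose lavfi.blur value is below 5.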

Using FFMPEG to create animated GIF from series of images and insert text for each image

I am generating an animated gif from a series of png's labeled img_00.png, img_01.png, etc. I want to insert text in the top right corner of the animated gif, for each frame generated from a png, to display some specific information. For example, say I have 3 pngs, img_00, img_01, and img_02; what I want from the gif is:
For frame generated from img_00, display "This is from img_00".
For frame generated from img_01, display "This is from img_01".
For frame generated from img_02, display "This is the last image generated from img_02!".
So far I have been messing around with the drawtext option (assuming framerate=1):
ffmpeg -f image2 -framerate 1 -i img_%02d.png -filter_complex "drawtext=enable='between(t,0,1)':text='word1':fontsize=24:fontcolor=white:x=w-tw:y=0,drawtext=enable='between(t,1,2)':text='word2':fontsize=24:fontcolor=white:x=w-tw:y=0" out.gif
But I am getting "word1" and "word2" overlapped on top of each other. Is there a better way of doing this, or some way to fix drawtext so the overlap doesn't happen?
between(t,0,1) and between(t,1,2) overlap at t=1. Either the end time of the first range or the start time of the second range should be adjusted; e.g. you can make the first range between(t,0,0.9).
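For example, applied to the command from the question (only the first enable range changes):
ffmpeg -f image2 -framerate 1 -i img_%02d.png -filter_complex "drawtext=enable='between(t,0,0.9)':text='word1':fontsize=24:fontcolor=white:x=w-tw:y=0,drawtext=enable='between(t,1,2)':text='word2':fontsize=24:fontcolor=white:x=w-tw:y=0" out.gif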

Why does the zoom effect only apply to the first image?

ffmpeg -i img%03d.jpeg -i 1.mp3 \
-vf "zoompan=z='zoom+0.002':d=25*5:s=1280x800" -pix_fmt yuv420p -c:v libx264 \
-t 01:05:00 out12345.mp4
I have 3 images and 1 audio file, and I am trying to create a video in which each image gets the zoom effect.
Here is what I am getting: the first image shows the zoom effect, then the 2nd image shows up for a split second, and then the last image stays without any effect.
What am I doing wrong?
The zoompan filter operates per frame, so normally the command should produce the desired result, i.e. each frame gets zoomed in over 125 frames.
However, when an image in the stream has different properties, the filtergraph is reinitialized, so a new zoompan instance is created, which starts on the changed frame as if from scratch. This new set of output frames has the same timestamps as frames already output, so they are dropped.
There are two workarounds to prevent reinitialization:
1) make sure all frames in the input are uniform in properties (a sketch of this approach follows below)
or
2) forcibly prevent reinitialization by adding -reinit_filter 0 before the input. Only a few filters can handle frames with changing properties, so avoid doing this unless you are sure.
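For workaround 1, a rough sketch (untested; the 1280x800 size matches the zoompan output size used above, and the file names are only examples) is to pre-process the images into a uniform set first, then run zoompan on that set:
# make every image the same size, aspect and SAR (repeat or loop over the images)
ffmpeg -i img001.jpeg -vf "scale=1280:800:force_original_aspect_ratio=decrease,pad=1280:800:(ow-iw)/2:(oh-ih)/2,setsar=1" uniform/img001.png
# then run the original command on the uniform set
ffmpeg -i uniform/img%03d.png -i 1.mp3 -vf "zoompan=z='zoom+0.002':d=25*5:s=1280x800" -pix_fmt yuv420p -c:v libx264 out12345.mp4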

ffmpeg how to crop and scale at the same time?

I'm trying to convert a video with black bars to one without, and if the source is 4k, I want the video to be converted to 1080p.
Now to do this, I'm using the following command:*
ffmpeg -i input ... -filter:v "crop=..." -filter:V "scale=1920:-1" output
But running this, I found that the end product still has said black bars and is 1920x1080 as opposed to the 1920x800 I'd expect.
What gives, why does this not work?
*: Other settings have been left out for convenience.
I got it to work by putting both the crop and the scale in the same -vf option. I was cropping and then increasing the size of an old video game, and I just did this:
-vf crop=256:192:2:16,scale=-2:1080:flags=neighbor
I knew it worked as soon as I saw it display the output file size as 1440x1080 (4:3 ratio at 1080p).
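For the original 4k-with-black-bars case, a sketch along the same lines (the crop numbers here are placeholders; measure the bars or use the cropdetect filter to find them) would be:
ffmpeg -i input.mp4 -vf "crop=3840:1600:0:280,scale=1920:-2" output.mp4
That crops a 3840x2160 source down to its 3840x1600 picture area and then scales it to 1920x800.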

FFmpeg fade effects between frames

I want to create a slideshow of my images with fade-in and fade-out transitions between them, and I am using the FFmpeg fade filter.
If I use the command:
ffmpeg -i input.mp4 -vf "fade=in:5:8" output.mp4
to create the output video with a fade effect, it gives an output video whose first 5 frames are black and then the images are shown with a fade-in effect, but I want a fade-in/fade-out effect at each image change.
How can I do that?
Please suggest a solution for a CentOS server, because that is where I am running FFmpeg.
To create a video with fade effects, break the job into parts and create a separate video for each image. For instance, if you have 5 images, first create 50-60 copies of each image and turn each set into a video:
$command= "ffmpeg -r 20 -i images/%d.jpg -y -s 320x240 -aspect 4:3 slideshow/frame.mp4";
exec($command." 2>&1", $output);
This will give you 5 different videos. Then take 10-12 copies of each of those five images and again create separate videos, this time with fade effects:
ffmpeg -i input.mp4 -vf "fade=in:5:8" output.mp4
After this you will have videos in order: the video for image 1 and its fade effect, then for image 2 and its fade effect, and so on. Now combine those videos in that order to get the whole video.
For combining the videos you need:
$command = "cat pass.mpg slideshow/frame.mpg > final.mpg";
This joins the videos using cat: convert the clips to mpg, concatenate them, and then re-encode the result to mp4 or avi to view it properly. The intermediate mpg files may not play correctly on their own; do not worry about that, as the final mp4 will work fine.
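With a recent ffmpeg you can also give each clip an explicit fade-in and fade-out, which avoids duplicating images. A rough sketch (untested; durations, frame rate and file names are only examples) for one 3-second clip:
ffmpeg -loop 1 -framerate 20 -i images/1.jpg -t 3 -vf "scale=320:240,fade=t=in:st=0:d=1,fade=t=out:st=2:d=1" -pix_fmt yuv420p -y clip1.mp4
# repeat for the other images, list the clips in files.txt (lines like: file 'clip1.mp4'),
# then join them; the concat demuxer is an alternative to the cat approach above:
ffmpeg -f concat -i files.txt -c copy final.mp4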
You can make a slideshow with crossfading between the pictures, by using the framerate filter. In the following example 0.25 is the framerate used for reading in the pictures, in this case 4 seconds for each picture. The parameter fps sets the output framerate. The parameters interp_start and interp_end can be used for changing the fading effect: interp_start=128:interp_end=128 means no fading at all. interp_start=0:interp_end=255 means continuous fading. When one picture has faded out and the next picture has fully faded in, the third picture will immediately begin to fade in. There is no pause for showing the second picture. interp_start=64:interp_end=191 means half of the time is pause for showing the pictures and the other half is fading. Unfortunately it won't be a full fading from 0 to 100%, but only from 25% to 75%. That's not exactly what you might want, but better than no fading at all.
ffmpeg -framerate 0.25 -i IMG_%3d.jpg -vf "framerate=fps=30:interp_start=64:interp_end=192:scene=100" test.mp4
You can use gifblender to create the blended, intermediary frames from your images and then convert those to a movie with ffmpeg.
