I have two images and I want to create a simple fading transition between them.
I also want the final output to be a sequence of images rather than a video. So if the fading transition were 10 frames long, I'd want the output to be a sequence of 10 images.
How can I achieve this with ffmpeg?
See the blend video filter:
ffmpeg -loop 1 -i input0.png -loop 1 -i input1.png -filter_complex "[1:v][0:v]blend=all_expr='A*(if(gte(T,3),1,T/3))+B*(1-(if(gte(T,3),1,T/3)))'" -t 4 frames_%04d.png
This example will make a 3-second crossfade of input1.png over input0.png: in the blend expression, A is the first listed input (input1.png) and B the second; A's weight ramps linearly from 0 at T=0 to 1 at T=3 and holds there for the final second.
To crossfade/dip-to-black multiple images see Create video with 5 images with fade-in/out effect in ffmpeg.
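If you need exactly 10 output frames, a minimal sketch (untested) is to run both looped inputs at 10 fps and cap the output at 10 frames, scaling the ramp so the fade completes on the last frame (T takes the values 0, 0.1, ..., 0.9):
ffmpeg -loop 1 -framerate 10 -i input0.png -loop 1 -framerate 10 -i input1.png -filter_complex "[1:v][0:v]blend=all_expr='A*min(T/0.9,1)+B*(1-min(T/0.9,1))'" -frames:v 10 frames_%04d.png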
To the best of my knowledge you cannot achieve this with ffmpeg alone. Take a look at the MLT framework if you want to do it in scripts, or at OpenShot if you want an interactive app.
Try this:
ffmpeg \
-loop 1 -t 3 -i input1.png \
-loop 1 -t 3 -i input2.png \
-loop 1 -t 3 -i input3.png \
-loop 1 -t 3 -i input4.png \
-loop 1 -t 3 -i input5.png \
-filter_complex \
"[1:v][0:v]blend=all_expr='A*(if(gte(T,3),1,T/3))+B*(1-(if(gte(T,3),1,T/3)))'[v0]; \
[2:v][1:v]blend=all_expr='A*(if(gte(T,3),1,T/3))+B*(1-(if(gte(T,3),1,T/3)))'[v1]; \
[3:v][2:v]blend=all_expr='A*(if(gte(T,3),1,T/3))+B*(1-(if(gte(T,3),1,T/3)))'[v2]; \
[4:v][3:v]blend=all_expr='A*(if(gte(T,3),1,T/3))+B*(1-(if(gte(T,3),1,T/3)))'[v3]; \
[v0][v1][v2][v3]concat=n=4:v=1:a=0[v]" \
-map "[v]" out.mp4
I haven't tried this with image output, but replacing out.mp4 with something like -t 12 frames_%04d.png at the end should write the result as an image sequence.
Given an xfade transition filter such as this:
ffmpeg -loop 1 -t 5 -i 1.png -loop 1 -t 5 -i 2.png -filter_complex "[0][1]xfade=transition=slideleft:duration=1:offset=4,format=yuv420p" output.mp4
Is it possible to alter the timing/easing of the xfade transition? For instance, the above slideleft transition seems to be linear in the output video. How could one achieve a non-linear easing such as a cubic ease in for the transition between the two clips?
xfade custom transition, slideleft with easing
ffmpeg \
-loop 1 -t 1.1 -i in1.png \
-loop 1 -t 1.1 -i in2.png \
-filter_complex "
[0]scale=-2:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v0];
[1]scale=-2:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v1];
[v0][v1]xfade=duration=1:offset=0.1:transition=custom:
expr='
if(lt(X+W*pow(1-P,3),W),
if(eq(PLANE,0),a0(X+W*pow(1-P,3),Y),0)
+if(eq(PLANE,1),a1(X+W*pow(1-P,3),Y),0)
+if(eq(PLANE,2),a2(X+W*pow(1-P,3),Y),0)
+if(eq(PLANE,3),a3(X+W*pow(1-P,3),Y),0)
,if(eq(PLANE,0),b0(X-W+W*pow(1-P,3),Y),0)
+if(eq(PLANE,1),b1(X-W+W*pow(1-P,3),Y),0)
+if(eq(PLANE,2),b2(X-W+W*pow(1-P,3),Y),0)
+if(eq(PLANE,3),b3(X-W+W*pow(1-P,3),Y),0)
)'
" -y /tmp/output.mp4
ffplay -loop 0 /tmp/output.mp4
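In xfade's custom expressions, P is the remaining transition progress, counting down from 1 to 0, so pow(1-P,3) is the cubic-eased fraction of the slide already completed. Only that term has to change to get a different curve: for example (untested), replacing every occurrence of W*pow(1-P,3) with W*(1-pow(P,3)) should turn the cubic ease-in into a cubic ease-out.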
I am trying to make a video with ffmpeg where I want to overlay images on a video.
I want to show each image for 5 seconds and have the process loop until the video ends.
I am using the following command, which works perfectly, but I want to modify it so the images loop.
ffmpeg -y -i long_process/2-scrolling.mp4 \
-i upload-images/040820221255452.png \
-i upload-images/040820221255453.png \
-filter_complex "[0:v][1:v]overlay=75:(H-h)/2:enable='between(t, 1, 5)'[v0]; \
[v0][2:v]overlay=75:(H-h)/2:enable='between(t, 5, 10)'" \
-c:a copy long_process/output.mp4
I am very new to ffmpeg and am looking for your help.
Thanks in advance
I got the answer:
ffmpeg -y -i long_process/2-scrolling.mp4 -framerate 1/3 -pattern_type glob -loop 1 -i 'tools/*.png' \
-filter_complex "[0]overlay=75:(H-h)/2:shortest=1" \
-r 60 -c:a copy long_process/output.mp4
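Note that -framerate 1/3 advances to the next image every 3 seconds; for the 5 seconds per image asked about above, -framerate 1/5 would presumably be the value to use:
ffmpeg -y -i long_process/2-scrolling.mp4 -framerate 1/5 -pattern_type glob -loop 1 -i 'tools/*.png' \
-filter_complex "[0]overlay=75:(H-h)/2:shortest=1" \
-r 60 -c:a copy long_process/output.mp4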
I'm trying to concatenate a 15 second clip of a video (MOVIE.mp4) with 5 seconds (no audio) of an image (IMAGE.jpg) using FFmpeg.
Something seems to be wrong with my filtergraph, although I'm unable to determine what. The command I've put together is the following:
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v]scale=480:640[1_v];anullsrc[1_a];[1:v][1:a][1_v][1_a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Unfortunately, this seems to be creating some strange results:
On my personal computer (FFmpeg 4.2.1) it correctly concatenates the movie with the static image; however, the static image lasts for an unbounded length of time. (After pressing Ctrl-C, the movie is still viewable but extremely long, e.g. 35 minutes, depending on when I interrupt the process.)
On a remote machine where I need to do the ultimate video processing (FFmpeg 2.8.15-0ubuntu0.16.04.1), the command does not terminate, and instead, I get cascading errors of the following form:
Past duration 0.611458 too large
...
[output stream 0:0 @ 0x21135a0] 100 buffers queued in output stream 0:0, something may be wrong.
...
[output stream 0:0 @ 0x21135a0] 100000 buffers queued in output stream 0:0, something may be wrong.
I haven't been able to find much documentation that elucidates what these errors mean, so I don't know what's going wrong.
As Gyan pointed out, anullsrc generates silence indefinitely, so the concatenation never reaches the end of its second segment; you only have to add atrim to cap your audio at 5 seconds:
anullsrc,atrim=0:5[silent-audio]
Instead of scale you could use scale2ref and setsar to automatically make your image the same size and aspect ratio as the video.
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v][1:v]scale2ref[img][v];[img]setsar=1[img]; \
anullsrc,atrim=0:5[silent-audio]; \
[v][1:a][img][silent-audio]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Alternatively you could use anullsrc as a 3rd input:
ffmpeg \
-t 15 -i MOVIE.mp4 \
-loop 1 -t 5 -i IMAGE.jpg \
-f lavfi -t 5 -i anullsrc \
-filter_complex "[1:v][0:v]scale2ref[img][v];\
[img]setsar=1[img];[v][0:a][img][2:a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
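Because the anullsrc input is already bounded by its own -t 5 here, no atrim is needed in this variant.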
I am using ffmpeg to create a video from many images.
For example, I'm using the command below to create one video.
Sample command:
ffmpeg \
-loop 1 -t 5 -i img001.jpeg \
-loop 1 -t 5 -i img002.jpeg \
-loop 1 -t 5 -i img003.jpeg \
-loop 1 -t 5 -i img004.jpeg \
-loop 1 -t 5 -i img005.jpeg \
-filter_complex \
"[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=out:st=4:d=1[v0]; \
[1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v1]; \
[2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v2]; \
[3:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v3]; \
[4:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v4]; \
[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" out.mp4
But I'm having a problem: with img001.jpeg, for example, I only know how to apply a single animation to the image when creating the video.
How do I apply multiple animations to one image? For example: rotate, move from top to center, then from center to right.
Thank you very much.
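One possible direction (a rough, untested sketch rather than a known-good answer; the 1280x720 canvas, the 320-pixel scale, and the half-way split at 2.5 seconds are all assumptions) is to composite the image over a background with overlay, drive overlay's x/y expressions with t so the movement formula switches halfway through, and chain rotate for the spin:
ffmpeg -loop 1 -t 5 -i img001.jpeg -filter_complex \
"color=c=black:s=1280x720:d=5[bg]; \
[0:v]scale=320:-1,format=rgba,rotate=a='2*PI*t/5':c=none:ow='hypot(iw,ih)':oh=ow[anim]; \
[bg][anim]overlay=x='if(lt(t,2.5),(W-w)/2,(W-w)/2+((t-2.5)/2.5)*((W-w)/2))':y='if(lt(t,2.5),(H-h)/2*t/2.5,(H-h)/2)',format=yuv420p" \
-t 5 out.mp4
Here the image descends from the top to the center over the first 2.5 seconds while rotating, then slides from the center to the right edge over the last 2.5 seconds; each additional movement is just another branch in the x/y expressions.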
I am trying to add multiple items to an ffmpeg command and am getting stuck.
So far the command automatically updates one image, which I'm using as a video; I also want to add a logo and two lines of text.
I have been successful until the last item, which is the logo overlay.
This is the relevant part of code:
ffmpeg \
-f image2 -loop 1 \
-y \
-i "/var/www/html/image_rotate.png" \
-re \
-i audio.mp3 \
-vf "movie=/var/www/html/overlay_logo.png [watermark]; [in][watermark] overlay=0:0 [out], drawtext=fontsize=10:fontfile=/var/www/html/OpenSans-Regular.ttf:textfile=/var/www/html/text1.txt:box=1:boxcolor=#000000:fontcolor=#FFFFFF:x=0:y=(h-text_h-20):reload=1, drawtext=fontsize=10:fontfile=/var/www/html/OpenSans-Regular.ttf:textfile=/var/www/htmltext2.txt:box=1:boxcolor=#000000:fontcolor=#FFFFFF:x=0:y=(h-text_h-30)" \
This gives me the following error:
Simple filtergraph ... was expected to have exactly 1 input and 1 output. However, it had >1 input(s) and >1 output(s). Please adjust, or use a complex filtergraph (-filter_complex) instead.
If I remove the last part I added (the overlay logo) I do not get the error.
If I add multiple -vf it only processes one (the text OR the logo).
I'm not sure how to achieve this.
When you need to work with multiple input streams while filtering, the recommended method is to use -filter_complex.
ffmpeg \
-loop 1 \
-i "/var/www/html/image_rotate.png" \
-i "/var/www/html/overlay_logo.png" \
-i audio.mp3 \
-filter_complex "[0][1]overlay=0:0,drawtext=fontsize=10:fontfile=/var/www/html/OpenSans-Regular.ttf:textfile=/var/www/html/text1.txt:box=1:boxcolor=#000000:fontcolor=#FFFFFF:x=0:y=(h-text_h-20):reload=1, drawtext=fontsize=10:fontfile=/var/www/html/OpenSans-Regular.ttf:textfile=/var/www/htmltext2.txt:box=1:boxcolor=#000000:fontcolor=#FFFFFF:x=0:y=(h-text_h-30)" \
-y \
-shortest \
output.mp4
The logo is now fed as a regular input.
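Because overlay consumes two input streams, the graph necessarily has more than one input, which is exactly what a simple -vf filtergraph cannot express; -filter_complex lifts that restriction.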