I want to convert images to a video, which works fine with ffmpeg, but I need to add a Ken Burns effect to every image.
The code I was able to get only applies the effect to the first and last images.
ffmpeg -y -i %d.jpg -t 25 -pix_fmt yuv420p -vf zoompan=z='zoom+0.001':s=1280x800,scale=hd1080 -c:v libx264 -preset fast -crf 22 -t 300 -threads 2 zoomout.mp4
SOLVED
The problem was with the images themselves, so I used a new set of images.
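For anyone who lands here with the same symptom: one common pattern is to give zoompan an explicit per-image duration and cap the zoom instead of relying on the defaults. A sketch, not verified against the original images (d=125 at fps=25 holds each image for 5 seconds; the duplicate -t from the original command is dropped):
ffmpeg -y -framerate 1 -i %d.jpg -vf "zoompan=z='min(zoom+0.001,1.5)':d=125:s=1280x800:fps=25,scale=hd1080" -c:v libx264 -preset fast -crf 22 -pix_fmt yuv420p zoomout.mp4
Note that zoompan's zoom value carries over between input images, so for a clean per-image reset it is more reliable to run zoompan on each image separately and concatenate the results, as in the last answer on this page.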
I'm trying to read and write videos using ffmpeg, and I ran into an interesting phenomenon: the first frame is not the same across videos I create from the same frames, differing only in length.
The commands I'm running to reproduce the problem:
ffmpeg -i <some_video>.mp4 -frames:v 20 -q:v 3 resource_images/00%04d.png
ffmpeg -hide_banner -loglevel error -framerate 30 -y -i resource_images/00%04d.png -c:v libx264 -pix_fmt yuv420p -frames:v 20 long_video.mp4 -y
ffmpeg -hide_banner -loglevel error -framerate 30 -y -i resource_images/00%04d.png -c:v libx264 -pix_fmt yuv420p -frames:v 10 short_video.mp4 -y
ffmpeg -i long_video.mp4 -vf "select=eq(n,0)" -q:v 3 long_frame0.png -y
ffmpeg -i short_video.mp4 -vf "select=eq(n,0)" -q:v 3 short_frame0.png -y
The images long_frame0.png and short_frame0.png are different (I loaded them using Python and compared them, there are many differences).
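Incidentally, the comparison can also be done with ffmpeg itself. A minimal sketch using the psnr filter on the file names from the commands above (identical images would report an average PSNR of inf):
ffmpeg -i long_frame0.png -i short_frame0.png -lavfi psnr -f null -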
I find it very peculiar: these are very short videos, the frames in question are their first frames, and those frames are keyframes (I checked using ffprobe).
What is the cause of this issue and how do I overcome it to create a consistent first frame for a video, regardless of the video length?
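Not a definitive diagnosis, but one variable worth eliminating is x264's rate control, which adapts to the frames in its lookahead and so can allocate bits to frame 0 differently depending on how many frames follow. Encoding both videos with a constant quantizer (the -qp option of the libx264 wrapper) takes rate control out of the equation; a sketch, reusing the commands above:
ffmpeg -hide_banner -loglevel error -framerate 30 -i resource_images/00%04d.png -c:v libx264 -qp 20 -pix_fmt yuv420p -frames:v 20 long_video.mp4 -y
ffmpeg -hide_banner -loglevel error -framerate 30 -i resource_images/00%04d.png -c:v libx264 -qp 20 -pix_fmt yuv420p -frames:v 10 short_video.mp4 -y
If the first frames match under -qp, length-dependent rate control was the cause.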
I am trying to create a video from still images using ffmpeg. The command I use to do this is
ffmpeg -y -r 3 -i input_images%03d.png -c:v libx264 -vf fps=24 -pix_fmt yuv420p output.mp4
However, I would like to overlay this video on a still image, without creating a video of the still image first. So, for example, if I have the following images
[still, frame1, frame2, frame3]
I'd like a command to create a video of frame1, frame2, and frame3 overlaid on still, all in one command. Is there a way to do this?
I've looked at several answers to related problems (e.g., Add image overlay on video FFmpeg) but they don't answer my question, exactly.
Use
ffmpeg -framerate 24 -i still.png -framerate 3 -i input_images%03d.png -c:v libx264 -filter_complex "overlay=x='(W-w)/2':y='(H-h)/2'" -pix_fmt yuv420p -y output.mp4
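If, with your build, the still input ends after a single frame and the output stops early, a variant that loops the still and ends with the shorter input should behave better; -loop 1 and overlay's shortest option are both standard ffmpeg features, but the exact command is a sketch:
ffmpeg -loop 1 -framerate 24 -i still.png -framerate 3 -i input_images%03d.png -filter_complex "overlay=x='(W-w)/2':y='(H-h)/2':shortest=1" -c:v libx264 -pix_fmt yuv420p -y output.mp4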
I've been able to add a random color watermark with this code:
ffmpeg -y -r 100 -i "N%3d.tif" -c:v libx264 -vf "drawbox=y=0:color=random#1:width=8:height=ih:t=fill,scale=1920:1080" -crf 30 -g 10 -profile:v high -level 4.1 -pix_fmt yuv420p test.mp4
And I know it's doable with a script that processes each input frame individually, but I would really like to find a way for FFmpeg to add the watermark during the actual video encoding. It needs to be a unique color per frame. Any ideas on how to accomplish this?
Thanks!
The drawbox expression is only evaluated once. But the hue filter can be used to vary the color.
In the command below, a small portion from the left side of the frame is cropped off, a color is drawn once, and then its hue varied. This is then overlaid on the full frame.
ffmpeg -y -framerate 100 -i "N%3d.tif"
-filter_complex "[0]split=2[wm][vid];[wm]crop=8:ih,drawbox=color=random#1:t=fill,
hue=n*random(1234)[wm];[vid][wm]overlay,scale=1920:1080"
-c:v libx264 -crf 30 -g 10 -profile:v high -level 4.1 -pix_fmt yuv420p test.mp4
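A follow-up thought: hue=n*random(1234) gives an unpredictable hue step per run. If a repeatable color sequence is preferred, the hue filter's h expression can rotate deterministically per frame; a sketch, where the fixed base color and the 7-degree step per frame are arbitrary choices:
ffmpeg -y -framerate 100 -i "N%3d.tif" -filter_complex "[0]split=2[wm][vid];[wm]crop=8:ih,drawbox=color=red:t=fill,hue=h='mod(n*7,360)'[wm];[vid][wm]overlay,scale=1920:1080" -c:v libx264 -crf 30 -g 10 -profile:v high -level 4.1 -pix_fmt yuv420p test.mp4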
I'm creating a video that:
uses a still image as a source
has a text overlay
fades in and out
has a silent stereo audio track.
So far, I have this, and it (almost) works correctly:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
The only problem is that the fade-out doesn't seem to go to black, even though this is a 150-frame video and I believe I am following the ffmpeg documentation correctly.
The resulting video is here:
http://video.blivenyc.com/vid-from-image/turtle11.mp4
Any thoughts?
Well, I'm not sure why, but this works, even though it appears to be equivalent:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=t=in:st=0:d=1,fade=t=out:st=4:d=1 -acodec aac turtle12.mp4
Basically, frame-based syntax:
fade=in:0:60,fade=out:90:60
gets substituted with time-based syntax:
fade=t=in:st=0:d=1,fade=t=out:st=4:d=1
And somehow it works. Not sure why this is.
The video stream on which the fade filter operates is not 150 frames long. Input and output framerates are different here. The use of -r to set output rate happens after all filtering is done. At that stage, ffmpeg will drop or duplicate frames to obtain the output rate.
The input rate for an image or image sequence is 25 unless expressly set otherwise. Since there is no override in your command, it's 25, so the stream going into the filters is only 125 frames long (5 seconds × 25). A 60-frame fade-out starting at frame 90 would need 150 frames, so it is cut off at frame 125, well before reaching black. ffmpeg then duplicates 5 frames per input second to bring the output rate up to 30.
To get the desired result, use
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -framerate 30 -i turtle-2.jpg -c:v libx264 -t 5 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
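To check the frame-count and rate assumptions on any of these outputs, ffprobe can count decoded frames (standard ffprobe options):
ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames,r_frame_rate -of default=nw=1 turtle11.mp4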
Hi, I am new to FFmpeg.
I have made a video from a slideshow of sequential images (img001.jpg, img002.jpg, img003.jpg, ...) using the following command on Ubuntu 14.04:
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p -vf scale=320:240 out.mp4
But now I want to add animations like fade-in and fade-out between the sequential images. Can anybody help me with how to do this? I have searched a lot but couldn't figure it out.
The best way to do this is to create an intermediate MPEG for each image and then concatenate them all into a video. For example, say you have 5 images; you would run this for each image to create the intermediates, each with a fade-in at the beginning and a fade-out at the end.
ffmpeg -y -loop 1 -i image -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" -c:v mpeg2video -t 5 -q:v 1 image-1.mpeg
where -t is the duration of each image clip, with the fade-out timed to finish at that duration. Once you have all of these MPEGs, you use ffmpeg's concat filter to combine them all into an mp4.
ffmpeg -y -i image-1.mpeg -i image-2.mpeg -i image-3.mpeg -i image-4.mpeg -i image-5.mpeg -filter_complex '[0:v][1:v][2:v][3:v][4:v] concat=n=5:v=1 [v]' -map '[v]' -c:v libx264 -s 1280x720 -aspect 16:9 -q:v 1 -pix_fmt yuv420p output.mp4
This gives you the desired video and is the simplest and highest quality solution with ffmpeg. Let me know if you have any questions about how the above command works.
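If the number of images grows, listing every input by hand gets unwieldy. The same concatenation can be done with ffmpeg's concat demuxer and a list file; a sketch, where list.txt is a file name chosen for the example:
printf "file 'image-%d.mpeg'\n" 1 2 3 4 5 > list.txt
ffmpeg -y -f concat -safe 0 -i list.txt -c:v libx264 -s 1280x720 -aspect 16:9 -pix_fmt yuv420p output.mp4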