Here are the images I have in my folder:
img001.png
img002.png
They are stored in c:/frames.
Because my frames were not shown correctly, I used the fps filter as shown below (documentation from https://trac.ffmpeg.org/wiki/Slideshow):
ffmpeg -r 1/5 -i img%03d.png -c:v libx264 -vf fps=25 -pix_fmt yuv420p out.mp4
I tried playing the result in VLC media player, but it still doesn't work: it only shows one image.
The last frame of an image sequence will only be shown for an instant. Use the tpad filter to clone it once, and then apply other filters like fps.
ffmpeg -framerate 1/5 -i img%03d.png -vf "tpad=stop=1:stop_mode=clone,fps=25" -pix_fmt yuv420p -c:v libx264 out.mp4
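To verify the fix, you can check that both images now get their full 5 seconds (roughly a 10-second output for the two-image sequence) by reading the duration with ffprobe, assuming it's installed alongside ffmpeg:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 out.mp4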
Related
I have a video with 30 FPS, a duration of 10 seconds, and 300 frames. How can I convert it to 25 FPS without dropping frames?
I suppose -r and fps=fps=25 are some kind of resampling method (dropping or duplicating frames), or they're not working.
My commands look like this:
ffmpeg -i input.flv -vf "scale=800:450, fps=25" output1.flv
or
ffmpeg -i input.flv -filter:v fps=fps=25 -c:v libx264 -c:a copy -pix_fmt yuv420p -profile:v high -f mp4 -vf scale=800:450 output2.mp4
The result is that output1.flv dropped frames, and output2.mp4 didn't work.
If you're re-encoding the video stream, then
ffmpeg -r 25 -i input.flv ...
If there's audio, you'll have to adjust its tempo as well by adding
-af atempo=0.8333
where 0.8333 is 25/30.
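Putting the pieces together, a sketch of one full command (the scale filter and the output name are assumptions carried over from the question's first attempt): retiming 300 frames from 30 to 25 FPS stretches the video to 12 seconds, and atempo slows the audio to match.
ffmpeg -r 25 -i input.flv -vf "scale=800:450" -af atempo=0.8333 -c:v libx264 -c:a aac -pix_fmt yuv420p output1.mp4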
I want to convert JPG images to an MP4 video without resizing the images (keep the original image size and a well-formatted video).
I have tried lots of solutions with ffmpeg and ImageMagick (links given below), but both crop the images after converting to video, and I want a video from the images at their original size.
A solution with ffmpeg or ImageMagick will be appreciated. :)
slow ffmpeg's images per second when creating video from images
image to video ffmpegf
FFMPEG An Intermediate Guide/image sequence
How can I create a video file from a set of jpg images? [duplicate]
How to create a video from images with FFmpeg?
FFmpeg
Make video from still image sequence
Combining images with ImageMagick
Imagemagick.org
ffmpeg -framerate 1/5 -i na%03d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4
On a large image (1600x1200) it executes successfully but does not generate a smooth video.
On a small image (300x168) it shows an error. I also tried this command on the small image:
ffmpeg -framerate 1/5 -i abc%03d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
This works for me; I use it in a loop:
ffmpeg -loop 1 -i na002.jpg -c:a copy -c:v libx264 -strict 1 -shortest -vf "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" test.mp4
ffmpeg -y -i video.mp4 -vf "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" out1.mp4
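A minimal bash sketch of that loop for the image case, assuming files named na001.jpg, na002.jpg, ... and adding -t 5 to cap each clip at 5 seconds (an assumption; with no audio stream, -shortest alone would not stop the looped image):
for f in na*.jpg; do
  ffmpeg -y -loop 1 -i "$f" -t 5 -c:v libx264 -vf "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" "${f%.jpg}.mp4"
done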
ffmpeg -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp0.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp1.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp2.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp3.mp4" -filter_complex "[0]setdar=16/9[a];[1]setdar=16/9[b];[2]setdar=16/9[c];[3]setdar=16/9[d];[a][0:a][b][1:a][c][2:a][d][3:a] concat=n=4:v=1:a=1[v][a]" -map "[outv]" -map "[outa]"
I'm creating a video that:
uses a still image as a source
has a text overlay
fades in and out
has a silent stereo audio track.
So far, I have this, and it (almost) works correctly:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
The only problem is that the fade-out doesn't seem to go to black, even though this is a 150-frame video and I believe I am following the ffmpeg documentation correctly.
The resulting video is here:
http://video.blivenyc.com/vid-from-image/turtle11.mp4
Any thoughts?
Well, I'm not sure why, but this works, even though it appears to be equivalent:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=t=in:st=0:d=1,fade=t=out:st=4:d=1 -acodec aac turtle12.mp4
Basically, frame-based syntax:
fade=in:0:60,fade=out:90:60
gets substituted with the time-based:
fade=t=in:st=0:d=1,fade=t=out:st=4:d=1
And somehow it works. Not sure why this is.
The video stream on which the fade filter operates is not 150 frames long. Input and output framerates are different here. The use of -r to set output rate happens after all filtering is done. At that stage, ffmpeg will drop or duplicate frames to obtain the output rate.
The input rate for an image or image sequence is 25 fps unless expressly set otherwise. In your command, since there is no override, it's 25. So a fade-out of 60 frames starting at frame 90 would end at frame 150, but the stream is only 125 frames long (5 seconds x 25), so the fade is cut off before reaching black. ffmpeg then duplicates 5 frames in each second to bring the rate up to 30.
To get the desired result, use
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -framerate 30 -i turtle-2.jpg -c:v libx264 -t 5 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
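With -framerate 30 on the image input, the filtered stream is a full 150 frames, so the 60-frame fade-out starting at frame 90 ends exactly on the last frame. As a sanity check (assuming ffprobe is available), the frame count can be confirmed with:
ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames -of default=noprint_wrappers=1:nokey=1 turtle11.mp4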
I'm trying to create an MP4 video from an MP3 and an image with ffmpeg. The video should be 640x360 with a black background, and the image should be resized to fit those dimensions and centred in the middle. The video's length must match the MP3's length.
It's basically video creation for YouTube from a song and an artwork.
For now, I was able to achieve this in 3 steps:
resize the image:
-i %image% -vf scale='if(gt(a,4/3),640,-1)':'if(gt(a,4/3),-1,360)' %resized_image%
create a music video with a black background:
-f lavfi -i color=s=640x360 -i %audio_file% -c:v libx264 -s:v 640x360 -c:a aac -strict experimental -b:a 320k -shortest -pix_fmt yuv420p %video%
put the resized image centred in the video:
-i %video% -i %resized_image% -filter_complex "overlay=(W-w)/2:(H-h)/2" -codec:a copy %final_video%
Is it possible to achieve all this with one ffmpeg command?
A single command would be:
ffmpeg -loop 1 -i image -i audio
-vf scale='if(gt(a,4/3),640,-1)':'if(gt(a,4/3),-1,360)',pad=640:360:(ow-iw)/2:(oh-ih)/2,format=yuv420p
-c:v libx264 -c:a aac -b:a 320k -strict -2 -shortest final.mp4
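For example, with hypothetical filenames artwork.jpg and song.mp3 substituted in (and -strict -2 dropped, since the native AAC encoder is no longer experimental in recent FFmpeg releases):
ffmpeg -loop 1 -i artwork.jpg -i song.mp3 -vf "scale='if(gt(a,4/3),640,-1)':'if(gt(a,4/3),-1,360)',pad=640:360:(ow-iw)/2:(oh-ih)/2,format=yuv420p" -c:v libx264 -c:a aac -b:a 320k -shortest final.mp4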
Hi, I am new to FFmpeg.
I have made a video from a slideshow of sequential images (img001.jpg, img002.jpg, img003.jpg, ...), using the following command on Ubuntu 14.04:
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p -vf scale=320:240 out.mp4
But now I want to add animation like fade-in and fade-out between the sequential images. Can anybody help me generate such a video? I have searched a lot of things but could not get them to work.
The best way to do this is to create an intermediate MPEG for each image and then concatenate them all into a video. For example, say you have 5 images; you would run this for each one of them to create intermediate MPEGs with a fade-in at the beginning and a fade-out at the end.
ffmpeg -y -loop 1 -i image -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" -c:v mpeg2video -t 5 -q:v 1 image-1.mpeg
where -t is the duration, or time, of each image. Once you have all of these MPEGs, you use ffmpeg's concat filter, shown after the loop sketch below, to combine them all into an MP4.
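Scripting the per-image step (a sketch, assuming source images named image-1.jpg through image-5.jpg):
for i in 1 2 3 4 5; do
  ffmpeg -y -loop 1 -i "image-$i.jpg" -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" -c:v mpeg2video -t 5 -q:v 1 "image-$i.mpeg"
done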
ffmpeg -y -i image-1.mpeg -i image-2.mpeg -i image-3.mpeg -i image-4.mpeg -i image-5.mpeg -filter_complex '[0:v][1:v][2:v][3:v][4:v] concat=n=5:v=1 [v]' -map '[v]' -c:v libx264 -s 1280x720 -aspect 16:9 -q:v 1 -pix_fmt yuv420p output.mp4
This gives you the desired video and is the simplest and highest quality solution with ffmpeg. Let me know if you have any questions about how the above command works.