How does FFMPEG change fps without dropping frames?

I have a video that is 30 FPS, 10 s long, and has 300 frames. How can I convert it to 25 FPS without dropping frames?
I suppose -r or fps=fps=25 is some kind of resampling method, or it isn't doing what I expect.
My commands look like:
ffmpeg -i input.flv -vf "scale=800:450, fps=25" output1.flv
or
ffmpeg -i input.flv -filter:v fps=fps=25 -c:v libx264 -c:a copy -pix_fmt yuv420p -profile:v high -f mp4 -vf scale=800:450 output2.mp4
The result is that output1.flv dropped frames, and output2.mp4 didn't work.

If you're re-encoding the video stream, then
ffmpeg -r 25 -i input.flv ...
If there's audio, you'll have to adjust its tempo as well by adding
-af atempo=0.833
where 0.833 ≈ 25/30.
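Putting the two together, a minimal sketch of a complete command (input.flv and output.mp4 are placeholder names, and the scale filter from the question is kept):
ffmpeg -r 25 -i input.flv -vf scale=800:450 -af atempo=0.833 -c:v libx264 -c:a aac output.mp4
Here -r 25 placed before -i retimes the 30 FPS input as 25 FPS, so all 300 frames are kept and the video simply plays a little longer, while atempo slows the audio by the same factor to keep it in sync (audio must be re-encoded for atempo to apply, hence -c:a aac).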


ffmpeg only shows one Image?

Here are the images I have in my folder:
img001.png
img002.png
They are stored in c:/frames.
Because my frames were not shown correctly, I used the fps filter as shown below (documentation from https://trac.ffmpeg.org/wiki/Slideshow):
ffmpeg -r 1/5 -i img%03d.png -c:v libx264 -vf fps=25 -pix_fmt yuv420p out.mp4
I tried playing it in VLC media player, but it still doesn't work: it only shows one image.
The last frame of an image sequence is only shown for an instant. Use the tpad filter to clone it once, and then apply other filters such as fps.
ffmpeg -framerate 1/5 -i img%03d.png -vf "tpad=stop=1:stop_mode=clone,fps=25" -pix_fmt yuv420p -c:v libx264 out.mp4
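To confirm the fix, a small sketch using ffprobe (assuming it is installed alongside ffmpeg) to print the output duration:
ffprobe -v error -show_entries format=duration -of default=nokey=1:noprint_wrappers=1 out.mp4
With the two images held for 5 seconds each, the reported duration should be close to 10 seconds.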

Make video from images with ffmpeg/ImageMagick

I want to convert JPG images to an MP4 video without resizing the images (keep the original image size and produce a well-formatted video).
I have tried lots of ffmpeg and ImageMagick solutions (links given below), but both crop the images after converting to video, and I want a video built from the images at their original size.
A solution with either ffmpeg or ImageMagick will be appreciated. :)
slow ffmpeg's images per second when creating video from images
image to video ffmpegf
FFMPEG An Intermediate Guide/image sequence
How can I create a video file from a set of jpg images? [duplicate]
How to create a video from images with FFmpeg?
FFmpeg
Make video from still image sequence
Combining images with ImageMagick
Imagemagick.org
ffmpeg -framerate 1/5 -i na%03d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4
On a large image (1600x1200) it executes successfully but does not generate a smooth video.
On a small image (300x168) it shows an error. I also tried this command on the small image:
ffmpeg -framerate 1/5 -i abc%03d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
This worked for me; I use it in a loop:
ffmpeg -loop 1 -i na002.jpg -c:a copy -c:v libx264 -strict 1 -shortest -vf "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" test.mp4
ffmpeg -y -i video.mp4 -vf "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" out1.media
ffmpeg -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp0.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp1.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp2.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp3.mp4" -filter_complex "[0]setdar=16/9[a];[1]setdar=16/9[b];[2]setdar=16/9[c];[3]setdar=16/9[d];[a][0:a][b][1:a][c][2:a][d][3:a] concat=n=4:v=1:a=1[v][a]" -map "[outv]" -map "[outa]"

Fade out in ffmpeg when creating a video from a still image is wonky?

I'm creating a video that:
uses a still image as a source
has a text overlay
fades in and out
has a silent stereo audio track.
So far, I have this, and it (almost) works correctly:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
The only problem is that the fade out doesn't seem to be going to black, even though this is a 150-frame video and I believe I am following the ffmpeg documentation correctly.
The resulting video is here:
http://video.blivenyc.com/vid-from-image/turtle11.mp4
Any thoughts?
Well, I'm not sure why, but this works, even though it appears to be equivalent:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=t=in:st=0:d=1,fade=t=out:st=4:d=1 -acodec aac turtle12.mp4
Basically, frame-based syntax:
fade=in:0:60,fade=out:90:60
gets substituted with the time-based:
fade=t=in:st=0:d=1,fade=t=out:st=4:d=1
And somehow it works. Not sure why this is.
The video stream on which the fade filter operates is not 150 frames long. Input and output framerates are different here. The use of -r to set output rate happens after all filtering is done. At that stage, ffmpeg will drop or duplicate frames to obtain the output rate.
The input rate for an image or image sequence is 25 unless expressly set otherwise. In your command there is no override, so it's 25. That means the stream is only 125 frames long (5 seconds x 25), so a 60-frame fade-out starting at frame 90 would need to run until frame 150 and gets cut off before reaching black. ffmpeg then duplicates 5 frames per input second to bring the output up to 30 fps.
To get the desired result, use
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -framerate 30 -i turtle-2.jpg -c:v libx264 -t 5 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4

ffmpeg rtmp stream taking 100% CPU

I am creating a small script to stream an image to an RTMP server, but the FFmpeg command is taking 100% CPU. Please have a look at my code.
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -loop 1 -i "Digital-Wallet-.jpg" -t 00:30:00 -r 1 -c:v libx264 -c:a aac -preset:v ultrafast -pix_fmt yuv420p -f flv "rtmp://rtmpserver"
Encoding is CPU intensive. Remove -r 1 and add -framerate 1, -re, and -shortest.
ffmpeg -f lavfi -i anullsrc -loop 1 -framerate 1 -re -i "Digital-Wallet-.jpg" -t 00:30:00 -c:v libx264 -c:a aac -preset:v ultrafast -pix_fmt yuv420p -shortest -f flv "rtmp://rtmpserver"
The default image demuxer frame rate is 25, so your command was unnecessarily converting 25 frames per second to 1 frame per second, which is inefficient. The above changes fix that.
-re will slow down the reading of the input to the native frame rate of the input. It is useful for real-time output and live streaming. Otherwise ffmpeg will attempt to encode as fast as possible.
I added -shortest to end the output when the shortest stream ends (the image) because anullsrc was set to encode indefinitely.
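For a quick local test without the RTMP server, a sketch that writes the same stream to a file instead (test.flv is a placeholder name); -re is dropped because real-time pacing only matters when streaming, so this encodes as fast as possible:
ffmpeg -f lavfi -i anullsrc -loop 1 -framerate 1 -i "Digital-Wallet-.jpg" -t 00:30:00 -c:v libx264 -c:a aac -preset:v ultrafast -pix_fmt yuv420p -shortest -f flv test.flv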

encoding jpeg as h264 video

I am using the following command to encode an AVI to an H264 video for use in an HTML5 video tag:
ffmpeg -y -i "test.avi" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this works just fine. But I also want to create a placeholder video (long story) from a single still image, so I do this:
ffmpeg -y -i "test.jpg" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this doesn't work. What gives?
EDIT: After trying LordNeckbeard's answer, here is my full output: http://pastebin.com/axhKpkLx
Example for a 10 second output:
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4
Same thing but with audio. The output duration will match the input audio duration:
ffmpeg -loop 1 -framerate 24 -i input.jpg -i audio.mp3 -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -c:a aac -shortest -movflags +faststart output.mp4
-loop 1 loops the image input.
-framerate sets the frame rate of the image input. Default is 25. Some players have issues with low frame rates so a value over 6 or so is recommended.
-i input.jpg the input.
-c:v libx264 the H.264 video encoder.
-preset x264 encoding preset. Use the slowest one you can.
-tune x264 tuning for various adjustments to fit specific situations.
-crf for quality. A lower value results in higher quality. Use the highest value that still provides an acceptable quality to you. Default is 23.
-vf format=yuv420p outputs the pixel format as yuv420p. This ensures the output uses a widely acceptable chroma sub-sampling scheme. Recommended for libx264 when encoding from images.
-c:a aac the AAC audio encoder. If your input is already AAC or M4A then use -c:a copy instead to stream copy rather than re-encode.
-t 10 (in the first example) makes a 10 second output. Needed because the image is looping indefinitely.
-shortest (in the second example) makes the output the same duration as the shortest input. In this case it is the audio since the image is looping indefinitely.
-movflags +faststart relocates the moov atom to the beginning of the file after encoding is finished. Allows playback to begin faster in progressive download playing; otherwise the whole video must be downloaded before playing.
-profile:v main (optional) some devices can't handle High profile; see the variant sketched after this list.
See FFmpeg Wiki: H.264 for more info.
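As a reference sketch, the first example with the optional Main profile added (input.jpg and output.mp4 are placeholders as above):
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -profile:v main -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4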
