I'm using the following command for video conversion:
ffmpeg -i input.mkv -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart -s hd480 output.mp4
I want to keep the original dimensions for smaller videos; for example, keep 640x360 if the video height is less than 480.
Does ffmpeg have such an option?
Use
ffmpeg -i input.mkv \
  -vf "scale=w='if(gt(ih,480),2*trunc(oh*a/2),iw)':h='if(gt(ih,480),480,ih)'" \
  -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart output.mp4
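If preserving the exact source width when no scaling occurs is not critical, the same height cap can be written more compactly (a sketch of mine, not part of the original answer):

ffmpeg -i input.mkv -vf "scale=-2:'min(480,ih)'" -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart output.mp4

min(480,ih) caps the height at 480 without upscaling smaller inputs, and -2 derives a proportional width rounded to an even number.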
Related
I am scaling the output of a complex filter to different standard resolutions using the -s flag, but the result is that the video does not fit completely on my output screen. How can I scale the different outputs dynamically to match the screen? Here is my command:
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr]overlay=0:0,split=2[a][b]" -async 1 -vsync -1 -map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low -map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid
Your output has to match the aspect ratio of the screen resolution. Only practical solution is to provide a best guess to match the most common screen size of your viewers. The others will have letterbox/pillarbox to fit the screen.
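If you do need each output to land exactly on a fixed frame size, a minimal sketch (640x360 is an assumed target here) is to scale to fit and pad the remainder yourself instead of relying on -s:

ffmpeg -i input.mp4 -vf "scale=640:360:force_original_aspect_ratio=decrease,pad=640:360:(ow-iw)/2:(oh-ih)/2" -c:v libx264 -crf 23 output.mp4

force_original_aspect_ratio=decrease shrinks the video to fit inside 640x360 without distortion, and pad centers it on a 640x360 canvas, producing the letterbox/pillarbox bars mentioned above.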
I did my research and found that some filter graph options do not adapt to changing resolutions.
https://lists.ffmpeg.org/pipermail/libav-user/2012-October/002920.html
Here is the command which I am using. Whenever my input video changes from portrait to landscape, the overlay vanishes. I would really appreciate any help here.
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr]overlay=0:0,split=2[a][b]" -async 1 -vsync -1 -map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low -map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid
FFmpeg reinitializes the filtergraph when input properties change. The image input is one frame long and has already been consumed.
Loop the image:
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -loop 1 -i $overlayUrl ...
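With the image looped it becomes an endless stream, so there is still an overlay frame to consume after the filtergraph is rebuilt. A sketch of the full command with the fix applied (the output legs are unchanged):

ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -loop 1 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr]overlay=0:0,split=2[a][b]" -async 1 -vsync -1 -map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low -map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid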
I am using 2 different FFmpeg commands to add audio to a video:
This one adds the new audio and replaces the video's existing audio:
ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -map 0:v -map 1:a -shortest -vcodec libx264 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"
It works fine.
The problem comes when I try to mix the new audio with the video's existing audio:
ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -shortest -vcodec libx264 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"
The video plays fine everywhere except on iOS. I've tried adding -profile:v main -level 3.1 and -profile:v baseline -level 3.1, but no luck either:
ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -shortest -vcodec libx264 -profile:v baseline -level 3.1 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"
What do I need to do to make the output video play on iOS?
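One thing worth checking (an assumption on my part, since no profile/level change helped): amerge of two stereo inputs produces a single 4-channel stream, and many iOS players refuse multichannel AAC. Mixing down to stereo, e.g. with amix, may help:

ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amix=inputs=2:duration=shortest[a]" -map 0:v -map "[a]" -ac 2 -vcodec libx264 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"

-ac 2 forces a stereo output regardless of what the filter emits.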
I used this to combine a JPG and MP3 into a video:
ffmpeg -loop 1 -i 1.jpg -i song.mp3 -strict -2 -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4
I'm trying to change the .jpg to a .gif, with the GIF looping to match the MP3's length.
I was able to do it with
ffmpeg -i song.mp3 -ignore_loop 0 -i 2.gif -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -strict -2 -c:v libx264 -threads 4 -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4
This worked for me: -ignore_loop 0 tells the gif demuxer to honor the GIF's own loop setting (most GIFs are set to loop forever), and -shortest ends the output when the MP3, the shorter stream, runs out. The scale expression rounds both dimensions down to even numbers, which yuv420p requires.
The specs for the video format are the following:
Aspect Ratio: 1:1
H.264 video compression, high profile, square pixels, fixed frame rate, progressive scan
.mp4 container with leading mov atom, no edit lists
Audio: stereo AAC audio compression, 128 kbps or higher
Reading through posts and the ffmpeg documentation, I came up with the following (yeah, I run it on a Windows PC):
ffmpeg.exe -r 30 -i input.webm -vf scale=iw*sar:ih -c:v libx264 -preset slow -profile:v high -c:a aac -strict experimental -ar 44100 -aspect 1:1 output.mp4
But when the video is played within the app that asks for this specification, it only displays broken black moving pixels, though you can hear the audio.
I don't really know what else to change in the command, and I have no idea about the "...with leading mov atom" specification.
Thanks.
EDIT:
I've tried @Mulvya's answer:
ffmpeg.exe -i input.webm -vf scale=iw*sar:ih,setsar=1 -c:v libx264 -preset slow -profile:v high -pix_fmt yuv420p -r 30 -c:a aac -strict experimental -ar 44100 -ac 2 -b:a 128k -movflags +faststart output.mp4
But the effect is the same once given to the app.
Use
ffmpeg.exe -i input.webm -vf scale=iw*sar:ih,setsar=1 -c:v libx264 -preset slow -profile:v high -pix_fmt yuv420p -r 30 -c:a aac -strict experimental -ar 44100 -ac 2 -b:a 128k -movflags +faststart output.mp4
Depending on how strict the app is, you may need to check the precise frame rate; use -r 30000/1001 for 29.97. The -movflags +faststart option moves the moov atom to the front of the file, which is what the "leading mov atom" spec refers to.
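For example, to read the source frame rate before picking a value for -r (ffprobe ships with ffmpeg):

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 input.webm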
Based on info I found elsewhere, this seems to be what Instagram requires:
ffmpeg.exe -i input.webm -vf scale=640:640,setsar=1 -c:v libx264 -preset slow -profile:v main -level 3.1 -pix_fmt yuv420p -r 30000/1001 -c:a aac -strict experimental -ar 44100 -ac 1 -b:a 64k -t 15 -movflags +faststart output.mp4