I am using ffmpeg to overlay an image on top of a live stream using filter graphs, but when the input resolution changes, the overlay vanishes

I did my research and found that some filter graph options do not adapt to changing resolutions.
https://lists.ffmpeg.org/pipermail/libav-user/2012-October/002920.html
Here is the command I am using. Whenever my input video changes from portrait to landscape, the overlay vanishes. I would really appreciate any help here.
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr] overlay=0:0, split=4[a][b]" -async 1 -vsync -1 -map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low -map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid

FFmpeg reinitializes the filtergraph when input properties change. By that point the image input, which is only one frame long, has already been fully consumed, so the rebuilt graph receives nothing to overlay.
Loop the image:
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -loop 1 -i $overlayUrl ...
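A minimal sketch of the adjusted command, keeping the rest of the options from the question (split is set to 2 here so it matches the two mapped outputs):
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -loop 1 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr]overlay=0:0,split=2[a][b]" -async 1 -vsync -1
-map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low
-map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid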

Related

I am using ffmpeg for overlaying an image on a live stream. How can I scale according to the width of my screen so that it fits completely?

I am scaling the output from the complex filter to different standard resolutions using the -s flag, but the result is that the video does not fit completely into my output screen. How can I scale the different outputs dynamically according to the screen? Here is my command.
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr] overlay=0:0, split=4[a][b]" -async 1 -vsync -1 -map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low -map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid
Your output has to match the aspect ratio of the screen resolution. The only practical solution is to provide a best guess that matches the most common screen size of your viewers; the others will get letterboxing/pillarboxing to fit the screen.
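For example, instead of stretching with -s, each output can be scaled to fit and then padded to the guessed screen size (a sketch assuming a 640x360 target; input and output names are placeholders):
ffmpeg -i input.flv -vf "scale=640:360:force_original_aspect_ratio=decrease,pad=640:360:(ow-iw)/2:(oh-ih)/2" -c:v libx264 -c:a aac -f flv rtmp://$rtmpoutput/$2_low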

FFMPEG: Combine "Create video from images" + scale to x + add audio + overlay logo

I'm working on a webcam project. It is for generating timelapse videos of sunrise/sunset.
I'm using a Raspberry Pi to generate them with gphoto2 + a DSLR.
At the end of the day the images should be turned into a video, with audio and an overlay logo.
And it should be scaled to 1920 pixels wide.
I got a nice solution and it worked.
(1) Producing the timelapse video and scaling it:
ffmpeg -y -framerate 25 -start_number 0000001 -i /var/www/html/webcam/2020-01-05_bilder/%7d.jpg -vf scale=1920:-1 -pix_fmt yuv420p /var/www/html/webcam/2020-01-05-tag-output-1920.mp4
(2) Taking the output of (1), adding an overlay logo and adding audio:
ffmpeg -y -i '/var/www/html/webcam/2020-01-05-tag-output-1920.mp4'
-i '/var/www/html/webcam-scripts/graphics/logo.png'
-i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3'
-shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)'
-c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2
'/var/www/html/webcam/2020-01-05-tag-1920.mp4'
I tried to combine both actions, but I get an error:
ffmpeg -y -framerate 25 -start_number 0000001 -i '/var/www/html/webcam/2020-01-05_bilder/%7d.jpg' -vf scale=1920:-1 -pix_fmt yuv420p -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)' -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2 '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
Error: Filtergraph 'scale=720:-1' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
Isn't it possible to combine these inputs and scale them? Or... where is my misunderstanding?
Don't mix -vf and -filter_complex. Do all filtering in one filtergraph.
ffmpeg -y -framerate 25 -i '/var/www/html/webcam/2020-01-05_bilder/%7d.jpg' -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -filter_complex '[0]scale=1920:-2[v0];[1][v0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01),format=yuv420p' -c:v libx264 -crf 18 -preset slow -c:a aac -shortest '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
No need for -strict -2. It does nothing in modern ffmpeg.
I replaced -pix_fmt yuv420p with format=yuv420p at the end of the chain so the conversion stays inside the filtergraph.
-start_number 0000001 is not needed because 1 is the default.
scale=1920:-1 was changed to scale=1920:-2 so the computed height stays divisible by 2, which libx264 requires with yuv420p.

FFmpeg using complex filter amerge doesn't play on iOS

I am using 2 different FFmpeg commands to add audio to a video:
This adds audio and replaces the video's existing audio:
ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -map 0:v -map 1:a -shortest -vcodec libx264 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"
It works fine.
The problem comes when I try to mix the new audio with the video's existing audio:
ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -shortest -vcodec libx264 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"
The video plays fine everywhere except on iOS. I've tried adding -profile:v main -level 3.1 and -profile:v baseline -level 3.1, but no luck with either:
ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -shortest -vcodec libx264 -profile:v baseline -level 3.1 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"
What do I need to do to make the output video play on iOS?
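One thing worth checking (an assumption on my part, not confirmed in the thread): amerge of two stereo inputs produces a 4-channel stream, and iOS players generally handle only mono or stereo AAC. Downmixing the merged audio to stereo with -ac 2 (or mixing with amix instead of amerge) keeps the output stereo, for example:
ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -ac 2 -shortest -vcodec libx264 -profile:v main -level 3.1 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"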

FFMpeg combine two separate commands

I am running 2 separate ffmpeg commands:
ffmpeg -i video.mp4 -vf scale=1024:768 -crf 0 output_video.mp4
ffmpeg -i output_video.mp4 -s 640x360 -c:v libx264 -preset slow -b:v 650k -r 24 -x264opts keyint=48:min-keyint=48:no-scenecut -profile:v main -preset fast -movflags +faststart -c:a libfdk_aac -b:a 128k -ac 2 out-low.mp4
Is there a way I can do both of these commands in one go? I am trying to avoid 2 encoding sessions reducing the quality.
Label filter outputs and refer to them in the -map option:
ffmpeg -i video.mp4 -filter_complex "[0:v]scale=1024:768[v768];[0:v]scale=640:360[v360]"
-map "[v768]" -map 0:a -c:v libx264 -c:a copy -crf 0 output_video.mp4
-map "[v360]" -map 0:a -c:v libx264 -preset slow -b:v 650k -r 24 -x264opts keyint=48:min-keyint=48:no-scenecut -profile:v main -preset fast -movflags +faststart -c:a libfdk_aac -b:a 128k -ac 2 out-low.mp4

FFmpeg keep same video dimension if video height is less than x480

The following command is what I'm using for video conversion:
ffmpeg -i input.mkv -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart -s hd480 output.mp4
I want to keep the same video dimensions for smaller videos. For example, keep the 640x360 dimensions if the video height is less than 480.
Does ffmpeg have such an option?
Use a conditional expression in the scale filter: if the input height is greater than 480, scale down to a height of 480 (with the width following the aspect ratio, rounded to even); otherwise keep the original dimensions:
ffmpeg -i input.mkv
-vf "scale=w='if(gt(ih,480),2*trunc(oh*a/2),iw)':h='if(gt(ih,480),480,ih)'"
-c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart output.mp4
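A shorter near-equivalent, assuming it is acceptable for the width to be recomputed (and rounded to even) in all cases and that the source height is even:
ffmpeg -i input.mkv -vf "scale=-2:'min(ih,480)'" -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart output.mp4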
