I am using ffmpeg for overlaying an image on a live stream. How can I scale according to the width of my screen so that it fits completely? - ffmpeg

I am scaling the output from the complex filter to different standard resolutions using the -s flag, but the result is that the video does not fit completely into my output screen. How can I scale the different outputs dynamically according to the screen? Here is my command.
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr] overlay=0:0, split=2[a][b]" -async 1 -vsync -1 -map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low -map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid

Your output has to match the aspect ratio of the screen resolution. The only practical solution is to provide a best guess that matches the most common screen size among your viewers. The others will get letterboxing/pillarboxing to fit the screen.
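As a sketch (not part of the original answer): pick the frame size you expect most viewers to have and letterbox/pillarbox inside the filtergraph with scale+pad instead of -s, e.g. for the 640x360 rendition:
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -i $overlayUrl \
-filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr]overlay=0:0,scale=640:360:force_original_aspect_ratio=decrease,pad=640:360:(ow-iw)/2:(oh-ih)/2[vout]" \
-map "[vout]" -map 0:a -c:v libx264 -c:a aac -b:v 256k -b:a 32k -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low
The pad filter fills the leftover area with black by default; the 640x360 target here is just an example.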

Related

I am using ffmpeg to overlay an image on top of a live stream using filter graphs, but when the input resolution changes, the overlay vanishes

I did my research and found that some filter graph options do not adapt to changing resolutions.
https://lists.ffmpeg.org/pipermail/libav-user/2012-October/002920.html
Here is the command I am using. Whenever my input video changes from portrait to landscape, the overlay vanishes. I would really appreciate any help here.
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -i $overlayUrl -filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr] overlay=0:0, split=2[a][b]" -async 1 -vsync -1 -map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low -map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid
FFmpeg reinitializes the filtergraph when input properties change. The image input is one frame long and has already been consumed.
Loop the image.
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -loop 1 -i $overlayUrl ...
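Put into the full command from the question, that looks roughly like this (only -loop 1 is new; split is set to the two labels actually used):
ffmpeg -i rtmp://127.0.0.1:1935/show/$2 -loop 1 -i $overlayUrl \
-filter_complex "[1][0]scale2ref=iw:ih[ovr][base];[base][ovr]overlay=0:0,split=2[a][b]" \
-async 1 -vsync -1 \
-map 0:a -map "[a]" -c:v libx264 -c:a aac -b:v 256k -b:a 32k -s 640x360 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_low \
-map 0:a -map "[b]" -c:v libx264 -c:a aac -b:v 768k -b:a 96k -s 640x480 -tune zerolatency -r 60 -preset veryfast -crf 23 -f flv rtmp://$rtmpoutput/$2_mid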

FFMPEG -re Insurance

According to the ffmpeg docs, -re means:
Read input at native frame rate. Mainly used to simulate a grab device, or live input stream (e.g. when reading from a file). Should not be used with actual grab devices or live input streams (where it can cause packet loss).
My ffmpeg stream command is:
ffmpeg -re -i https://www.example.com/video.mp4 -filter_complex tpad=start_duration=10:stop_duration=15:start_mode=add:color=black:stop_mode=add -af "adelay=10000|10000" -maxrate 2M -crf 24 -bufsize 6000k -c:v libx264 -preset superfast -tune zerolatency -strict -2 -c:a aac -ar 44100 -attempt_recovery 1 -max_recovery_attempts 5 -drop_pkts_on_overflow 1 -f flv rtmp://live.example.com/123453
Except this does not always work, and sometimes I have livestreams ending early because ffmpeg is playing faster than the frame rate. Is there another command that can be used to ensure ffmpeg streams the video in real time?
Remove -re and add filter-based throttling.
ffmpeg -i https://www.example.com/video.mp4 -filter_complex tpad=start_duration=10:stop_duration=15:start_mode=add:color=black:stop_mode=add,fifo,realtime -af "adelay=10000|10000,afifo,arealtime" -maxrate 2M -crf 24 -bufsize 6000k -c:v libx264 -preset superfast -tune zerolatency -strict -2 -c:a aac -ar 44100 -attempt_recovery 1 -max_recovery_attempts 5 -drop_pkts_on_overflow 1 -f flv rtmp://live.example.com/123453
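(The realtime/arealtime filters hold each frame until its timestamp comes due, and fifo/afifo buffer frames ahead of them, so the pacing happens inside the filtergraph rather than at the input reader. That is what keeps the output from running ahead of real time.)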

FFMPEG: Combine "Create video from images" + scale to x + add audio + overlay logo

I'm working on a webcam project. It is for generating timelapse videos of sunset/sundown.
I'm using a Raspberry Pi to generate them with gphoto2 + a DSLR.
At the end of the day the images should be turned into a video, with audio and an overlay logo.
And it should be scaled to 1920 pixels wide.
I got a nice solution and it worked.
Producing the timelapse video and scaling it:
ffmpeg -y -framerate 25 -start_number 0000001 -i /var/www/html/webcam/2020-01-05_bilder/%07d.jpg -vf scale=1920:-1 -pix_fmt yuv420p /var/www/html/webcam/2020-01-05-tag-output-1920.mp4
Taking the output of (1) and adding an overlay logo and audio:
ffmpeg -y -i '/var/www/html/webcam/2020-01-05-tag-output-1920.mp4' \
-i '/var/www/html/webcam-scripts/graphics/logo.png' \
-i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' \
-shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)' \
-c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2 \
'/var/www/html/webcam/2020-01-05-tag-1920.mp4'
I tried to combine both actions, but I get an error:
ffmpeg -y -framerate 25 -start_number 0000001 -i '/var/www/html/webcam/2020-01-05_bilder/%07d.jpg' -vf scale=1920:-1 -pix_fmt yuv420p -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)' -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2 '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
Error: Filtergraph 'scale=720:-1' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
Isn't it possible to combine these inputs and scale them? Or... where is my misunderstanding?
Don't mix -vf and -filter_complex. Do all filtering in one filtergraph.
ffmpeg -y -framerate 25 -i '/var/www/html/webcam/2020-01-05_bilder/%07d.jpg' -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -filter_complex '[0]scale=1920:-2[v0];[1][v0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01),format=yuv420p' -c:v libx264 -crf 18 -preset slow -c:a aac -shortest '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
No need for -strict -2. The native AAC encoder is no longer experimental, so that flag does nothing in modern ffmpeg.
I replaced -pix_fmt yuv420p with a format=yuv420p filter at the end of the chain, so the pixel format conversion happens inside the single filtergraph.
-start_number 0000001 is not needed: by default ffmpeg checks starting indices 0-4, so it finds the sequence on its own.

FFMpeg combine two separate commands

I am running 2 separate ffmpeg commands:
ffmpeg -i video.mp4 -vf scale=1024:768 -crf 0 output_video.mp4
ffmpeg -i output_video.mp4 -s 640x360 -c:v libx264 -preset slow -b:v 650k -r 24 -x264opts keyint=48:min-keyint=48:no-scenecut -profile:v main -preset fast -movflags +faststart -c:a libfdk_aac -b:a 128k -ac 2 out-low.mp4
Is there a way I can do both of these commands in one go? I'm trying to avoid two encoding sessions, which reduces the quality.
Label filter outputs and refer to them in the -map option:
ffmpeg -i video.mp4 -filter_complex "[0:v]scale=1024:768[v768];[0:v]scale=640:360[v360]" \
-map "[v768]" -map 0:a -c:v libx264 -c:a copy -crf 0 output_video.mp4 \
-map "[v360]" -map 0:a -c:v libx264 -preset slow -b:v 650k -r 24 -x264opts keyint=48:min-keyint=48:no-scenecut -profile:v main -preset fast -movflags +faststart -c:a libfdk_aac -b:a 128k -ac 2 out-low.mp4

FFmpeg keep same video dimension if video height is less than x480

The following command is what I'm using for video conversion:
ffmpeg -i input.mkv -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart -s hd480 output.mp4
I want to keep the same video dimensions for smaller videos. For example, keep the 640x360 dimensions if the video height is less than 480.
Does ffmpeg have such an option?
Use
ffmpeg -i input.mkv \
-vf "scale=w='if(gt(ih,480),2*trunc(oh*a/2),iw)':h='if(gt(ih,480),480,ih)'" \
-c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart output.mp4
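If preserving the exact input width on small videos is not critical, a shorter variant of the same idea (a sketch, assuming the input width is even; not part of the answer above) caps the height at 480 and lets scale derive an even width:
ffmpeg -i input.mkv -vf "scale=-2:'min(480,ih)'" -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart output.mp4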
