Burning subtitles at different resolutions - ffmpeg

I'm trying to create a live stream with three quality profiles at different resolutions (SD, HD, and Full HD). The live stream has subtitles, and I have to burn them in for compatibility reasons.
I know how to do it with one profile, but I have no idea how to do it with several.
ffmpeg -nostdin -loglevel error -hwaccel cuvid -deint 2 -drop_second_field 1 -surfaces 15 -c:v h264_cuvid -resize 1280x720 -y -i udp://xxx.xxx.xxx.xxx:xxxxx?pkt_size=1316\&buffer_size=409600\&fifo_size=1000000\&overrun_nonfatal=1 -filter_complex [i:0x2c6]hwdownload,format=nv12[base];[i:0x993]setpts=(2.5)/TB+PTS[subs];[subs]scale=1280:720[subtitle];[base][subtitle]overlay[v];[v]hwupload_cuda[v] -map [v] -c:v hevc_nvenc -preset llhq -rc vbr_hq -cq 23 -qp 23 -tier high -profile:v main10 -level 4.0 -b:v 2000k -maxrate 2400k -bufsize 1000k -map i:0x2bd -c:a libfdk_aac -ac 2 -b:a 64k -map i:0x2be -c:a libfdk_aac -ac 2 -b:a 64k -metadata:s:a:0 language=eng -metadata:s:a:1 language=spa -f mpegts -mpegts_flags resend_headers+pat_pmt_at_frames -mpegts_copyts 1 -pcr_period 40 udp://yyy.yyy.yyy.yyy:yyyy?ttl=31\?pkt_size=1316\&buffer_size=409600\&fifo_size=1000000\&overrun_nonfatal=1
Apparently, ffmpeg doesn't allow using -vf together with -filter_complex.
I'm using ffmpeg 3.4 and CUDA 8.

Use
ffmpeg -nostdin -loglevel error -hwaccel cuvid -deint 2 -drop_second_field 1 -surfaces 15
-c:v h264_cuvid -y -i udp://xxx.xxx.xxx.xxx:xxxxx?pkt_size=1316\&buffer_size=409600\&fifo_size=1000000\&overrun_nonfatal=1
-filter_complex "[i:0x2c6]hwdownload,format=nv12,split=3[fhd][hd][sd];
[i:0x993]setpts=(2.5)/TB+PTS,split=3[subfhd][subhd][subsd];
[fhd]scale=1920:1080[fhd];
[hd]scale=1280:720[hd];
[sd]scale=960:540[sd];
[subfhd]scale=1920:1080[subfhd];
[subhd]scale=1280:720[subhd];
[subsd]scale=960:540[subsd];
[fhd][subfhd]overlay,hwupload_cuda[v-fhd];
[hd][subhd]overlay,hwupload_cuda[v-hd];
[sd][subsd]overlay,hwupload_cuda[v-sd]"
-map [v-fhd] -map [v-hd] -map [v-sd] -c:v hevc_nvenc -preset llhq -rc vbr_hq -cq 23 -qp 23 -tier high
-profile:v main10 -level 4.0 -b:v 2000k -maxrate 2400k -bufsize 1000k
-map i:0x2bd -map i:0x2be -c:a libfdk_aac -ac 2 -b:a 64k
-metadata:s:a:0 language=eng -metadata:s:a:1 language=spa
-f mpegts -mpegts_flags resend_headers+pat_pmt_at_frames -mpegts_copyts 1
-pcr_period 40 udp://yyy.yyy.yyy.yyy:yyyy?ttl=31\?pkt_size=1316\&buffer_size=409600\&fifo_size=1000000\&overrun_nonfatal=1
You'll have to adjust the video bitrates and buffer sizes as required, but this is the basic command template.
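Also note that -b:v, -maxrate and -bufsize without an index apply to every mapped video stream; to give each rendition its own rate, you can index them per output video stream. A rough sketch with illustrative numbers of my own (adjust to your targets):
-b:v:0 5000k -maxrate:v:0 6000k -bufsize:v:0 2500k
-b:v:1 2000k -maxrate:v:1 2400k -bufsize:v:1 1000k
-b:v:2 1000k -maxrate:v:2 1200k -bufsize:v:2 500k
Here :v:0, :v:1 and :v:2 refer to the output video streams in -map order, i.e. [v-fhd], [v-hd] and [v-sd].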

Related

FFmpeg filter complex audio

How do I add the AAC audio to the filter_complex/split so the audio is only encoded once, like the yadif'd video?
ffmpeg -y -hwaccel cuvid -i test.mxf -filter_complex "[0:v]yadif=1,split=2[out1][out2]" -map "[out1]" -s 1920:1080 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -pix_fmt yuv420p -preset slow -rc vbr_hq -b:v 4.5M -map 0:1 -c:a aac -b:a 192k test2.mp4 -map "[out2]" -s 768:432 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -pix_fmt yuv420p -preset slow -rc vbr_hq -b:v 1.5M -map 0:1 -c:a aac -b:a 192k test3.mp4
Your video has to be encoded twice, which is unavoidable because you are outputting two different resolutions. Your audio is the same for each output, so you can use the tee muxer to encode the audio only once and put it in both outputs:
ffmpeg -hwaccel cuvid -i test.mxf -filter_complex "[0:v]yadif=1,format=yuv420p,split=2[vid1][vid2];[vid1]scale=-2:1080[1080];[vid2]scale=-2:432[432]" -map "[1080]" -map "[432]" -map 0:a:0 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -preset slow -rc vbr_hq -b:v:0 4.5M -b:v:1 1.5M -c:a aac -b:a 192k -f tee "[select=\'v:0,a\']1080.mp4|[select=\'v:1,a\']432.mp4"

I'm trying to stream my screen to multiple destinations with ffmpeg and I get an error

My command:
ffmpeg -thread_queue_size 1024 -f x11grab -draw_mouse 0 -video_size 1920x1080 -i :99.0+0,0 -f alsa -i pulse -channels 2 -c:a aac -b:a 160k -ar 44100 -threads 8 -c:v libx264 -x264-params nal-hrd=cbr -profile:v baseline -framerate 30 -level:v 4.2 -vf format=yuv420p -b:v 1000k -maxrate 1500k -minrate 1000k -bufsize 8000k -g 60 -preset ultrafast -tune zerolatency -f tee -flags +global_header -map 0:v -map 1:a "[f=flv:onfail=ignore]rtmp://stream1|[f=flv:onfail=ignore]rtmp://stream2"
Error in console:
[NULL # 0x55aaf9a54bc0] Unable to find a suitable output format for '"[f=flv:onfail=ignore]rtmp://strea1
[tee # 0x55aaf96f3200] Slave muxer #0 failed, aborting.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:1 --
ffmpeg -thread_queue_size 1024 -f x11grab -draw_mouse 0 -video_size 1920x1080 -i :99.0+0,0 -f alsa -i pulse -channels 2 -c:a aac -b:a 160k -ar 44100 -threads 8 -c:v libx264 -x264-params nal-hrd=cbr -profile:v baseline -framerate 30 -level:v 4.2 -vf format=yuv420p -b:v 1000k -maxrate 2500k -minrate 800k -bufsize 8000k -g 60 -preset ultrafast -tune zerolatency -map 0:v -map 1:a -f tee -flags +global_header [f=flv:onfail=ignore:flvflags=no_duration_filesize]rtmp://stream1|[f=flv:onfail=ignore:flvflags=no_duration_filesize]rtmp://stream2
The problem was with the quotes: the literal " in the error message shows they were being passed to ffmpeg as part of the output name rather than being stripped by a shell, so dropping them (as in the command above) fixed it.
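As a sketch of when each form is needed (assuming bash for the shell case):
# from an interactive shell: quote the tee output list so | is not treated as a pipe
-f tee "[f=flv:onfail=ignore]rtmp://stream1|[f=flv:onfail=ignore]rtmp://stream2"
# when ffmpeg is exec'd directly (no shell): pass the argument unquoted, the | goes through as-is
-f tee [f=flv:onfail=ignore]rtmp://stream1|[f=flv:onfail=ignore]rtmp://stream2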

How to concatenate and output various video bitrates and a standalone audio file?

How to concatenate and output various video bitrates and a standalone audio file in ffmpeg?
My requirement is:
I have 4 input files.
I need to stitch all 4 files into a single segment.
I need output at four different video bitrates: 500k, 800k, 1000k, and 1500k.
Along with that, I need to extract an audio-only version of the stitched file.
So my output will be 4 video files at different bitrates + 1 audio-only file.
tee muxer
The most efficient method is to use the tee muxer (more examples) to avoid unnecessarily encoding the audio for each output, but it is complicated to use:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][aud];[v]split=4[v0][v1][v2][v3]" -map "[v0]" -b:v:0 500k -map "[v1]" -b:v:1 800k -map "[v2]" -b:v:2 1000k -map "[v3]" -b:v:3 1500k -map "[aud]" -c:v libx264 -c:a aac -f tee "[select=\'v:0,aud\':movflags=faststart]500.mp4|[select=\'v:1,aud\':movflags=faststart]800.mp4|[select=\'v:2,aud\':movflags=faststart]1000.mp4|[select=\'v:3,aud\':movflags=faststart]1500.mp4|[select=aud:movflags=faststart]audio.m4a"
This example doesn't perform two passes, which you should do when manually choosing the bitrate (the old-school method) for non-streaming outputs. See FFmpeg Wiki: H.264.
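If exact bitrate targets aren't actually required, a single-pass alternative (just a sketch of the CRF rate control the wiki generally recommends for file outputs) is:
-c:v libx264 -crf 23 -preset medium
Lower CRF values give higher quality at larger file sizes; for a fixed bitrate ladder like the one above, stick with the two-pass approach.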
simpler but less efficient method
You can use a much less complicated command, but it will be less efficient because it encodes the audio separately for each output. This is possibly worth the tradeoff for less complexity.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a];[v]split=4[v0][v1][v2][v3];[a]asplit=4[a0][a1][a2][a3]" -map "[v0]" -map "[a0]" -b:v 500k -movflags +faststart 500.mp4 -map "[v1]" -map "[a1]" -c:v libx264 -c:a aac -b:v 800k -movflags +faststart 800.mp4 -map "[v2]" -map "[a2]" -b:v 1000k -movflags +faststart 1000.mp4 -map "[v3]" -map "[a3]" -b:v 1500k -movflags +faststart 1500.mp4
But since you want to target specific bitrates, you should perform two passes:
ffmpeg -y -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[v];[v]split=4[v0][v1][v2][v3]" -map "[v0]" -b:v 500k -pass 1 -passlogfile 500 -f mp4 /dev/null -map "[v1]" -c:v libx264 -c:a aac -b:v 800k -pass 1 -passlogfile 800 -f mp4 /dev/null -map "[v2]" -c:v libx264 -c:a aac -b:v 1000k -pass 1 -passlogfile 1000 -f mp4 /dev/null -map "[v3]" -c:v libx264 -c:a aac -b:v 1500k -pass 1 -passlogfile 1500 -f mp4 /dev/null
ffmpeg -y -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a];[v]split=4[v0][v1][v2][v3];[a]asplit=5[a0][a1][a2][a3][a4]" -map "[v0]" -map "[a0]" -c:v libx264 -c:a aac -b:v 500k -pass 2 -passlogfile 500 -movflags +faststart 500.mp4 -map "[v1]" -map "[a1]" -c:v libx264 -c:a aac -b:v 800k -pass 2 -passlogfile 800 -movflags +faststart 800.mp4 -map "[v2]" -map "[a2]" -c:v libx264 -c:a aac -b:v 1000k -pass 2 -passlogfile 1000 -movflags +faststart 1000.mp4 -map "[v3]" -map "[a3]" -c:v libx264 -c:a aac -b:v 1500k -pass 2 -passlogfile 1500 -movflags +faststart 1500.mp4 -map "[a4]" -movflags +faststart audio.m4a
If you're using Windows replace /dev/null with NUL in the examples above.

FFMpeg combine two separate commands

I am running 2 separate ffmpeg commands:
ffmpeg -i video.mp4 -vf scale=1024:768 -crf 0 output_video.mp4
ffmpeg -i output_video.mp4 -s 640x360 -c:v libx264 -preset slow -b:v 650k -r 24 -x264opts keyint=48:min-keyint=48:no-scenecut -profile:v main -preset fast -movflags +faststart -c:a libfdk_aac -b:a 128k -ac 2 out-low.mp4
Is there a way I can do both of these commands in one go? I'm trying to avoid 2 encoding sessions reducing the quality.
Label filter outputs and refer to them in the -map option:
ffmpeg -i video.mp4 -filter_complex "[0:v]scale=1024:768[v768];[0:v]scale=640:360[v360]"
-map "[v768]" -map 0:a -c:v libx264 -c:a copy -crf 0 output_video.mp4
-map "[v360]" -map 0:a -c:v libx264 -preset slow -b:v 650k -r 24 -x264opts keyint=48:min-keyint=48:no-scenecut -profile:v main -preset fast -movflags +faststart -c:a libfdk_aac -b:a 128k -ac 2 out-low.mp4

What is difference between using ffmpeg GOP setting or x264opt keyint combination?

Here is an ffmpeg command:
ffmpeg -i input.mp4 -flags +global_header -c:v libx264 -vf scale="864x486",setsar=1:1,setdar=16:9 -profile:v main -level 31 -g 50 -keyint_min 50 -sc_threshold 0 -b:v 700k -pix_fmt yuv420p -c:a libfdk_aac -ar 44100 -ac 2 -b:a 128k output2.mp4
and here is the other one:
ffmpeg -i input.mp4 -flags +global_header -c:v libx264 -vf scale="864x486",setsar=1:1,setdar=16:9 -x264opts keyint=50:min-keyint=50:no-scenecut -profile:v main -level 31 -b:v 700k -pix_fmt yuv420p -c:a libfdk_aac -ar 44100 -ac 2 -b:a 128k output2.mp4
Are there any differences between using the -g 50 -keyint_min 50 -sc_threshold 0 and -x264opts keyint=50:min-keyint=50:no-scenecut settings to get a constant keyframe interval?
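To check whether either variant actually produces a constant keyframe interval, you can list just the keyframe timestamps with ffprobe (a sketch; the timestamp field is pkt_pts_time in older ffprobe versions and pts_time in newer ones):
# prints one timestamp per keyframe; with keyint=50 at 25 fps they should be 2 seconds apart
ffprobe -loglevel error -select_streams v:0 -skip_frame nokey -show_entries frame=pkt_pts_time -of csv=print_section=0 output2.mp4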
