I have a command that outputs 4 MP4 files.
I would like to add silent audio to the outputs.
I have tried to insert anullsrc=cl=mono:sample_rate=48000, but I don't really know where it should go, and it gives me an error.
ffmpeg -hwaccel_output_format cuda -i test.mxf -filter_complex "[0:v]yadif=1,format=yuv420p,split=4[vid1][vid2][vid3][vid4];[vid1]scale=-2:1080[1080];[vid2]scale=-2:432[432];[vid3]scale=-2:288[288];[vid4]scale=-2:216[216]" -map "[1080]" -map "[432]" -map "[288]" -map "[216]" -map 0:a:0 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -preset slow -rc vbr_hq -b:v:0 4.5M -b:v:1 1.5M -b:v:2 1.0M -b:v:3 0.5M -c:a aac -b:a 192k -f tee "[select=\'v:0,a\']1080.mp4|[select=\'v:1,a\']432.mp4|[select=\'v:2,a\']288.mp4|[select=\'v:3,a\']216.mp4"
You would add the anullsrc as a lavfi input and then map it.
You then either have to add -shortest or -t X, where X is the duration of the video, because anullsrc generates audio indefinitely and would otherwise keep the encode running.
ffmpeg -hwaccel_output_format cuda -i test.mxf -f lavfi -i "anullsrc=cl=mono:sample_rate=48000" -filter_complex "[0:v]yadif=1,format=yuv420p,split=4[vid1][vid2][vid3][vid4];[vid1]scale=-2:1080[1080];[vid2]scale=-2:432[432];[vid3]scale=-2:288[288];[vid4]scale=-2:216[216]" -map "[1080]" -map "[432]" -map "[288]" -map "[216]" -map 0:a:0? -map 1:a -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -preset slow -rc vbr_hq -b:v:0 4.5M -b:v:1 1.5M -b:v:2 1.0M -b:v:3 0.5M -c:a aac -b:a 192k -shortest -f tee "[select=\'v:0,a\']1080.mp4|[select=\'v:1,a\']432.mp4|[select=\'v:2,a\']288.mp4|[select=\'v:3,a\']216.mp4"
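If it helps to test the silent-audio part on its own first, here is a stripped-down sketch of the same idea with a single output and the video stream-copied (input.mp4 and silent.mp4 are placeholder names):
ffmpeg -i input.mp4 -f lavfi -i "anullsrc=cl=mono:sample_rate=48000" -map 0:v:0 -map 1:a -c:v copy -c:a aac -b:a 192k -shortest silent.mp4
Once that behaves as expected, the same two pieces (the extra lavfi input and -shortest) slot into the tee command above.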
Related
How do I add the AAC audio to the filter_complex/split so that the audio is only encoded once, the same way the yadif'd video is?
ffmpeg -y -hwaccel cuvid -i test.mxf -filter_complex "[0:v]yadif=1,split=2[out1][out2]" -map "[out1]" -s 1920:1080 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -pix_fmt yuv420p -preset slow -rc vbr_hq -b:v 4.5M -map 0:1 -c:a aac -b:a 192k test2.mp4 -map "[out2]" -s 768:432 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -pix_fmt yuv420p -preset slow -rc vbr_hq -b:v 1.5M -map 0:1 -c:a aac -b:a 192k test3.mp4
Your video is being encoded twice, which is unavoidable because you are outputting two different resolutions. Your audio is the same for each output, so you can use the tee muxer to encode the audio only once and put it in both outputs:
ffmpeg -hwaccel cuvid -i test.mxf -filter_complex "[0:v]yadif=1,format=yuv420p,split=2[vid1][vid2];[vid1]scale=-2:1080[1080];[vid2]scale=-2:432[432]" -map "[1080]" -map "[432]" -map 0:a:0 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -preset slow -rc vbr_hq -b:v:0 4.5M -b:v:1 1.5M -c:a aac -b:a 192k -f tee "[select=\'v:0,a\']1080.mp4|[select=\'v:1,a\']432.mp4"
I would like to create several output videos with different resolutions but the same audio. As far as I know, audio encoding is an output option.
ffmpeg \
-hwaccel qsv -c:v h264_qsv \
-i <input> \
-filter_complex '[0:a]aformat=channel_layouts=stereo,aresample=async=1,asplit=3[a1][a2][a3];[0:v]vpp_qsv=detail=50:framerate=25,split=3[v1][v2][v3];[v2]vpp_qsv=width=1280[v2o];[v3]vpp_qsv=width=800[v3o]' \
-c:v h264_qsv -c:a aac -b:a 96k -map '[v1]' -map '[a1]' <output> \
-c:v h264_qsv -c:a aac -b:a 96k -map '[v2o]' -map '[a2]' <output> \
-c:v h264_qsv -c:a aac -b:a 96k -map '[v3o]' -map '[a3]' <output>
Above, I have two redundant audio encodings.
How can I encode the audio just once and copy it to the different outputs?
Use the tee muxer:
ffmpeg \
-hwaccel qsv -c:v h264_qsv -i <input> \
-filter_complex '[0:a]aformat=channel_layouts=stereo,aresample=async=1[a];[0:v]vpp_qsv=detail=50:framerate=25,split=3[v1][v2][v3];[v2]vpp_qsv=width=1280[v2o];[v3]vpp_qsv=width=800[v3o]' \
-map '[v1]' -map '[v2o]' -map '[v3o]' -map '[a]' \
-c:v h264_qsv -c:a aac -b:a 96k -f tee -flags +global_header \
"[select=\'v:0,a\']output.mkv|[select=\'v:1,a\':f=flv:onfail=ignore]rtmp://server0/app/instance/playpath|[select=\'v:2,a\':movflags=+faststart]output.mp4"
I found a workaround that adds some muxing and de-muxing overhead but saves encoding work in the long run:
ffmpeg -i <input> \
-af 'aformat=channel_layouts=stereo,aresample=async=1' \
-c:a libopus -b:a 64k -ar 48k \
-c:v copy \
-f mpegts - | \
ffmpeg \
-hwaccel qsv -c:v h264_qsv \
-f mpegts -i - \
-filter_complex '[0:v]vpp_qsv=detail=50:framerate=25,split=3[v][v2][v3];[v2]vpp_qsv=width=1280[720p];[v3]vpp_qsv=width=800[450p]' \
-map '[v]' -map 0:a -c:v h264_qsv -c:a copy <output> \
-map '[720p]' -map 0:a -c:v h264_qsv -c:a copy <output> \
-map '[450p]' -map 0:a -c:v h264_qsv -c:a copy <output>
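To see which audio stream actually landed in one of the outputs (it should show the Opus track produced once by the first ffmpeg in the pipe), a quick ffprobe check along these lines can be used; output1.mkv is a placeholder name for one of the outputs:
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name,sample_rate,channels -of default=noprint_wrappers=1 output1.mkv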
The script I use to add a logo:
ffmpeg -i input.mp4 -framerate 30000/1001 -loop 1 -i test.png \
-filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; \
[0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a \
-c:v libx264 -c:a copy -shortest output.mp4
The command I use to convert video (with this one command I get the WebM, the MP4, and the thumbnail picture together):
ffmpeg -i input.wmv -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis \
outputwebm.webm -c:v libx264 -crf 35 outputmp4.mp4 \
-vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png
I want to add the logo image within that same single command.
The command I tried:
ffmpeg -i input.wmv -c:v libvpx -crf 10 -b:v 1M \
-c:a libvorbis outputwebm.webm -c:v libx264 \
-crf 35 -framerate 30000/1001 -loop 1 -i test.png \
-filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; \
[0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a \
-c:v libx264 -c:a copy -shortest outputmp4.mp4 \
-vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png
Group all inputs at the front of the command and remove the encoding for the temp MP4 file.
ffmpeg -i input.wmv -framerate 30000/1001 -loop 1 -i test.png -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis outputwebm.webm -filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; [0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a -c:v libx264 -c:a copy -shortest outputmp4.mp4 -vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png
If your PNG has a greater resolution than the WMV, then you'll need to map the video for the WebM and PNG outputs (as sketched below); otherwise ffmpeg's default stream selection will pick the larger PNG as the video for those outputs.
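For illustration, a rough sketch of that explicit mapping, reusing the filenames and codec settings from the answer above unchanged (-map 0:v:0 pins the WMV's video for the WebM and PNG outputs, and 0:a? makes the WebM audio map optional in case the WMV has no audio track):
ffmpeg -i input.wmv -framerate 30000/1001 -loop 1 -i test.png -map 0:v:0 -map 0:a? -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis outputwebm.webm -filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; [0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a -c:v libx264 -c:a copy -shortest outputmp4.mp4 -map 0:v:0 -vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png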
How to concatenate and output various video bitrates and a standalone audio file in ffmpeg?
My requirement is:
I have 4 input files.
I need to stitch all 4 files into a single segment.
I need output at four different video bitrates: 500k, 800k, 1000k, 1500k.
Along with that, I need to extract only the audio from the stitched file.
So my output will be 4 files at different video bitrates + 1 audio-only file.
tee muxer
The most efficient method is to use the tee muxer to avoid unnecessarily encoding the audio for each output, but it is complicated to use (note that the select values in the tee string are stream specifiers such as v:0 and a referring to the mapped output streams, not filtergraph labels):
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][aud];[v]split=4[v0][v1][v2][v3]" -map "[v0]" -b:v:0 500k -map "[v1]" -b:v:1 800k -map "[v2]" -b:v:2 1000k -map "[v3]" -b:v:3 1500k -map "[aud]" -c:v libx264 -c:a aac -f tee "[select=\'v:0,a\':movflags=faststart]500.mp4|[select=\'v:1,a\':movflags=faststart]800.mp4|[select=\'v:2,a\':movflags=faststart]1000.mp4|[select=\'v:3,a\':movflags=faststart]1500.mp4|[select=a:movflags=faststart]audio.m4a"
This example doesn't perform two passes, which you should do when using the old-school method of manually choosing a bitrate for non-streaming outputs. See FFmpeg Wiki: H.264.
simpler but less efficient method
You can use a much less complicated command, but it will be less efficient because it encodes the audio separately for each output. That may be worth the tradeoff for the reduced complexity.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a];[v]split=4[v0][v1][v2][v3];[a]asplit=4[a0][a1][a2][a3]" -map "[v0]" -map "[a0]" -b:v 500k -movflags +faststart 500.mp4 -map "[v1]" -map "[a1]" -c:v libx264 -c:a aac -b:v 800k -movflags +faststart 800.mp4 -map "[v2]" -map "[a2]" -b:v 1000k -movflags +faststart 1000.mp4 -map "[v3]" -map "[a3]" -b:v 1500k -movflags +faststart 1500.mp4
But since you want to target specific bitrates, you should perform two passes:
ffmpeg -y -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[v];[v]split=4[v0][v1][v2][v3]" -map "[v0]" -b:v 500k -pass 1 -passlogfile 500 -f mp4 /dev/null -map "[v1]" -c:v libx264 -c:a aac -b:v 800k -pass 1 -passlogfile 800 -f mp4 /dev/null -map "[v2]" -c:v libx264 -c:a aac -b:v 1000k -pass 1 -passlogfile 1000 -f mp4 /dev/null -map "[v3]" -c:v libx264 -c:a aac -b:v 1500k -pass 1 -passlogfile 1500 -f mp4 /dev/null
ffmpeg -y -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a];[v]split=4[v0][v1][v2][v3];[a]asplit=5[a0][a1][a2][a3][a4]" -map "[v0]" -map "[a0]" -c:v libx264 -c:a aac -b:v 500k -pass 2 -passlogfile 500 -movflags +faststart 500.mp4 -map "[v1]" -map "[a1]" -c:v libx264 -c:a aac -b:v 800k -pass 2 -passlogfile 800 -movflags +faststart 800.mp4 -map "[v2]" -map "[a2]" -c:v libx264 -c:a aac -b:v 1000k -pass 2 -passlogfile 1000 -movflags +faststart 1000.mp4 -map "[v3]" -map "[a3]" -c:v libx264 -c:a aac -b:v 1500k -pass 2 -passlogfile 1500 -movflags +faststart 1500.mp4 -map "[a4]" -movflags +faststart audio.m4a
If you're using Windows, replace /dev/null with NUL in the examples above.
I want to make different qualities from a video in one command.
I used the code below.
But there is an issue: the output files do not have any details.
ffmpeg -i input.mp4 -filter_complex "[0:v]format=yuv420p,split=2[s0][s1];[s0]scale=hd480[v0];[s1]scale=nhd[v1]" -map "[v0]" -map "[v1]" -map 0:a? -c:v libx264 -c:a aac -f tee -threads 0 "[select='v\:0,a':f=mp4]1/480.mp4|[select='v\:1,a':f=mp4]1/360.mp4"
What must I do?
With the guidance and help of Mulvya, the answer is as follows:
ffmpeg -i input.mp4 -filter_complex "[0:v]format=yuv420p,split=2[s0][s1];[s0]scale=hd480[v0];[s1]scale=nhd[v1]" -map "[v0]" -map "[v1]" -map 0:a? -c:v libx264 -c:a aac -f tee -flags +global_header -threads 0 "[select='v\:0,a':f=mp4]1/480.mp4|[select='v\:1,a':f=mp4]1/360.mp4"