Disable default subtitle track with ffmpeg

I'm creating an MKV container with 4 different files:
video.mp4
audio_en.mp4
audio_es.mp4
subtitles.ass
For that I'm using the following ffmpeg script:
ffmpeg -i video.mp4 -i audio_es.mp4 -i audio_en.mp4 -i subtitles.ass \
-map 0:v -map 1:a -map 2:a -map 3:s \
-metadata:s:a:0 language=spa \
-metadata:s:a:1 language=eng \
-metadata:s:s:0 language=spa -disposition:s:0 -default \
-default -c:v copy -c:a copy -c:a copy -c:s copy result.mkv
The result.mkv looks awesome and everything works as expected except for one thing: the subtitles are still set as the default track, so players like VLC show them automatically. I've already tried plenty of different ways to prevent that with the disposition flag, but I cannot make it work.
How should I modify the script so that the MKV does not have the subtitles track marked as default?
Thanks in advance!

For Matroska (.mkv) output use the -default_mode option:
ffmpeg -i video.mp4 -i audio_es.mp4 -i audio_en.mp4 -i subtitles.ass \
-map 0:v -map 1:a -map 2:a -map 3:s \
-metadata:s:a:0 language=spa \
-metadata:s:a:1 language=eng \
-metadata:s:s:0 language=spa \
-default_mode infer_no_subs \
-c copy result.mkv
This option requires FFmpeg 4.3 or later; alternatively, use a build from the current git master branch.

If you don't want to update to FFmpeg 4.3, this disposition option works for me:
-disposition:s:0 0
This option overrides the disposition copied from the input stream and clears it by setting it to 0.
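To confirm the flag is actually cleared after muxing, ffprobe can report the disposition per stream. A minimal sketch, assuming ffmpeg/ffprobe are installed; result.mkv is the file from the question:

```shell
# Print the "default" disposition (0 or 1) for every subtitle stream in a file.
# All zeros means no subtitle track is marked as default.
check_default_subs() {
  ffprobe -v error -select_streams s \
    -show_entries stream_disposition=default -of csv=p=0 "$1"
}
# Example (commented out so the snippet loads without media files):
# check_default_subs result.mkv
```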

Related

ffmpeg combine audio mix code into complex concate script

I currently have two different ffmpeg scripts that I want to combine. I don't have much ffmpeg experience and these commands are mostly Googled code, so please be patient with me.
The first command concatenates 3 videos:
ffmpeg -y -i "$vid1" -i "$fp" -i "$vid1" -filter_complex \
"[0:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0]; \
[1:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1]; \
[2:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2]; \
[0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0]; \
[1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1]; \
[2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2]; \
[v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[v][a]; \
[v]drawtext=text='example..':y=h-line_h-$h3:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-$hcentral:x=w/20*mod(t\,100):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-23:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]" \
-map "[v]" -map "[a]" -c:v libx264 -crf 22 -preset veryfast -c:a aac -movflags +faststart "$fp_dest"
The second command overlays a background mp3 in an endless loop onto the video created above. It's important to know that this command mixes the mp3 into the video's audio and does not replace it. In the future I will lower the volume of the mp3 files so they work as background music.
ffmpeg -y -i "$fp_dest" -filter_complex "amovie=$audio:loop=0,asetpts=N/SR/TB[aud];[0:a][aud]amix[a]" -map 0:v -map '[a]' -c:v copy -c:a aac -b:a 256k -shortest ./test.mp4
So currently I have 2 steps that I want to combine into 1 step. Can you please help me merge the second command into the first one without changing any logic of the code?
Use amix to mix the music with the concatenated audio. -stream_loop is applied to the music input to loop it.
ffmpeg -y -i "$vid1" -i "$fp" -i "$vid1" -stream_loop -1 -i "$audio" -filter_complex \
"[0:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0]; \
[1:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1]; \
[2:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2]; \
[0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0]; \
[1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1]; \
[2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2]; \
[v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[v][a]; \
[a][3]amix=duration=first[a]; \
[v]drawtext=text='example..':y=h-line_h-$h3:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-$hcentral:x=w/20*mod(t\,100):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-23:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]" \
-map "[v]" -map "[a]" -c:v libx264 -crf 22 -preset veryfast -c:a aac -b:a 256k -movflags +faststart "$fp_dest"

ffmpeg read from a file and apply filter_complex at once

I am feeding fls.txt into ffmpeg -i and applying concat and a speedup.
fls.txt
file 'input1.mp4'
file 'input2.mp4'
file 'input3.mp4'
The command in one go looks as follows:
ffmpeg -i fls.txt \
-filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a] concat=n=3:v=1:a=1 [v][a];\
[v]setpts=0.5*PTS[v1];[a]atempo=2,asetpts=N/SR/TB[a1]" \
-c:v h264_nvenc -map "[v1]" -map "[a1]" x2.mp4
The output is really weird and says something like a stream is not found. It also looks as if ffmpeg is trying to parse fls.txt itself rather than the files it lists.
What am I doing wrong here and how can I correct it?
Also, this is a simplified example; I cannot write the input file paths by hand. They need to be read from a file. I'm on Windows 10, if that matters.
EDIT:
After applying the suggested edits and expanding the -filter_complex, I get the error below.
ffmpeg -f concat -safe 0 -i fls.txt \
-filter_complex "[0:v]setpts=0.5*PTS[v1];[v1]setpts=0.5*PTS[v2];[0:a]atempo=2,asetpts=N/SR/TB[a1];[a1]atempo=2,asetpts=N/SR/TB[a2]" \
-c:v h264_nvenc -map "[v1]" -map "[a1]" x2.mp4 \
-c:v h264_nvenc -map "[v2]" -map "[a2]" x4.mp4
error:
Output with label 'v1' does not exist in any defined filter graph, or was already used elsewhere.
Stream specifier ':a' in filtergraph description … matches no streams.
To enable the concat demuxer you have to use -f concat before -i fls.txt.
ffmpeg -f concat -i fls.txt \
-filter_complex "[0:v]setpts=0.5*PTS[v1];[0:a]atempo=2,asetpts=N/SR/TB[a1]" \
-c:v h264_nvenc -map "[v1]" -map "[a1]" x2.mp4
Because you're using the concat demuxer, there is no need for the concat filter as well, so the command can be simplified.
You may also have to use -safe 0 before -i which you can read about in the documentation.
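Since the list shouldn't be written by hand, here is a minimal sketch for generating fls.txt from a set of files (bash or Git Bash on Windows; the three input names are the ones from the question):

```shell
# Write one "file '...'" line per input, in the concat demuxer's expected format.
for f in input1.mp4 input2.mp4 input3.mp4; do
  printf "file '%s'\n" "$f"
done > fls.txt
cat fls.txt
```

A glob such as `for f in *.mp4` works the same way when the real file names are not known in advance.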
Follow-up question: Output with label 'v1' does not exist in any defined filter graph, or was already used elsewhere
You can't reuse consumed filter output labels so this example avoids that:
ffmpeg -f concat -safe 0 -i fls.txt \
-filter_complex "[0:v]setpts=0.5*PTS[2xv];[0:v]setpts=PTS/4[4xv];[0:a]atempo=2,asetpts=N/SR/TB[2xa];[0:a]atempo=4,asetpts=N/SR/TB[4xa]" \
-c:v h264_nvenc -map "[2xv]" -map "[2xa]" x2.mp4 \
-c:v h264_nvenc -map "[4xv]" -map "[4xa]" x4.mp4

Specifying track title or language in MPEG DASH MANIFEST

I am creating a manifest to play back adaptive WebM using DASH. Everything is working fine, but I need the language/track name instead of the bitrate. Is that supported? How can I update/optimize the command to support such a feature?
Manifest creation:
ffmpeg \
-f webm_dash_manifest -i webm240.webm \
-f webm_dash_manifest -i webm360.webm \
-f webm_dash_manifest -i webm480.webm \
-f webm_dash_manifest -i webm720.webm \
-f webm_dash_manifest -i audio1.webm \
-f webm_dash_manifest -i audio2.webm \
-f webm_dash_manifest -i audio3.webm \
-f webm_dash_manifest -i audio4.webm \
-c copy -map 0 -map 1 -map 2 -map 3 -map 4 -map 5 -map 6 -map 7 \
-f webm_dash_manifest \
-adaptation_sets "id=0,streams=0,1,2,3 id=1,streams=4,5,6,7" \
manifest.mpd
Player audio track selection:
Finally, after trying a couple of DASH players and encoders, this is how I solved it.
The problem was not in manifest creation but in input file preparation. I added metadata to the input files as shown below and it worked.
Tested in Shaka Player; works like a charm.
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=hin audiohindi.mp4
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=tam audiotamil.mp4
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=kan audiokannada.mp4
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=tel audiotelugu.mp4
It uses ISO 639-2 language codes (see Wikipedia: ISO 639-2 language codes).
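The four commands above differ only in the language code and output name, so they can be driven from one list. A minimal POSIX-shell sketch; the ffmpeg line is echoed rather than executed so it can be inspected first (input.mp4 and the output names are from the answer above):

```shell
# Each entry is name:code, where code is the ISO 639-2 tag to embed.
for pair in hindi:hin tamil:tam kannada:kan telugu:tel; do
  name=${pair%%:*}   # part before the colon
  code=${pair##*:}   # part after the colon
  echo ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 \
    -metadata:s:a:0 language="$code" "audio${name}.mp4"
done
```

Remove the `echo` once the generated commands look right.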

combine two -filter_complex command together

Below, I have two ffmpeg commands (1, 2) to be combined into (3).
add sounds from 1.mp3 and 1.3gp into muted 1.mp4
code works without error:
ffmpeg -i 1.mp3 -i 1.3gp -i 1.mp4 \
-filter_complex "[1]adelay=640|640[s1];[0][s1]amix=2[mixout];" \
-map 2:v -map [mixout] -c:v copy result.mp4
add watermark to top-right of 1.mp4
code works without error:
ffmpeg -i 1.mp4 -i logo.png \
-filter_complex "overlay=x=main_w-overlay_w:y=1" \
result.mp4
combine above two commands into one
My code fails
ffmpeg -i 1.mp3 -i 1.3gp -i 1.mp4 -i logo.png \
-filter_complex "[1]adelay=640|640[s1];[0][s1]amix=2[mixout];[2:v][3]overlay=x=main_w-overlay_w:y=1[outv]" \
-map [outv] -map [mixout] -c:v copy result.mp4
What am I doing wrong here?
Use
ffmpeg -i 1.mp3 -i 1.3gp -i 1.mp4 -i logo.png \
-filter_complex "[1]adelay=640|640[s1];[0][s1]amix=2[mixout];
[2:v][3]overlay=x=main_w-overlay_w:y=1[outv]" \
-map [outv] -map [mixout] result.mp4
If you're filtering the video stream, e.g. adding an overlay, then you can't stream copy that video stream; it has to be re-encoded.

Run FFMPEG multiple overlay commands in one command

I'm using ffmpeg to perform several operations on one video.
The operations I want to do are adding multiple texts at different times, an audio track, and an image overlay.
I can do all of them, but not in one command; currently I run each one separately.
Any suggestions for doing multiple drawtext filters, an image overlay, and audio in one command?
Thanks
To achieve the commands provided in comments in one execution, use
ffmpeg -i input.mp4 -i img.png -i audio.mp4 -filter_complex \
"[0:v][1:v]overlay=15:15:enable=between(t,10,20), \
drawtext=enable='between(t,12,3*60)': \
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'[v]" \
-map "[v]" -map 2:a -acodec copy -qscale 4 -vcodec mpeg4 outvideo.mp4
To add more drawtext filters, insert them after the first drawtext filter e.g.
ffmpeg -i input.mp4 -i img.png -i audio.mp4 -filter_complex \
"[0:v][1:v]overlay=15:15:enable=between(t,10,20), \
drawtext=enable='between(t,12,3*60)': \
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text', \
drawtext=enable='between(t,12,3*60)': \
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Text2'[v]" \
-map "[v]" -map 2:a -acodec copy -qscale 4 -vcodec mpeg4 outvideo.mp4
