I'm using ffmpeg to perform multiple operations on one video.
The operations I want are: add several texts at different times, an audio track, and an image overlay.
I can do each of them, but only in separate commands, not all in one.
Any suggestions for combining multiple drawtext filters, an image overlay, and audio in a single command?
Thanks
To combine the commands provided in the comments into one execution, use:
ffmpeg -i input.mp4 -i img.png -i audio.mp4 -filter_complex \
"[0:v][1:v]overlay=15:15:enable='between(t,10,20)', \
drawtext=enable='between(t,12,3*60)': \
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'[v]" \
-map "[v]" -map 2:a -acodec copy -qscale 4 -vcodec mpeg4 outvideo.mp4
To add more drawtext filters, insert them after the first one, e.g.
ffmpeg -i input.mp4 -i img.png -i audio.mp4 -filter_complex \
"[0:v][1:v]overlay=15:15:enable='between(t,10,20)', \
drawtext=enable='between(t,12,3*60)': \
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text', \
drawtext=enable='between(t,12,3*60)': \
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Text2'[v]" \
-map "[v]" -map 2:a -acodec copy -qscale 4 -vcodec mpeg4 outvideo.mp4
I'm creating an MKV container with 4 different files:
video.mp4
audio_en.mp4
audio_es.mp4
subtitles.ass
For that I'm using the following ffmpeg script:
ffmpeg -i video.mp4 -i audio_es.mp4 -i audio_en.mp4 -i subtitles.ass \
-map 0:v -map 1:a -map 2:a -map 3:s \
-metadata:s:a:0 language=spa \
-metadata:s:a:1 language=eng \
-metadata:s:s:0 language=spa -disposition:s:0 -default \
-c:v copy -c:a copy -c:s copy result.mkv
The result.mkv looks awesome; everything works as expected except for one thing: the subtitles are still set as the default track, so players like VLC show them automatically. I've already tried plenty of different ways to prevent that with the disposition flag, but I cannot make it work.
How should I modify the script so that the MKV does not have the subtitles track marked as default?
Thanks in advance!
For Matroska (.mkv) output use the -default_mode option:
ffmpeg -i video.mp4 -i audio_es.mp4 -i audio_en.mp4 -i subtitles.ass \
-map 0:v -map 1:a -map 2:a -map 3:s \
-metadata:s:a:0 language=spa \
-metadata:s:a:1 language=eng \
-metadata:s:s:0 language=spa \
-default_mode infer_no_subs \
-c copy result.mkv
This option requires FFmpeg 4.3 or later, or a build from the current git master branch.
If you don't want to update to FFmpeg 4.3, this disposition option works for me:
-disposition:s:0 0
It overrides the disposition copied from the input stream and clears it by setting it to 0.
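Applied to the command above (a sketch using the same inputs):
ffmpeg -i video.mp4 -i audio_es.mp4 -i audio_en.mp4 -i subtitles.ass \
-map 0:v -map 1:a -map 2:a -map 3:s \
-metadata:s:a:0 language=spa \
-metadata:s:a:1 language=eng \
-metadata:s:s:0 language=spa \
-disposition:s:0 0 \
-c copy result.mkv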
I am feeding fls.txt into ffmpeg -i and applying concat and a speedup.
fls.txt
file 'input1.mp4'
file 'input2.mp4'
file 'input3.mp4'
The command in one go looks as follows:
ffmpeg -i fls.txt \
-filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a] concat=n=3:v=1:a=1 [v][a];\
[v]setpts=0.5*PTS[v1];[a]atempo=2,asetpts=N/SR/TB[a1]" \
-c:v h264_nvenc -map "[v1]" -map "[a1]" x2.mp4
The output is really weird: it says something like a stream is not found, and it also looks as if ffmpeg is trying to interpret fls.txt itself as the input rather than the files it lists.
What am I doing wrong here and how can I correct it?
Also, this is a simplified example; in reality I can't write the input file paths out by hand, so I need them read from a file. I'm on Windows 10, if that matters.
EDIT:
After applying the suggested edits and expanding the -filter_complex, I get the errors below.
ffmpeg -f concat -safe 0 -i fls.txt \
-filter_complex "[0:v]setpts=0.5*PTS[v1];[v1]setpts=0.5*PTS[v2];[0:a]atempo=2,asetpts=N/SR/TB[a1];[a1]atempo=2,asetpts=N/SR/TB[a2]" \
-c:v h264_nvenc -map "[v1]" -map "[a1]" x2.mp4 \
-c:v h264_nvenc -map "[v2]" -map "[a2]" x4.mp4
error:
Output with label 'v1' does not exist in any defined filter graph, or was already used elsewhere.
Stream specifier ':a' in filtergraph description … matches no streams.
To enable the concat demuxer you have to use -f concat before -i fls.txt.
ffmpeg -f concat -i fls.txt \
-filter_complex "[0:v]setpts=0.5*PTS[v1];[0:a]atempo=2,asetpts=N/SR/TB[a1]" \
-c:v h264_nvenc -map "[v1]" -map "[a1]" x2.mp4
Because you're using the concat demuxer, there is no need for the concat filter as well, so the command can be simplified.
You may also have to use -safe 0 before -i, which you can read about in the documentation.
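For example, if fls.txt lists absolute paths (made-up Windows paths shown), the concat demuxer treats the entries as unsafe and -safe 0 is required:
file 'C:/videos/input1.mp4'
file 'C:/videos/input2.mp4'
file 'C:/videos/input3.mp4'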
Follow-up question: Output with label 'v1' does not exist in any defined filter graph, or was already used elsewhere
You can't reuse a filter output label once it has been consumed, so this example avoids that:
ffmpeg -f concat -safe 0 -i fls.txt \
-filter_complex "[0:v]setpts=0.5*PTS[2xv];[0:v]setpts=PTS/4[4xv];[0:a]atempo=2,asetpts=N/SR/TB[2xa];[0:a]atempo=4,asetpts=N/SR/TB[4xa]" \
-c:v h264_nvenc -map "[2xv]" -map "[2xa]" x2.mp4 \
-c:v h264_nvenc -map "[4xv]" -map "[4xa]" x4.mp4
I managed to create a video from a set of non-sequential images and attach an audio track to it. I also added a "Copyright" text in the top right-hand corner so that it appears throughout the video. However, I would like that text to appear only on the last image. How should I change my code below to do this?
ffmpeg \
-thread_queue_size 512 -f image2 -pattern_type glob -framerate 1/3 \
-i '*.jpg' \
-i 'audio.mp3' \
-c:a aac -c:v libx264 \
-vf "scale=640:480,format=yuv420p,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5" \
-preset medium \
video.mp4
Isolate the last image from the glob and then concat it:
ffmpeg \
-pattern_type glob -framerate 1/3 -i '*.jpg' -framerate 1/3 -loop 1 -t 5 -i last/img.jpg -i audio.mp3 \
-filter_complex \
"[0:v]scale=640:480,setsar=1[v0]; \
[1:v]scale=640:480,setsar=1,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5[v1]; \
[v0][v1]concat=n=2:v=1:a=0,fps=25,format=yuv420p[v]" \
-map "[v]" -map 2:a -c:v libx264 -c:a aac -shortest -movflags +faststart video.mp4
I have a 1080p webm video and a 500x300 mp4 video. How can I place the muted mp4 video at the top-center of the webm video, with transparency? The output file format needs to be ".webm". Here is similar code I found, but it uses two mp4 videos, and the second video is scaled to full size in front of the first one:
ffmpeg \
-i in1.mp4 -i in2.mp4 \
-filter_complex " \
[0:v]setpts=PTS-STARTPTS, scale=480x360[top]; \
[1:v]setpts=PTS-STARTPTS, scale=480x360, \
format=yuva420p,colorchannelmixer=aa=0.5[bottom]; \
[top][bottom]overlay=shortest=1" \
-vcodec libx264 out.mp4
Use
ffmpeg \
-i in1.webm -i in2.mp4 \
-filter_complex " \
[0:v]setpts=PTS-STARTPTS[base]; \
[1:v]setpts=PTS-STARTPTS, \
format=yuva420p,colorchannelmixer=aa=0.5[overlay]; \
[base][overlay]overlay=x=(W-w)/2:y=0[v]" \
-map "[v]" -map 0:a -c:a copy -shortest out.webm
The output file won't preserve the input webm's transparency, but that can be done if required.
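A sketch of the alpha-preserving variant, assuming in1.webm is VP9 with an alpha channel: force the libvpx-vp9 decoder (the native VP9 decoder drops alpha) and encode with an alpha-capable pixel format:
ffmpeg \
-c:v libvpx-vp9 -i in1.webm -i in2.mp4 \
-filter_complex " \
[0:v]setpts=PTS-STARTPTS[base]; \
[1:v]setpts=PTS-STARTPTS, \
format=yuva420p,colorchannelmixer=aa=0.5[overlay]; \
[base][overlay]overlay=x=(W-w)/2:y=0:format=auto,format=yuva420p[v]" \
-map "[v]" -map 0:a -c:v libvpx-vp9 -pix_fmt yuva420p -c:a copy -shortest out.webm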
I have this command to generate a slideshow with zoompan from a list of images, but it applies the same zoompan to all pictures.
ffmpeg -r 1/5 -i img%03d.jpg -i 1.mp3 -c:a aac -c:v libx264 -r 25 -pix_fmt yuv420p -vf "zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':d=100" out.mp4
How can I get it to have different zoompan parameters for each image?
Input each image individually and provide a separate zoompan per image. Then concatenate with the concat filter.
ffmpeg \
-i img001.jpg \
-i img002.jpg \
-i img003.jpg \
-i audio.mp3 \
-filter_complex \
"[0:v]zoompan[v0]; \
[1:v]zoompan[v1]; \
[2:v]zoompan[v2]; \
[v0][v1][v2]concat=n=3:v=1:a=0,format=yuv420p[v]" \
-map "[v]" -map 3:a -shortest out.mp4
You'll need to adapt this example to use whatever zoompan values you want.
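For example (made-up values): a slow zoom-in on the first image, a zoom-out on the second using the same trick as your original command, and a static hold on the third. With the default 25 fps output, d=125 gives 5 seconds per image:
ffmpeg \
-i img001.jpg \
-i img002.jpg \
-i img003.jpg \
-i audio.mp3 \
-filter_complex \
"[0:v]zoompan=z='min(zoom+0.0015,1.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125:s=1280x720[v0]; \
[1:v]zoompan=z='if(lte(zoom,1.0),1.5,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125:s=1280x720[v1]; \
[2:v]zoompan=z=1:d=125:s=1280x720[v2]; \
[v0][v1][v2]concat=n=3:v=1:a=0,format=yuv420p[v]" \
-map "[v]" -map 3:a -shortest out.mp4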