I am using this code to combine 2 files together (overlay file over original file):
ffmpeg -r 60 \
-i originalfile.webm -i overlayfile.mov \
-filter_complex " \
[0:v]setpts=PTS-STARTPTS[base]; \
[1:v]setpts=PTS-STARTPTS+0.5/TB, \
format=yuva420p,colorchannelmixer=aa=0.7[overlay]; \
[base][overlay]overlay=x=(W-w)/2:y=0[v]" \
-map "[v]" -map 0:a -c:a copy -c:v libvpx-vp9 -lossless 1 -threads 4 -quality realtime \
-speed 8 -tile-columns 6 -frame-parallel 1 -vsync 1 -shortest resultfile.webm
Encoding speed and output quality are decent, but after a while the picture freezes for several seconds, then plays fine again, then freezes again.
How can I optimize this command to encode fast at the highest possible quality, matching the original file, without the picture freezing?
Thank you
To avoid retiming of the webm and to crop 10% of the overlay from top and bottom, run
ffmpeg \
-i originalfile.webm -i overlayfile.mov \
-filter_complex " \
[0:v]setpts=PTS-STARTPTS[base]; \
[1:v]crop=iw:0.80*ih,setpts=PTS-STARTPTS+0.5/TB, \
format=yuva420p,colorchannelmixer=aa=0.7[overlay]; \
[base][overlay]overlay=x=(W-w)/2:y=0[v]" \
-map "[v]" -map 0:a -c:a copy -c:v libvpx-vp9 -lossless 1 -threads 4 -quality realtime \
-speed 8 -tile-columns 6 -frame-parallel 1 -vsync 2 -shortest resultfile.webm
The crop filter centers the crop window by default, so when cropping to 80%, the top and bottom 10% will get cut off.
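If you'd rather spell the offsets out, an equivalent form (crop takes w:h:x:y) keeps the full width and 80% of the height, starting 10% of the height down from the top:
[1:v]crop=iw:0.80*ih:0:0.10*ih,setpts=PTS-STARTPTS+0.5/TB, \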
I have three videos: let's call them intro, recording and outro. My ultimate goal is to stitch them together, with the intro and outro overlapping the start and end of the recording.
Both intro and outro have alpha (ProRes 4444) and a "wipe" transition, so when overlaying, they must be on top of the recording. The recording is H.264, and ultimately I'm encoding for YouTube with the recommended settings.
I've figured out how to make the thing work correctly for intro + recording:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[rv][0:v]overlay[v];[0:a][ra]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
However, I can't use the tpad trick for the outro because it would render black frames over everything.
I've tried various iterations with setpts/asetpts as well as passing -itsoffset for the input, but haven't come up with a solution that works correctly for both video and audio. The following tries to start the outro 16 seconds into the recording (10s start + 16s of recording is how I got to setpts=PTS+26/TB), but it doesn't work correctly: I get both intro and outro audio from the first frame, and the recording audio cuts out when the outro overlay begins:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]asetpts=PTS+26/TB[outa]; \
[rv][0:v]overlay[v4]; \
[0:a][ra]amix[a4]; \
[v4][outv]overlay[v]; \
[a4][outa]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
I think the right solution lies in the direction of using setpts correctly but I haven't been able to wrap my brain fully around it. Or, maybe I'm making life complicated and there's an easier approach?
In the nice-to-have realm, I'd love to be able to specify the start of the outro relative to the end of the recording. I will be doing this to a bunch of recordings of varying lengths. It would be nice to have one command to invoke on everything rather than figuring out a specific timestamp for each one.
Thank you!
Use adelay for all audio adjustments. Perform all mixing in a single amix.
Set the outro overlay to start only at the correct timestamp.
Use
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[mainv]; \
[1:a]adelay=delays=10s:all=1[maina]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]adelay=delays=26s:all=1[outa]; \
[mainv][0:v]overlay=eof_action=pass[previd]; \
[previd][outv]overlay=enable='gte(t,26)'[v]; \
[maina][0:a][outa]amix=inputs=3[a]" \
-map "[v]" -map "[a]" \
-c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
-movflags +faststart \
out.mp4 -y
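For the nice-to-have of placing the outro relative to the end of the recording: one possible approach (a sketch, untested; DUR and START are illustrative shell variables, not part of the answer above) is to measure the recording with ffprobe and substitute the computed offset everywhere 26 appears:
# Sketch: the outro starts when the padded recording ends,
# i.e. 10 s of intro padding plus the recording's duration.
DUR=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 recording.mp4)
START=$(echo "(10 + $DUR)/1" | bc)  # truncate to whole seconds; subtract here for any overlap
ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[mainv]; \
[1:a]adelay=delays=10s:all=1[maina]; \
[2:v]setpts=PTS+$START/TB[outv]; \
[2:a]adelay=delays=${START}s:all=1[outa]; \
[mainv][0:v]overlay=eof_action=pass[previd]; \
[previd][outv]overlay=enable='gte(t,$START)'[v]; \
[maina][0:a][outa]amix=inputs=3[a]" \
-map "[v]" -map "[a]" \
-c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
-movflags +faststart \
out.mp4 -y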
I currently have 2 different ffmpeg scripts which I want to combine. I don't have much ffmpeg experience and these commands are mostly googled code, so please be patient with me.
The first command concatenates 3 videos:
ffmpeg -y -i "$vid1" -i "$fp" -i "$vid1" -filter_complex \
"[0:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0]; \
[1:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1]; \
[2:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2]; \
[0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0]; \
[1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1]; \
[2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2]; \
[v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[v][a]; \
[v]drawtext=text='example..':y=h-line_h-$h3:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-$hcentral:x=w/20*mod(t\,100):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-23:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]" \
-map "[v]" -map "[a]" -c:v libx264 -crf 22 -preset veryfast -c:a aac -movflags +faststart "$fp_dest"
The second command overlays a background mp3 in an endless loop onto the video created above. It's important to know that this command mixes the mp3 with the video's existing audio and does not replace it. In the future I will lower the volume of the mp3 files so they work as background music.
ffmpeg -y -i "$fp_dest" -filter_complex "amovie=$audio:loop=0,asetpts=N/SR/TB[aud];[0:a][aud]amix[a]" -map 0:v -map '[a]' -c:v copy -c:a aac -b:a 256k -shortest ./test.mp4
So currently I have 2 steps which I want to combine into 1 step. Can you please help me fold the second command into the first one without changing any of the logic?
Use amix to mix the music with the concatenated audio. -stream_loop is applied to the music input to loop it.
ffmpeg -y -i "$vid1" -i "$fp" -i "$vid1" -stream_loop -1 -i "$audio" -filter_complex \
"[0:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0]; \
[1:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1]; \
[2:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2]; \
[0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0]; \
[1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1]; \
[2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2]; \
[v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[v][a]; \
[a][3]amix=duration=first[a]; \
[v]drawtext=text='example..':y=h-line_h-$h3:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-$hcentral:x=w/20*mod(t\,100):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
[v]drawtext=text='example..':y=h-line_h-23:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]" \
-map "[v]" -map "[a]" -c:v libx264 -crf 22 -preset veryfast -c:a aac -b:a 256k -movflags +faststart "$fp_dest"
Similar to this ffmpeg - convert image sequence to video with reversed order
But I was wondering if I can create a video loop by specifying the image range and have the reverse order appended in one command.
Ideally I'd like to combine it with this Make an Alpha Mask video from PNG files
What I am doing now is generating the reverse using https://stackoverflow.com/a/43301451/242042 and combining the video files together.
However, I am thinking it would be similar to Concat a video with itself, but in reverse, using ffmpeg
My current attempt assumes 60 images, which makes -vframes twice that (120):
ffmpeg -y -framerate 20 -f image2 -i \
running_gear/%04d.png -start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
-filter_complex alphaextract[a]
-map 0:v -b:v 5M -crf 20 running_gear.webm
-map [a] -b:v 5M -crf 20 running_gear-alpha.web
Without the alpha masking I can get it working using
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
-map "[v]" -b:v 5M -crf 20 running_gear.webm
With just the alpha masking I can do
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]alphaextract[a]"
-map [a] -b:v 5M -crf 20 alpha.webm
So I am trying to do it so the alpha mask is done at the same time.
Though my ultimate ideal would be to take the images, reverse them, get an alpha mask, and put the two side by side so the result can be used in Ren'Py.
Got it after some trial and error. Not quite my ultimate goal, but it works:
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]split[v][av];[av]alphaextract[a]"
-map [v] -b:v 5M -crf 20 running_gear.webm
-map [a] -b:v 5M -crf 20 running_gear-alpha.webm
After checking some of the other filters (which I learned about from concat) I found hstack, so the version that puts the video and its alpha mask side by side, and therefore works better with Ren'Py, is:
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]split[v][av];[av]alphaextract[a];[v][a]hstack[m]"
-map [m] -b:v 5M -crf 20 running_gear.webm
I managed to create a video from a set of non-sequential images and attach an audio track to it. I also added a "Copyright" text in the top right-hand corner so that the text appears throughout the video. However, I would like that text to appear only on the last image. How should I change my code below to address this?
ffmpeg \
-thread_queue_size 512 -f image2 -pattern_type glob -framerate 1/3 \
-i '*.jpg' \
-i 'audio.mp3' \
-c:a aac -c:v libx264 \
-vf "scale=640:480,format=yuv420p,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5" \
-preset medium \
video.mp4
Isolate the last image from the glob and then concat it:
ffmpeg \
-pattern_type glob -framerate 1/3 -i '*.jpg' -framerate 1/3 -loop 1 -t 5 -i last/img.jpg -i audio.mp3 \
-filter_complex \
"[0:v]scale=640:480,setsar=1[v0]; \
[1:v]scale=640:480,setsar=1,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5[v1]; \
[v0][v1]concat=n=2:v=1:a=0,fps=25,format=yuv420p[v]" \
-map "[v]" -map 2:a -c:v libx264 -c:a aac -shortest -movflags +faststart video.mp4
Hi everyone!
I'm trying to add a splash screen that fades out after 2 seconds at the start of a video using FFmpeg.
I'm using the following command:
ffmpeg -loop 1 -framerate 2 -t 2 -i image.png \
-i video.mp4 \
-filter_complex "[0:v]fade=t=in:st=0:d=0.500000,fade=t=out:st=4.500000:d=0.500000,setsar=1; \
[0:0] [1:0] concat=n=2:v=1:a=0" \
-c:v libx264 -crf 23 output.mp4
but it is generating a video whose duration is correct, yet which plays for just 2 seconds, exactly the splash screen duration.
Since I don't have much experience with FFmpeg and got this command from the internet, I don't know where the problem is...
Use
ffmpeg -i video.mp4 -loop 1 -t 2 -i image.png \
-filter_complex \
"[1]fade=t=in:st=0:d=0.500000,fade=t=out:st=1.500000:d=0.500000,setsar=1[i]; \
[i][0]concat=n=2:v=1:a=0" \
-c:v libx264 -crf 23 output.mp4
The image should be the same resolution as the video. It will fade in for 0.5 seconds, remain for 1 second, then fade out for 0.5 seconds.
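If it isn't, one option (a sketch using the scale2ref filter, which resizes its first input to the dimensions of its second) is to let ffmpeg match them for you:
ffmpeg -i video.mp4 -loop 1 -t 2 -i image.png \
-filter_complex \
"[1][0]scale2ref[img][vid]; \
[img]fade=t=in:st=0:d=0.5,fade=t=out:st=1.5:d=0.5,setsar=1[i]; \
[i][vid]concat=n=2:v=1:a=0" \
-c:v libx264 -crf 23 output.mp4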