I'm trying to join 3 videos together with a crossfade effect.
I can get this working for 2 videos (sourced from Stack Overflow, but I can't find the link):
ffmpeg -y -i part1.mp4 -i part2.mp4 -f lavfi -i color=black:s=1920x1080 -filter_complex \
"[0:v]format=pix_fmts=yuva420p,fade=t=out:st=10:d=1:alpha=1,setpts=PTS-STARTPTS[va0]; \
[1:v]format=pix_fmts=yuva420p,fade=t=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+10/TB[va1]; \
[2:v]trim=duration=20[over]; \
[over][va0]overlay[over1]; \
[over1][va1]overlay=format=yuv420[outv]" \
-vcodec libx264 -map [outv] merged.mp4
But I can't work out how to make this work for 3 videos.
I don't need any audio. Any ideas?
Cheers,
ffmpeg-concat is the easiest way to accomplish what you want and allows you to use a bunch of sexy OpenGL transitions, with the default being crossfade.
ffmpeg-concat 0.mp4 1.mp4 2.mp4 -o out.mp4
ffmpeg-gl-transition is a custom ffmpeg filter which allows you to use GLSL to smoothly transition between two video streams. It is more complicated to set up, but significantly easier to use and customize than the alternatives listed here.
./ffmpeg -i 0.mp4 -i 1.mp4 -filter_complex "gltransition=duration=4:offset=1.5" out.mp4
OK, so I'm not sure if this is the best way to do this, but I got it working:
ffmpeg -y -i part1.mp4 -i part2.mp4 -i part3.mp4 -f lavfi -i color=black:s=1920x1080 -filter_complex \
"[0:v]format=pix_fmts=yuva420p,fade=t=out:st=10:d=1:alpha=1,setpts=PTS-STARTPTS[v0]; \
[1:v]format=pix_fmts=yuva420p,fade=t=in:st=0:d=1:alpha=1,fade=t=out:st=10:d=1:alpha=1,setpts=PTS-STARTPTS+10/TB[v1]; \
[2:v]format=pix_fmts=yuva420p,fade=t=in:st=0:d=1:alpha=1,fade=t=out:st=10:d=1:alpha=1,setpts=PTS-STARTPTS+20/TB[v2]; \
[3:v]trim=duration=30[over]; \
[over][v0]overlay[over1]; \
[over1][v1]overlay[over2]; \
[over2][v2]overlay=format=yuv420[outv]" \
-vcodec libx264 -map [outv] merge.mp4
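On newer ffmpeg builds (4.3+), the xfade filter can replace the manual fade/overlay graph. A minimal sketch of that alternative, assuming each part is about 11 seconds long (as the st=10, d=1 fades above imply) and that all three inputs share the same size and frame rate:
ffmpeg -y -i part1.mp4 -i part2.mp4 -i part3.mp4 -filter_complex \
"[0:v][1:v]xfade=transition=fade:duration=1:offset=10[x1]; \
[x1][2:v]xfade=transition=fade:duration=1:offset=20[outv]" \
-vcodec libx264 -map "[outv]" merge.mp4
Each offset is measured on the combined timeline (the length accumulated so far minus the fade duration), so adjust the offsets if your clips have different lengths.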
Related
ffmpeg -i foo.mp4 -filter_complex "fade=d=0.5, reverse, fade=d=0.5, reverse" output.mp4
can be used to fade the video of foo.mp4 in and out (we do not care about audio), according to https://video.stackexchange.com/questions/19867/how-to-fade-in-out-a-video-audio-clip-with-unknown-duration
That's good, but it only works in the simple situation of one input video and one output. Now, how can I apply the fade-in and fade-out effect in the following, more complex situation? I'm trying to concat a.jpg (a picture) with bar.mp4, and I want only the bar.mp4 portion to fade in and out.
ffmpeg -loop 1 -t 2 -framerate 1 -i a.jpg -f lavfi -t 2 -i anullsrc -r 24 -i bar.mp4 -filter_complex "[0][1][2:v][2:a] concat=n=2:v=1:a=1 [vpre][a];[vpre]fps=24[v]" -map "[v]" -map "[a]" out.mp4 -y
Of course, I could first create a temporary temp.mp4 from bar.mp4 by running the first command, then use temp.mp4 as the input to my second command, but that involves an extra step and extra encoding. Could anyone help fix the commands or suggest something even better?
Use the following; the fade, reverse, fade, reverse chain fades the clip in at the start and out at the end without needing to know its duration:
ffmpeg -loop 1 -t 2 -framerate 24 -i a.jpg -f lavfi -t 2 -i anullsrc -r 24 -i bar.mp4 -filter_complex "[2:v]fade=d=0.5, reverse, fade=d=0.5, reverse[v2];[0][1][v2][2:a] concat=n=2:v=1:a=1 [v][a]" -map "[v]" -map "[a]" out.mp4 -y
I'm trying to split a video (50-100 MB) into several small clips of a few seconds each. I don't need re-encoding, hence my use of codec copy.
However some of the resulting clips don't have any video.
Fast but no video in some files
ffmpeg \
-y \
-i ./data/partie-1:-Apprendre-300-mots-du-quotidien-en-LSF.jauvert-laura.hd.mkv \
-ss 0:00:07.00 \
-codec copy \
-loglevel error \
-to 0:00:10.36 \
'raw/0:00:07.00.au revoir.mkv'
I also tried -map 0 -c copy, -acodec copy -map 0:a -vcodec copy -map 0:v, and omitting the codec options altogether.
Slow but complete
With no arguments related to audio/video encoding it works, but it's pretty slow.
ffmpeg -y \
-i "$SOURCE_VIDEO_FILE" \
-ss 0:05:37.69 \
-to 0:05:40.64 \
-loglevel error \
'raw/0:05:37.69.pas la peine.mkv'
Question
How do I split a video into small chunks of ~2-4 s when I have no need for re-encoding?
related: https://video.stackexchange.com/q/25365/23799
Your constraint can't be satisfied as stated. Most video codecs store groups of pictures: each group starts with a complete keyframe and the following frames only store differences from it, so with -vcodec copy ffmpeg can only cut on keyframe boundaries.
If your cut points don't fall on keyframes, don't use -vcodec copy; re-encode the video instead.
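If the cut points have to be frame-accurate, one compromise (a rough sketch reusing the variable and naming from your commands, not tested against your files) is to re-encode only the video and still stream-copy the audio:
ffmpeg -y \
-ss 0:00:07.00 -t 3.36 \
-i "$SOURCE_VIDEO_FILE" \
-c:v libx264 -c:a copy \
-loglevel error \
'raw/0:00:07.00.au revoir.mkv'
Here -t 3.36 is the clip length (10.36 minus 7.00); keeping -ss before -i keeps seeking fast, and re-encoding the video lets the clip start on any frame rather than on the previous keyframe.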
I'm trying to combine two ffmpeg operations into a single one.
Currently I have two ffmpeg commands: the first generates a video from existing images, and the second runs that video through ffmpeg again to apply a watermark.
I'd like to see if it's possible to combine these into a single operation.
# Create the source video
ffmpeg -y \
-framerate 1/1 \
-i layer-%d.png \
-r 30 -vcodec libx264 -preset ultrafast -crf 23 -pix_fmt yuv420p \
output.mp4
# Apply the watermark and render the final output
ffmpeg -y \
-i output.mp4 \
-i logo.png \
-filter_complex "[1:v][0:v]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
Use
ffmpeg -y \
-framerate 1/1 -i layer-%d.png \
-i logo.png \
-filter_complex "[0:v]fps=30[img];
[1:v][img]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
(The use of scale2ref doesn't make sense since you're scaling to a fixed size).
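For reference, a single-pass sketch without scale2ref, assuming the logo should simply end up 40x40 (the encoding settings are copied from the first command):
ffmpeg -y \
-framerate 1/1 -i layer-%d.png \
-i logo.png \
-filter_complex "[0:v]fps=30[img]; \
[1:v]scale=40:40[logo]; \
[img][logo]overlay=80:main_h-200-80,format=yuv420p" \
-vcodec libx264 -preset ultrafast -crf 23 \
final.mp4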
I am currently cutting a video, and after it is cut I am adding a layer of text like this:
$cut_video_cmd = 'ffmpeg -i "'.$video_path.'" -vf scale=640:-1 -ss 30 -t 10 "'.$video_path.'"';
$add_text_to_video_cmd = 'ffmpeg -i "'.$video_path.'" -vf drawtext="fontfile='.public_path('assets/fonts/Roboto-Regular.ttf').': \
text=\'Stack Overflow\': fontcolor=white: fontsize=24: box=1: boxcolor=black#0.5: \
boxborderw=5: x=(w-text_w)/2: y=(h-text_h)/2" -codec:a copy "'.$video_path.'.overlay.mp4"';
It works great, but I am wondering if there is a way to combine these two commands, or any other way to simplify this process? I am failing to figure this out.
Thanks a lot for any help!
Chain linear filters with a comma. Simplified example of your command:
ffmpeg -i input -ss 30 -t 10 -vf scale=640:-2,drawtext -codec:a copy output
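Applied to your case, the combined call could look roughly like this (written as a plain shell command rather than the PHP string; the font path stands in for whatever public_path() resolves to, and black@0.5 is the usual syntax for a half-transparent box):
ffmpeg -i input.mp4 -ss 30 -t 10 \
-vf "scale=640:-2,drawtext=fontfile=assets/fonts/Roboto-Regular.ttf:text='Stack Overflow':fontcolor=white:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)/2:y=(h-text_h)/2" \
-codec:a copy output.overlay.mp4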
I found this answer for combining 2 videos using FFmpeg:
ffmpeg.exe -i LeftInput.mp4 -vf "[in] scale=iw/2:ih/2, pad=2*iw:ih [left];
movie=RightInput.mp4, scale=iw/3:ih/3, fade=out:300:30:alpha=1 [right];
[left][right] overlay=main_w/2:0 [out]" -b:v 768k Output.mp4
Is there a way to combine more than 2?
I tried adding [bottom] and [upper], but I'm failing to understand how the overlay works and where to put more videos.
Use the FFmpeg hstack and vstack filters:
ffmpeg -i input0 -i input1 -i input2 -i input3 -filter_complex \
"[0:v][1:v]hstack[top]; \
[2:v][3:v]hstack[bottom]; \
[top][bottom]vstack" \
output
If you want to combine the audio, add the amerge filter:
ffmpeg -i input0 -i input1 -i input2 -i input3 -filter_complex \
"[0:v][1:v]hstack[top]; \
[2:v][3:v]hstack[bottom]; \
[top][bottom]vstack[v]; \
[0:a][1:a][2:a][3:a]amerge=inputs=4[a]" \
-map "[v]" -map "[a]" -ac 2 output