I am trying to assemble three videos: a static title, the main feature, and a static trailer. The title and trailer are encoded text cards, and the main feature is H.264-encoded at 6 Mb/s. The title and trailer have null audio encoded. The specific goal is a crossfade between the three segments. I have concat working fine, but adding the crossfade is causing me issues.
How does setpts=PTS-STARTPTS+(4/TB)[v2]; work?
This command puts it together, but the bit rate comes out wrong and I get errors.
ffmpeg -y -i title.mp4 -i vid.mp4 -i trailer.mp4 -f lavfi -i color=black:s=1920x1080 -filter_complex \
"[0:v]format=pix_fmts=yuva420p,fade=t=out:st=04:d=2:alpha=1,setpts=PTS-STARTPTS[v0]; \
[1:v]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1,fade=t=out:st=6:d=1:alpha=1,setpts=PTS-STARTPTS+10/TB[v1]; \
[2:v]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1,fade=t=out:st=2:d=1:alpha=1,setpts=PTS-STARTPTS+20/TB[v2]; \
[3:v]trim=duration=30[over]; \
[over][v0]overlay[over1]; \
[over1][v1]overlay[over2]; \
[over2][v2]overlay=format=yuv420[outv]" \
-vcodec h264_videotoolbox -b:v 6000k -maxrate 6000k -bufsize 6000000 -map [outv] merge.mp4
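For context, my current understanding of the setpts line, which may be off: setpts=PTS-STARTPTS rebases a stream so its first frame lands at t=0, and adding 4/TB then shifts every timestamp forward by 4 seconds (TB is the stream time base, so dividing by it converts seconds into timestamp units). A quick probe with showinfo seems to confirm it; the first pts_time printed is 4.0:
# Sanity check: testsrc is a stand-in input, not one of my files
ffmpeg -f lavfi -i testsrc=duration=1:rate=5 \
  -vf "setpts=PTS-STARTPTS+4/TB,showinfo" -f null -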
Related
I have three videos: let's call them intro, recording and outro. My ultimate goal is to stitch them together like so:
Both intro and outro have alpha (ProRes 4444) and a "wipe" transition, so when overlaying, they must be on top of the recording. The recording is h264, and ultimately I'm encoding out for YouTube with these recommended settings.
I've figured out how to make the thing work correctly for intro + recording:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[rv][0:v]overlay[v];[0:a][ra]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
However I can't use the tpad trick for the outro because it would render black frames over everything.
I've tried various iterations with setpts/asetpts as well as passing -itsoffset for the input, but haven't come up with a solution that works correctly for both video and audio. This tries to start the outro 16 seconds into the recording (10s start + 16s of recording is how I got to setpts=PTS+26/TB), but it doesn't work correctly: I get both intro and outro audio from the first frame, and the recording audio cuts out when the outro overlay begins:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]asetpts=PTS+26/TB[outa]; \
[rv][0:v]overlay[v4]; \
[0:a][ra]amix[a4]; \
[v4][outv]overlay[v]; \
[a4][outa]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
I think the right solution lies in the direction of using setpts correctly but I haven't been able to wrap my brain fully around it. Or, maybe I'm making life complicated and there's an easier approach?
In the nice-to-have realm, I'd love to be able to specify the start of the outro relative to the end of the recording. I will be doing this to a bunch of recordings of varying lengths. It would be nice to have one command to invoke on everything rather than figuring out a specific timestamp for each one.
Thank you!
Use adelay for all audio adjustments. Perform all mixing in a single amix.
Set the outro overlay to start only at the correct timestamp.
Use
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[mainv]; \
[1:a]adelay=delays=10s:all=1[maina]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]adelay=delays=26s:all=1[outa]; \
[mainv][0:v]overlay=eof_action=pass[previd]; \
[previd][outv]overlay=enable='gte(t,26)'[v]; \
[maina][0:a][outa]amix=inputs=3[a]" \
-map "[v]" -map "[a]" \
-c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
-movflags +faststart \
out.mp4 -y
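For the nice-to-have, one approach (a sketch, assuming ffprobe is available and the container reports a duration; the 2-second overlap is an arbitrary example value) is to compute the outro offset in the shell and substitute it into the filtergraph:
# offset = 10s intro pad + recording duration - 2s overlap, in integer ms
DUR=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 recording.mp4)
OFFSET_MS=$(echo "(10 + $DUR - 2) * 1000 / 1" | bc)
# then use setpts=PTS+${OFFSET_MS}/1000/TB, adelay=delays=${OFFSET_MS}:all=1,
# and enable='gte(t,${OFFSET_MS}/1000)'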
I am trying to stitch multiple images together, with some zoom-pan applied to each image, to create a video.
Command:
ffmpeg -f lavfi -r 30 -t 10 -i \
color=#000000:1920x1080 \
-f lavfi \
-r 30 -t 10 \
-i aevalsrc=0 \
-i "image-1.png" \
-i "image-2.png" \
-y -filter_complex \
"[0:v]fifo[bg];\
[2:v]setpts=PTS-STARTPTS+0/TB,scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v2];\
[bg][v2]overlay=0:0:enable='between(t,0, 5)'[bg];\
[3:v]setpts=PTS-STARTPTS+5.07/TB,scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v3];\
[bg][v3]overlay=0:0:enable='between(t,5, 10)'[bg];\
[1:a]amix=inputs=1:duration=first:dropout_transition=0" \
-map "[bg]" -vcodec "libx264" -preset "veryfast" -crf "15" "output.mp4"
The output is not as expected: it only zooms on the first image; the second image is just static.
FFmpeg version: 4.1
Use
ffmpeg -f lavfi -i color=#000000:1920x1080:r=30:d=10 \
-f lavfi -t 10 -i anullsrc \
-i "image-1.png" \
-i "image-2.png" \
-filter_complex \
"[2:v]scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080'[v2];\
[0:v][v2]overlay=0:0:enable='between(t,0,5)'[bg];\
[3:v]scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080',setpts=PTS+5/TB[v3];\
[bg][v3]overlay=0:0:enable='between(t,5,10)'[bg]" \
-map "[bg]" -map 1:a -vcodec libx264 -preset veryfast -crf 15 -y "output.mp4"
For lavfi sources, it's best to set frame rate and duration where applicable within the filter.
Since you're not looping the images, -t won't have any effect. Since zoompan sets the frame rate of its output, you can skip the input rate setting. And since each input is a single image, setpts before zoompan has no relevance; it should be applied only after the zoompan whose timestamps need to be shifted.
Since you have only one audio stream, there's no point sending it to amix: there's nothing to mix with! Just map it directly.
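If you want to verify the shift (just a quick probe, not part of the fix), showinfo prints the timestamps after setpts; the first pts_time should come out at 5.0:
# Check that the shifted zoompan output starts at t=5
ffmpeg -i image-2.png -vf "zoompan=d=150:fps=30:s=1920x1080,setpts=PTS+5/TB,showinfo" -f null -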
I'm trying to split a video (50-100 MB) into several small clips of a few seconds each. I don't need re-encoding, hence my use of codec copy.
However some of the resulting clips don't have any video.
Fast but no video in some files
ffmpeg \
-y \
-i ./data/partie-1:-Apprendre-300-mots-du-quotidien-en-LSF.jauvert-laura.hd.mkv \
-ss 0:00:07.00 \
-codec copy \
-loglevel error \
-to 0:00:10.36 \
'raw/0:00:07.00.au revoir.mkv'
I also tried -map 0 -c copy, -acodec copy -map 0:a -vcodec copy -map 0:v, and no codec-related option at all.
Slow but complete
With no arguments related to audio/video encoding it works, but it's pretty slow.
ffmpeg -y \
-i "$SOURCE_VIDEO_FILE" \
-ss 0:05:37.69 \
-to 0:05:40.64 \
-loglevel error \
'raw/0:05:37.69.pas la peine.mkv'
Question
How do I split a video into small chunks of ~2-4 s each when I have no need for re-encoding?
related: https://video.stackexchange.com/q/25365/23799
Your constraint can't be satisfied in general. Most video codecs are inter-frame: the stream consists of groups of pictures that start with a complete keyframe, followed by frames that store only the differences from it. With -vcodec copy, ffmpeg has to cut on those keyframe boundaries, so a clip whose requested start falls between keyframes has no decodable video until the next keyframe arrives.
Don't use -vcodec copy if you encounter this problem.
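If copy cuts are a hard requirement, a workaround sketch (file names here are placeholders): list where the keyframes actually are and snap your cut points to them, or re-encode the source once with a keyframe forced every 2 seconds so that every later -c copy cut lands on one:
# List keyframe timestamps, one per line
ffprobe -v error -select_streams v:0 -skip_frame nokey \
  -show_entries frame=pts_time -of csv=p=0 input.mkv
# Or: re-encode once with regular keyframes, then do all the -c copy cuts
ffmpeg -i input.mkv -c:v libx264 -crf 18 \
  -force_key_frames "expr:gte(t,n_forced*2)" -c:a copy keyframed.mkv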
I'm trying to combine two ffmpeg operations into a single one.
Currently I have two ffmpeg commands: the first generates a video from existing images, and the second runs that video through ffmpeg again to apply a watermark.
I'd like to see if it's possible to combine these into a single operation.
# Create the source video
ffmpeg -y \
-framerate 1/1 \
-i layer-%d.png \
-r 30 -vcodec libx264 -preset ultrafast -crf 23 -pix_fmt yuv420p \
output.mp4
# Apply the watermark and render the final output
ffmpeg -y \
-i output.mp4 \
-i logo.png \
-filter_complex "[1:v][0:v]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
Use
ffmpeg -y \
-framerate 1/1 -i layer-%d.png \
-i logo.png \
-filter_complex "[0:v]fps=30[img];
[1:v][img]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
(The use of scale2ref doesn't make sense here, since you're scaling the logo to a fixed size anyway.)
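For instance, a plain scale does the same job; a sketch of the equivalent command, assuming the logo really should end up 40x40:
ffmpeg -y \
  -framerate 1/1 -i layer-%d.png \
  -i logo.png \
  -filter_complex "[0:v]fps=30[img]; \
    [1:v]scale=40:40[logo]; \
    [img][logo]overlay=80:main_h-200-80" \
  final.mp4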
I have a 1080p webm video and a 500x300 mp4 video. How can I place the muted mp4 video at the top-center of the webm video, with transparency? The output file format needs to be ".webm". Here is similar code I found, but it uses two mp4 videos and the second video is scaled to full width in front of the first one:
ffmpeg \
-i in1.mp4 -i in2.mp4 \
-filter_complex " \
[0:v]setpts=PTS-STARTPTS, scale=480x360[top]; \
[1:v]setpts=PTS-STARTPTS, scale=480x360, \
format=yuva420p,colorchannelmixer=aa=0.5[bottom]; \
[top][bottom]overlay=shortest=1" \
-vcodec libx264 out.mp4
Use
ffmpeg \
-i in1.webm -i in2.mp4 \
-filter_complex " \
[0:v]setpts=PTS-STARTPTS[base]; \
[1:v]setpts=PTS-STARTPTS, \
format=yuva420p,colorchannelmixer=aa=0.5[overlay]; \
[base][overlay]overlay=x=(W-w)/2:y=0[v]" \
-map "[v]" -map 0:a -c:a copy -shortest out.webm
The output file won't retain the input webm's transparency, but that can be done if required.
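If the alpha does need to survive, a sketch (assuming the webm carries VP8/VP9 with an alpha channel, which ffmpeg's native decoder drops, so the libvpx decoder is forced on the input; overlay=format=auto is what should let the alpha pass through the overlay, if I read the filter docs right):
ffmpeg \
  -c:v libvpx-vp9 -i in1.webm -i in2.mp4 \
  -filter_complex " \
    [1:v]setpts=PTS-STARTPTS,format=yuva420p,colorchannelmixer=aa=0.5[overlay]; \
    [0:v][overlay]overlay=x=(W-w)/2:y=0:format=auto[v]" \
  -map "[v]" -map 0:a -c:v libvpx-vp9 -pix_fmt yuva420p -c:a copy out.webm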