Concatenating a video clip with a static image causes buffer errors

I'm trying to concatenate a 15-second clip of a video (MOVIE.mp4) with 5 seconds (no audio) of an image (IMAGE.jpg) using FFmpeg.
Something seems to be wrong with my filtergraph, although I'm unable to determine what. The command I've put together is the following:
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v]scale=480:640[1_v];anullsrc[1_a];[1:v][1:a][1_v][1_a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Unfortunately, this seems to be creating some strange results:
On my personal computer (FFmpeg 4.2.1) it correctly concatenates the movie with the static image; however, the static image lasts for an unbounded length of time. (After entering ctrl-C, the movie is still viewable, but is of an extremely long length--e.g., 35 min--depending on when I interrupt the process.)
On a remote machine where I need to do the ultimate video processing (FFmpeg 2.8.15-0ubuntu0.16.04.1), the command does not terminate, and instead, I get cascading errors of the following form:
Past duration 0.611458 too large
...
[output stream 0:0 @ 0x21135a0] 100 buffers queued in output stream 0:0, something may be wrong.
...
[output stream 0:0 @ 0x21135a0] 100000 buffers queued in output stream 0:0, something may be wrong.
I haven't been able to find much documentation that elucidates what these errors mean, so I don't know what's going wrong.

As Gyan pointed out, you only need to add atrim to your audio so the silent stream has a finite duration:
anullsrc,atrim=0:5[silent-audio]
Instead of scale you could use scale2ref and setsar to automatically make your image the same size and aspect ratio as the video.
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v][1:v]scale2ref[img][v];[img]setsar=1[img]; \
anullsrc,atrim=0:5[silent-audio]; \
[v][1:a][img][silent-audio]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Alternatively you could use anullsrc as a 3rd input:
ffmpeg \
-t 15 -i MOVIE.mp4 \
-loop 1 -t 5 -i IMAGE.jpg \
-f lavfi -t 5 -i anullsrc \
-filter_complex "[1:v][0:v]scale2ref[img][v];\
[img]setsar=1[img];[v][0:a][img][2:a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4


FFMPEG trimming video (using select between) doesn't affect overall duration

I'm completely new to working with FFmpeg. What I'm trying to achieve is applying overlay graphics at certain positions and times, and cutting out sections of a single input video.
I've worked out the overlaying graphics, so this code is working:
ffmpeg -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][1:v] overlay=192:108:enable='between(t, 0, 5)'[ov0];
[ov0] overlay=192:108:enable='between(t, 5, 10)'" \
-pix_fmt yuv420p output_overlayed.mp4
But when I try to cut out sections using this code:
ffmpeg -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][1:v] overlay=192:108:enable='between(t, 0, 5)'[ov0]; \
[ov0] overlay=192:108:enable='between(t, 5, 10)', \
select='between(t,0,5)+between(t,10,15)', \
setpts='N/FRAME_RATE/TB'" \
-pix_fmt yuv420p output_overlayed_trimmed.mp4
It seems to cut correctly: the original video plays from 0 seconds until 5 seconds, then plays from 10 seconds until 15 seconds, and cuts out. But after the point where the video cuts out, there's just a black screen for the remaining duration. I can't seem to get the trim to affect the overall duration of the video.
(The values being passed in are just examples by the way, eg. I've got it to start an overlay 5 seconds in but also cut 5 seconds in)
I have the timestamps for when the overlays should appear on the non-trimmed video, so the overlaying should happen first and then the trimming. If the video is trimmed first then the overlays will appear at the wrong times.
An alternative way of achieving this that is currently working is by performing the first line of code (which just produces a new video file with the overlay) and then separately take this new file and perform the trimming independently:
ffmpeg -ss 0 -to 5 -i /Users/username/projectdir/static/output_overlayed.mp4 \
-ss 15 -to 20 -i /Users/username/projectdir/static/output_overlayed.mp4 \
-filter_complex "[0][1]concat=n=2:v=1:a=1" output_trimmed.mp4
But this means working with 2 separate files and then having to remove the first after the 2nd execution is complete. Ideally I'd combine them into one command which doesn't produce multiple files.
Would appreciate any help - thanks!
How about reading the input twice with different trims (so both video and audio are cut in sync), then concatenating after overlaying? Like this:
ffmpeg -t 5 -i /Users/username/projectdir/static/video.mp4 \
-ss 10 -to 15 -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][2:v] overlay=192:108[ov0]; \
[1:v][3:v] overlay=192:108[ov1]; \
[ov0][0:a][ov1][1:a] concat=n=2:v=1:a=1[vout][aout]" \
-map "[vout]" -map "[aout]" -pix_fmt yuv420p output_overlayed_trimmed.mp4

First frame not getting blur from ffmpeg

I am trying to run a combination of overlay and boxblur filters to blur pixels in human background (using ffmpeg). My command is as follows:
ffmpeg -y \
-ss 00:00:02.000 -t 00:00:02.000 -i Adeel_Abbas_20210423T191428-1X.MP4 \
-ss 00:00:02.000 -t 00:00:02.000 -i Adeel_Abbas_20210423T191428-1X-SM.MP4 \
-filter_complex " \
[0:v][1:v]alphamerge[fg0]; \
[0:v]boxblur=luma_radius=3:chroma_radius=0:alpha_radius=0:luma_power=1:chroma_power=0:alpha_power=0[bv0]; \
[bv0][fg0]overlay,crop=1920:1080:0:78,setpts=PTS-STARTPTS[vs0]" -map "[vs0]" \
-b:v 8709120 -bf 2 -threads 8 -tag:v avc1 -c:v libx264 -preset medium -g 144 -color_primaries bt709 -color_trc bt709 -colorspace smpte170m output.mp4
Here are sample files that can be used to perfectly reproduce this issue.
Using ffmpeg-5.0, the background in the first frame does not get the blur applied, but successive frames are blurred properly. Can someone please help?
This only happens if I seek to random places in the original file using -ss 00:00:02.000 -t 00:00:02.000. If I start processing from the beginning of the file by omitting the -ss flag, everything looks good.

ffmpeg, add static image to beginning and end with transitions

ffmpeg noob here, trying to help my mother with some videos for real estate walkthroughs. I'd like to set up a simple pipeline that I can run videos through and have outputted as such:
5 second (silent) title card ->
xfade transition ->
property walk through ->
xfade transition ->
5 second (silent) title card
Considerations:
The intro / outro card will be the same content.
The input walkthrough videos will be of variable length so, if possible, a dynamic solution accounting for this would be ideal. If this requires me to script something using ffprobe, I can do that - just need to gain an understanding of the syntax and order of operations.
The video clip will come in with some audio already overlaid. I would like for the title cards to be silent, and have the video/audio clip fade in/out together.
I have gotten a sample working without the transitions:
ffmpeg -loop 1 -t 5 -i title_card.jpg \
-i walkthrough.MOV \
-f lavfi -t 0.1 -i anullsrc \
-filter_complex "[0][2][1:v][1:a][0][2]concat=n=3:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" \
-vcodec libx265 \
-crf 18 \
-vsync 2 \
output_without_transitions.mp4
I have been unable to get it to work with transitions. See below for the latest iteration:
ffmpeg -loop 1 -t 5 -r 60 -i title_card.jpg \
-r 60 -i walkthrough.MOV \
-f lavfi -t 0.1 -i anullsrc \
-filter_complex \
"[0][1:v]xfade=transition=fade:duration=0.5:offset=4.5[v01]; \
[v01][0]xfade=transition=fade:duration=0.5:offset=12.8[v]" \
-map "[v]" \
-vcodec libx265 \
-crf 18 \
-vsync 2 \
output_with_transitions.mp4
This half-works: the initial title card fades into the video, but the second title card never appears. Note that I also removed any references to audio, in an effort to get the transitions alone to work.
I have been beating my head against the wall on this, so help would be appreciated :)
Assuming walkthrough.MOV is 10 seconds long:
ffmpeg -loop 1 -t 5 -framerate 30 -i title_card.jpg \
-i walkthrough.MOV \
-filter_complex \
"[0]settb=AVTB,split[begin][end]; \
[1:v]settb=AVTB[main]; \
[begin][main]xfade=transition=fade:duration=1:offset=4[xf]; \
[xf][end]xfade=transition=fade:duration=1:offset=13,format=yuv420p[v]; \
[1:a]adelay=4s:all=1,afade=t=in:start_time=4:duration=1,afade=t=out:start_time=13:duration=1,apad=pad_dur=4[a]" \
-map "[v]" -map "[a]" -c:v libx265 -crf 18 -movflags +faststart output.mp4
You will need to upgrade your ffmpeg for this to work. The current release version (4.3 as of this answer) is too old, so get a build from the git master branch. See FFmpeg Download for links to builds for your OS, or see FFmpeg Wiki: Compile Guide.
title_card.jpg frame rate, width, and height must match walkthrough.MOV.
See Merging multiple video files with ffmpeg and xfade filter to see how to calculate xfade and afade offsets.
See FFmpeg Filter documentation for details on each filter.
See How to get video duration in seconds? which can help you automate this via scripting.
apad is supposed to automatically work with -shortest, but it doesn't with -filter_complex. So pad_dur is used to add the additional silence to the last title image, but whole_dur can be used instead if that is easier for you. Another method is to use anullsrc as in your question, then concatenate audio only with the concat filter, but I wanted to show adelay+apad as a viable alternative.
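The fixed offsets (4 and 13) can be derived from the clip lengths; a minimal sketch of that arithmetic, assuming a 5 s card, 1 s fades, and a hardcoded 10 s duration in place of a real ffprobe call:

```shell
# Sketch (assumed values, not from the answer above): compute the two xfade
# offsets for a 5 s title card with 1 s cross-fades. In practice DUR would
# come from:
#   DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 walkthrough.MOV)
CARD=5      # title card length in seconds
FADE=1      # transition duration in seconds
DUR=10      # walkthrough duration (hardcoded here for the example)

# The first fade starts FADE seconds before the card ends; the second starts
# FADE seconds before the (shifted) walkthrough ends.
OFFSET1=$(awk -v c="$CARD" -v f="$FADE" 'BEGIN{print c - f}')
OFFSET2=$(awk -v c="$CARD" -v f="$FADE" -v d="$DUR" 'BEGIN{print c - f + d - f}')
echo "xfade offsets: $OFFSET1 and $OFFSET2"   # 4 and 13 for a 10 s walkthrough
```

Substituting the real ffprobe duration for DUR makes the offsets dynamic for variable-length walkthroughs.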

FFMPEG zoom-pan multiple images

I am trying to stitch multiple images with some zoom-pan happening on the images to create a video.
Command:-
ffmpeg -f lavfi -r 30 -t 10 -i \
color=#000000:1920x1080 \
-f lavfi \
-r 30 -t 10 \
-i aevalsrc=0 \
-i "image-1.png" \
-i "image-2.png" \
-y -filter_complex \
"[0:v]fifo[bg];\
[2:v]setpts=PTS-STARTPTS+0/TB,scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v2];\
[bg][v2]overlay=0:0:enable='between(t,0, 5)'[bg];\
[3:v]setpts=PTS-STARTPTS+5.07/TB,scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v3];\
[bg][v3]overlay=0:0:enable='between(t,5, 10)'[bg];\
[1:a]amix=inputs=1:duration=first:dropout_transition=0" \
-map "[bg]" -vcodec "libx264" -preset "veryfast" -crf "15" "output.mp4"
The output is not as expected: it only zooms on the first image; the second image is just static.
FFMPEG version - 4.1
Use
ffmpeg -f lavfi -i color=#000000:1920x1080:r=30:d=10 \
-f lavfi -t 10 -i anullsrc \
-i "image-1.png" \
-i "image-2.png" \
-filter_complex \
"[2:v]scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080'[v2];\
[0:v][v2]overlay=0:0:enable='between(t,0,5)'[bg];\
[3:v]scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080',setpts=PTS+5/TB[v3];\
[bg][v3]overlay=0:0:enable='between(t,5,10)'[bg]" \
-map "[bg]" -map 1:a -vcodec libx264 -preset veryfast -crf 15 -y "output.mp4"
For lavfi sources, it's best to set frame rate and duration where applicable within the filter.
Since you're not looping the images, -t has no effect on them. Since zoompan sets the fps of its output, you can skip the input rate setting. And since each input is a single image, setpts before zoompan has no relevance; it should be applied only after the zoompan whose timestamps need to be shifted.
Since you have only one audio stream, there's no point sending it to amix - there's nothing to mix it with! Just map it directly.
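Following that logic, the setpts shift for each slide is simply the accumulated display time of the slides before it. A small sketch (the 5 s per-slide value and the slide_offset helper name are assumptions for illustration):

```shell
# Sketch: compute the setpts shift, in seconds, for slide N when every slide
# is shown for the same length of time. Slide 1 needs no shift; slide 2 is
# shifted by one slide duration; and so on.
SLIDE=5   # seconds each image is shown

slide_offset() {   # slide_offset N -> seconds to use in setpts=PTS+<N>/TB
  awk -v n="$1" -v s="$SLIDE" 'BEGIN{printf "%d", (n-1)*s}'
}

echo "image-2 filter tail: zoompan,setpts=PTS+$(slide_offset 2)/TB"   # PTS+5/TB
```

This reproduces the setpts=PTS+5/TB used for the second image above, and scales to any number of slides.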

Splitting video in short clip result in some being empty?

I'm trying to split a video (50-100 MB) into several small clips of a few seconds each. I don't need re-encoding, hence my use of codec copy.
However some of the resulting clips don't have any video.
Fast but no video in some files
ffmpeg \
-y \
-i ./data/partie-1:-Apprendre-300-mots-du-quotidien-en-LSF.jauvert-laura.hd.mkv \
-ss 0:00:07.00 \
-codec copy \
-loglevel error \
-to 0:00:10.36 \
'raw/0:00:07.00.au revoir.mkv'
I also tried -map 0 -c copy, -acodec copy -map 0:a -vcodec copy -map 0:v or no option related to codec.
Slow but complete
No argument related to audio/video encoding, it's working but pretty slow.
ffmpeg -y \
-i "$SOURCE_VIDEO_FILE" \
-ss 0:05:37.69 \
-to 0:05:40.64 \
-loglevel error \
'raw/0:05:37.69.pas la peine.mkv'
Question
How do I split a video into small chunks (~2-4 s each) when I have no need for re-encoding?
related: https://video.stackexchange.com/q/25365/23799
Your constraint can't be satisfied in general. Most video codecs use inter-frame compression: each group of pictures starts with a complete keyframe, and the following frames store only differences from it. With -codec copy, ffmpeg can only cut cleanly at keyframe boundaries, so a clip that starts between keyframes has no decodable video until the next keyframe arrives.
Don't use -vcodec copy if you encounter this problem.
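One workaround, given that constraint, is to cut at the keyframes themselves. A sketch that snaps a requested start time back to the nearest earlier keyframe (the timestamps below are made-up example values; in practice they would come from the ffprobe call shown in the comment):

```shell
# Sketch: list keyframe timestamps, then snap a requested cut point to the
# nearest earlier keyframe so a -codec copy cut starts on a decodable frame.
# Real keyframe timestamps would come from:
#   ffprobe -v error -select_streams v -skip_frame nokey \
#     -show_entries frame=pts_time -of csv=p=0 input.mkv
KEYFRAMES="0.000 2.500 5.000 7.500 10.000"   # example values, not from a real file
CUT=7.0                                      # where we wanted the clip to start

# Keep the last keyframe timestamp that is <= the requested cut point.
START=$(echo "$KEYFRAMES" | tr ' ' '\n' | awk -v t="$CUT" '$1 <= t {k = $1} END {print k}')
echo "snap -ss $CUT back to keyframe at $START"   # keyframe at 5.000
```

The clip boundaries then land on keyframes, at the cost of slightly longer or shifted chunks.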
