Completely new to working with FFmpeg, what I'm trying to achieve is overlaying graphics at certain positions and times, and cutting out sections of, a single input video.
I've worked out the graphic overlays, so this command is working:
ffmpeg -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][1:v] overlay=192:108:enable='between(t, 0, 5)'[ov0];
[ov0] overlay=192:108:enable='between(t, 5, 10)'" \
-pix_fmt yuv420p output_overlayed.mp4
But when I try to cut out sections using this code:
ffmpeg -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][1:v] overlay=192:108:enable='between(t, 0, 5)'[ov0]; \
[ov0] overlay=192:108:enable='between(t, 5, 10)', \
select='between(t,0,5)+between(t,10,15)', \
setpts='N/FRAME_RATE/TB'" \
-pix_fmt yuv420p output_overlayed_trimmed.mp4
It seems to cut correctly: the original video plays from 0 to 5 seconds, then from 10 to 15 seconds, and then cuts out. But after the point where the video cuts out, there's just a black screen for the rest of the video's duration. I can't seem to get the cut to affect the overall duration of the video.
(The values being passed in are just examples, by the way; e.g. I've got an overlay starting 5 seconds in, but also a cut 5 seconds in.)
I have the timestamps for when the overlays should appear on the non-trimmed video, so the overlaying should happen first and then the trimming. If the video is trimmed first then the overlays will appear at the wrong times.
An alternative way of achieving this that currently works is to run the first command (which just produces a new video file with the overlay) and then separately take this new file and perform the trimming independently:
ffmpeg -ss 0 -to 5 -i /Users/username/projectdir/static/output_overlayed.mp4 \
-ss 15 -to 20 -i /Users/username/projectdir/static/output_overlayed.mp4 \
-filter_complex "[0][1]concat=n=2:v=1:a=1" output_trimmed.mp4
But this means working with 2 separate files and then having to remove the first after the 2nd execution is complete. Ideally I'd combine them into one command which doesn't produce multiple files.
Would appreciate any help - thanks!
How about getting the input twice with different trims (so both video and audio are cut in sync), then concatenating after overlaying? Like this:
ffmpeg -t 5 -i /Users/username/projectdir/static/video.mp4 \
-ss 10 -to 15 -i /Users/username/projectdir/static/video.mp4 \
-i overlay.png -i overlay2.png \
-filter_complex "[0:v][2:v] overlay=192:108[ov0]; \
[1:v][3:v] overlay=192:108[ov1]; \
[ov0][0:a][ov1][1:a] concat=n=2:v=1:a=1[vout][aout]" \
-map "[vout]" -map "[aout]" -pix_fmt yuv420p output_overlayed_trimmed.mp4
I am trying to make a video with ffmpeg where I want to overlay images on a video.
I want to show each image for 5 seconds, and I want the process to loop until the video ends.
I am using the following command, which works perfectly, but I want to modify it to loop the images.
ffmpeg -y -i long_process/2-scrolling.mp4 \
-i upload-images/040820221255452.png \
-i upload-images/040820221255453.png \
-filter_complex "[0:v][1:v]overlay=75:(H-h)/2:enable='between(t, 1, 5)'[v0]; \
[v0][2:v]overlay=75:(H-h)/2:enable='between(t, 5, 10)'" \
-c:a copy long_process/output.mp4
I am very new to ffmpeg and looking for your help.
Thanks in advance.
I got the answer:
ffmpeg -y -i long_process/2-scrolling.mp4 -framerate 1/3 -pattern_type glob -loop 1 -i 'tools/*.png' \
-filter_complex "[0]overlay=75:(H-h)/2:shortest=1" \
-r 60 -c:a copy long_process/output.mp4
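Note that -framerate 1/3 feeds one image every 3 seconds; for the 5 seconds per image described in the question, 1/5 should be the rate. A minimal variation (untested) of the same command:
ffmpeg -y -i long_process/2-scrolling.mp4 -framerate 1/5 -pattern_type glob -loop 1 -i 'tools/*.png' \
-filter_complex "[0]overlay=75:(H-h)/2:shortest=1" \
-r 60 -c:a copy long_process/output.mp4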
ffmpeg noob here, trying to help my mother with some videos for real estate walkthroughs. I'd like to set up a simple pipeline that I can run videos through and have outputted as such:
5 second (silent) title card ->
xfade transition ->
property walk through ->
xfade transition ->
5 second (silent) title card
Considerations:
The intro / outro card will be the same content.
The input walkthrough videos will be of variable length so, if possible, a dynamic solution accounting for this would be ideal. If this requires me to script something using ffprobe, I can do that - just need to gain an understanding of the syntax and order of operations.
The video clip will come in with some audio already overlaid. I would like for the title cards to be silent, and have the video/audio clip fade in/out together.
I have gotten a sample working without the transitions:
ffmpeg -loop 1 -t 5 -i title_card.jpg \
-i walkthrough.MOV \
-f lavfi -t 0.1 -i anullsrc \
-filter_complex "[0][2][1:v][1:a][0][2]concat=n=3:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" \
-vcodec libx265 \
-crf 18 \
-vsync 2 \
output_without_transitions.mp4
I have been unable to get it to work with transitions. See below for the latest iteration:
ffmpeg -loop 1 -t 5 -r 60 -i title_card.jpg \
-r 60 -i walkthrough.MOV \
-f lavfi -t 0.1 -i anullsrc \
-filter_complex \
"[0][1:v]xfade=transition=fade:duration=0.5:offset=4.5[v01]; \
[v01][0]xfade=transition=fade:duration=0.5:offset=12.8[v]" \
-map "[v]" \
-vcodec libx265 \
-crf 18 \
-vsync 2 \
output_with_transitions.mp4
This half-works: the initial title card fades into the video, but the second title card never appears. Note that I also removed any references to audio, in an effort to get the transitions alone working.
I have been beating my head against the wall on this, so help would be appreciated :)
Assuming walkthrough.MOV is 10 seconds long:
ffmpeg -loop 1 -t 5 -framerate 30 -i title_card.jpg -i walkthrough.MOV \
-filter_complex "[0]settb=AVTB,split[begin][end]; \
[1:v]settb=AVTB[main]; \
[begin][main]xfade=transition=fade:duration=1:offset=4[xf]; \
[xf][end]xfade=transition=fade:duration=1:offset=13,format=yuv420p[v]; \
[1:a]adelay=4s:all=1,afade=t=in:start_time=4:duration=1, \
afade=t=out:start_time=13:duration=1,apad=pad_dur=4[a]" \
-map "[v]" -map "[a]" -c:v libx265 -crf 18 -movflags +faststart output.mp4
You will need to upgrade your ffmpeg for this to work. The current release version (4.3 as of this answer) is too old, so get a build from the git master branch. See FFmpeg Download for links to builds for your OS, or see FFmpeg Wiki: Compile Guide.
title_card.jpg frame rate, width, and height must match walkthrough.MOV.
See Merging multiple video files with ffmpeg and xfade filter to see how to calculate xfade and afade offsets.
See FFmpeg Filter documentation for details on each filter.
See How to get video duration in seconds? which can help you automate this via scripting.
apad is supposed to automatically work with -shortest, but it doesn't with -filter_complex. So pad_dur is used to add the additional silence to the last title image, but whole_dur can be used instead if that is easier for you. Another method is to use anullsrc as in your question, then concatenate audio only with the concat filter, but I wanted to show adelay+apad as a viable alternative.
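For the scripting route, a minimal bash sketch of the offset calculation (assuming the 5 s title card and 1 s fades used above; bc handles the fractional arithmetic):
dur=$(ffprobe -v error -show_entries format=duration \
-of default=noprint_wrappers=1:nokey=1 walkthrough.MOV)
# second xfade offset = first offset (4) + walkthrough duration - fade duration (1)
offset2=$(echo "4 + $dur - 1" | bc)
The computed $offset2 then replaces the hard-coded 13 in the second xfade offset and the afade-out start_time.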
I've been playing around with ffmpeg over the past months and can't get rid of an issue I'm facing when adding a GIF file as an overlay.
Basically what I'm trying to achieve is to add a transparent GIF animation as an overlay on top of a MP4 video.
Please find below an example command that I'm using:
ffmpeg \
-i 0689a8a9-43b5-45d2-b0e8-acbea6905ce1.mp4 \
-ignore_loop 0 \
-i 02a6e696-969b-4a90-9444-e4b0b4d6f6da.gif \
-t 10.000000 \
-filter_complex "[0:v][1:v]overlay=enable='between(t, 1, 3)'[overlay]" \
-map '[overlay]' \
-pix_fmt yuv420p \
output.mp4
For a better understanding, please note that:
-ignore_loop 0 allows me to loop the animation as long as the overlay is enabled
-t makes my video last 10s
overlay=enable='between(t, 1.0, 3.0)' sets the interval during which it's visible
However, when I run this command, a few milliseconds before the GIF disappears (at 3 s), it starts blinking. If I take a look at it frame by frame, it actually disappears from the video, then comes back, and eventually goes away as expected.
Please find an example with a black background and a random GIF from giphy at this link. The assets can be found here.
I'm probably missing something here. Do you have any hints?
I'm running ffmpeg 4.3.1.
Thank you in advance
I can replicate this with an arbitrary gif. I suspect a bug in the overlay filter. Feel free to present this to https://trac.ffmpeg.org.
This happens as soon as the temporal filtering is set (the filter is listed as having timeline support), and the behavior furthermore changes depending on the time boundaries. The latter should never be the case.
MWE (minimal working example):
ffmpeg \
-t 10 -s qcif -f rawvideo -pix_fmt rgb24 -r 25 -i /dev/zero \
-ignore_loop 0 -i 'https://media.tenor.com/images/c50ca435dffdb837914e7cb32c1e7edf/tenor.gif' \
-filter_complex "overlay=enable='between(t,3,7)'" \
-f flv - | ffplay -
You could try, as a workaround, converting the gif to an mp4 (ffmpeg -re -i <gif> [...]) and setting the white areas to transparent.
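A rough sketch of that workaround (untested; the input names are placeholders and the colorkey similarity/blend values 0.1/0.2 would need tuning), converting the gif first and then keying out the white background during the overlay:
ffmpeg -i animation.gif -pix_fmt yuv420p animation.mp4
ffmpeg -i input.mp4 -stream_loop -1 -i animation.mp4 \
-filter_complex "[1:v]colorkey=white:0.1:0.2[keyed];[0:v][keyed]overlay=enable='between(t,1,3)'" \
-t 10 -pix_fmt yuv420p output.mp4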
In the official FFmpeg community there is a ticket for this, which hasn't been fixed though:
https://trac.ffmpeg.org/ticket/4803
The ticket mentions that a GIF is being shown before the specified enable time. In my tests, given a 60 fps video, a 10.42 fps GIF (that needs to be shown from 5 to 10 s) blinks once 5 frames before the desired time (at 4.933 s) and becomes visible again right at 5 s. It seems to be connected to the GIF's fps not matching the video's fps.
Anyways, I've found the most elegant workaround, which solves the problem in a single pass and doesn't require converting GIFs to temporary MP4s (because in some cases that could be undesired). So, given the video fps, to overlay a GIF at a certain position (x=10, y=20) from 5 to 10 s without blinking we should use the following:
ffmpeg -y -i "video.mp4" -ignore_loop 0 -i "giphy.gif" \
-filter_complex "[1:v]fps=60[gif];[0:v][gif]overlay=x=10:y=20:enable='between(t,5,10)'" \
-c:a copy -shortest "overlay.mp4"
We can go further and come up with a command line which doesn't require a prior knowledge of the video fps (but you should know the output video fps instead, which is 60 fps in this case):
ffmpeg -y -i "video.mp4" -ignore_loop 0 -i "giphy.gif" \
-filter_complex "[0:v]fps=60[video];[1:v]fps=60[gif];[video][gif]overlay=x=10:y=20:enable='between(t,5,10)'" \
-acodec copy -shortest "overlay.mp4"
I'm trying to concatenate a 15 second clip of a video (MOVIE.mp4) with 5 seconds (no audio) of an image (IMAGE.jpg) using FFmpeg.
Something seems to be wrong with my filtergraph, although I'm unable to determine what. The command I've put together is the following:
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v]scale=480:640[1_v];anullsrc[1_a];[1:v][1:a][1_v][1_a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Unfortunately, this seems to be creating some strange results:
On my personal computer (FFmpeg 4.2.1) it correctly concatenates the movie with the static image; however, the static image lasts for an unbounded length of time. (After pressing Ctrl-C, the movie is still viewable, but is of an extremely long length, e.g. 35 min, depending on when I interrupt the process.)
On a remote machine where I need to do the ultimate video processing (FFmpeg 2.8.15-0ubuntu0.16.04.1), the command does not terminate, and instead, I get cascading errors of the following form:
Past duration 0.611458 too large
...
[output stream 0:0 @ 0x21135a0] 100 buffers queued in output stream 0:0, something may be wrong.
...
[output stream 0:0 @ 0x21135a0] 100000 buffers queued in output stream 0:0, something may be wrong.
I haven't been able to find much documentation that elucidates what these errors mean, so I don't know what's going wrong.
As Gyan pointed out, you only have to add atrim to your audio; anullsrc otherwise generates an endless stream of silence, so the concat never ends:
anullsrc,atrim=0:5[silent-audio]
Instead of scale you could use scale2ref and setsar to automatically make your image the same size and aspect ratio as the video.
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v][1:v]scale2ref[img][v];[img]setsar=1[img]; \
anullsrc,atrim=0:5[silent-audio];[v][1:a][img]
[silent-audio]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Alternatively you could use anullsrc as a 3rd input:
ffmpeg \
-t 15 -i MOVIE.mp4 \
-loop 1 -t 5 -i IMAGE.jpg \
-f lavfi -t 5 -i anullsrc \
-filter_complex "[1:v][0:v]scale2ref[img][v];\
[img]setsar=1[img];[v][0:a][img][2:a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
I am making an online course, and to deter piracy I thought of putting watermarks on the videos (including personal user information) so they cannot be uploaded to sharing websites. Now the hard part: I would like to move the watermark around during the video, between 3-4 random positions, every 30 seconds.
Is this possible with ffmpeg?
Edit: this is an adaptation of the answer in LN's link, which will randomize the position every 30 seconds with no repeats:
ffmpeg -i input.mp4 \
-vf \
"drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow#0.5:text='studentname': \
x=if(eq(mod(t\,30)\,0)\,rand(0\,(W-tw))\,x): \
y=if(eq(mod(t\,30)\,0)\,rand(0\,(H-th))\,y)" \
-c:v libx264 -crf 23 -c:a copy output.mp4
Older answer
You can use a command like the one below:
ffmpeg -i input.mp4 \
-vf \
"drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow#0.5: \
text='studentname':x=200:y=350:enable='between(mod(t\,30*3),0,30)', \
drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow#0.5: \
text='studentname':x=1000:y=600:enable='between(mod(t\,30*3),31,60)', \
drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow#0.5: \
text='studentname':x=450:y=50:enable='between(mod(t\,30*3),61,90)'" \
-c:v libx264 -crf 23 -c:a copy output.mp4
Here, three positions are rotated, with a change occurring every 30 seconds. Each x:y pair is set manually. If you're calling the command from a shell script, you can use a random number generator and feed the values into the command, as sketched below. There is a random function included in the drawtext filter, but it is evaluated each frame, so using it directly results in a pseudo ping-pong game with the text.
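For example, a bash sketch of the scripted variant (the 1000 and 600 bounds are placeholders; in practice you'd subtract the rendered text width/height from the frame dimensions to keep the text on screen):
X1=$((RANDOM % 1000)); Y1=$((RANDOM % 600))
X2=$((RANDOM % 1000)); Y2=$((RANDOM % 600))
X3=$((RANDOM % 1000)); Y3=$((RANDOM % 600))
ffmpeg -i input.mp4 \
-vf "drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow@0.5:text='studentname':x=$X1:y=$Y1:enable='between(mod(t\,30*3),0,30)', \
drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow@0.5:text='studentname':x=$X2:y=$Y2:enable='between(mod(t\,30*3),31,60)', \
drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow@0.5:text='studentname':x=$X3:y=$Y3:enable='between(mod(t\,30*3),61,90)'" \
-c:v libx264 -crf 23 -c:a copy output.mp4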