How can I resize an overlay image with ffmpeg? - ffmpeg

I am trying to add an overlay to a video using the following command
ffmpeg -y -i "$videoPath" -i "$overlayPath" -filter_complex "[0:v] [1:v] overlay=$overlayPosition" -pix_fmt yuv420p -c:a copy "$outputPath"
However, I would like to be able to resize the overlay I am about to apply to some arbitrary resolution (with no concern for keeping proportions). Although I followed a couple of similar solutions from SO (like FFMPEG - How to resize an image overlay?), I am not quite sure about the meaning of the parameters or what I need to add in my case.
I would need to add something like (?)
[1:v]scale=360:360[z] [1:v]overlay=$overlayPosition[z]
This doesn't seem to work so I'm not sure what I should be aiming for.
I would appreciate any assistance, perhaps with some explanation.
Thanks!

You have found all the parts. Let's bring them together:
ffmpeg -i "$videoPath" -i "$overlayPath" -filter_complex "[1:v]scale=360:360[z];[0:v][z]overlay[out]" -map "[out]" -map "0:a" "$outputPath"
For explanation:
We're executing two filters within the filter_complex parameter, separated by a semicolon (;).
First, we scale the second video input ([1:v]) to the new resolution and store the output under the label z (you can use any name here).
Second, we bring the first input video ([0:v]) and the scaled overlay ([z]) together and store the output under the label out.
Now it's time to tell ffmpeg what it should pack into our output file:
-map "[out]" (for the video)
-map "0:a" (for the audio of the first input file)
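To keep the question's original options (-pix_fmt yuv420p, audio copy) in the same command, a minimal sketch; the path values, the 10:10 position and the label name ovr are placeholders, not values from the question:

```shell
# Placeholders; substitute your real paths and position.
videoPath="input.mp4"; overlayPath="logo.png"; outputPath="out.mp4"
overlayPosition="10:10"   # x:y of the overlay's top-left corner
# Scale the overlay to 360x360 first, then composite it onto the video.
filter="[1:v]scale=360:360[ovr];[0:v][ovr]overlay=${overlayPosition}[out]"
echo "$filter"
# Uncomment once the paths point at real files:
# ffmpeg -y -i "$videoPath" -i "$overlayPath" -filter_complex "$filter" \
#   -map "[out]" -map 0:a -pix_fmt yuv420p -c:a copy "$outputPath"
```

If keeping proportions is ever wanted after all, scale=360:-1 derives the height from the aspect ratio instead.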

Related

Ffmpeg/lavfi Is it possible to fade out an image overlay that was loaded with a movie= filter not a -i parameter

I've tried something like:
movie='overlaylogo.png',fade=out:st=5:d=1[logo],[in][logo]overlay='0:0'[out]
But it seems to stick at 100% opacity. Adding loop=loop-1 and fps=24 filters after the movie= load seems to have little effect. Is there some step required to convert an image into a video so that fade can apply over it?
The key is to keep your stream alive so the fade filter can do its job. An image input dies immediately after it sends out its frame.
With -i we'd do
ffmpeg -i input.mp4 -loop 1 -i logo.png \
-filter_complex [1:v]fade=out:st=5:d=1:alpha=1,[0:v]overlay=0:0:shortest=1[out] \
-map [out] -map 0:a output.mp4
The -loop 1 input option (an image2 demuxer option) keeps the stream alive. The same thing must be done with the movie source filter. It has a loop option, but with a caveat: "Note that when the movie is looped the source timestamps are not changed, so it will generate non monotonically increasing timestamps." So, you need to add the setpts filter to establish monotonically increasing timestamps yourself:
ffmpeg -i input.mp4 \
-vf movie=logo.png:loop=0,setpts=N/FR/TB,fade=out:st=5:d=1,[in]overlay=0:0:shortest=1[out] \
output.mp4
P.S.: loop=-1 (not 1) should also work (and it likely won't need the setpts filter to fix the timestamps).
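For reference, a sketch of what that loop=-1 variant might look like (an untested assumption; alpha=1 is added so the logo fades to transparent rather than to black, matching the -loop example above):

```shell
# Build the filtergraph for the movie-source variant with loop=-1.
# logo.png is assumed to have an alpha channel, which fade ... alpha=1 needs.
vf="movie=logo.png:loop=-1,fade=out:st=5:d=1:alpha=1,[in]overlay=0:0:shortest=1[out]"
echo "$vf"
# ffmpeg -i input.mp4 -vf "$vf" output.mp4
```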

FFMPEG - crop and pad a video (keep 3840x1920 but with black borders)

I am trying to crop a video so that I can remove a chunk of the content from the sides of a 360-degree video file using FFmpeg.
I used the following command and it does part of the job:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:0,pad=3840:1920:384:0 output.mp4
This will remove the sides of the video, and that was initially exactly what I wanted (A). Now I'm wondering if it is possible to crop in the same way but keep the top third of the video. As such, A is what I have and B is what I want:
I thought I could simply do this:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:640,pad=3840:1920:384:640 output.mp4
But that doesn't seem to work.
Any input would be very helpful.
Use the drawbox filter to fill the cropped portions with the default colour, black.
ffmpeg -i testVideo.mp4 -vf drawbox=w=384:h=1280:x=0:y=640:t=fill,drawbox=w=384:h=1280:x=3840-384:y=640:t=fill -c:a copy output.mp4
The first filter acts on the left side, and the second on the right.
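The same command can also be written with drawbox's expression variables (iw, ih) instead of hard-coded frame dimensions, so it adapts to other input sizes; a sketch assuming the same 384-pixel side bands starting 640 pixels from the top:

```shell
# ih-640 = 1280 and iw-384 = 3456 for a 3840x1920 input, matching the
# hard-coded values in the answer above.
vf="drawbox=w=384:h=ih-640:x=0:y=640:t=fill,drawbox=w=384:h=ih-640:x=iw-384:y=640:t=fill"
echo "$vf"
# ffmpeg -i testVideo.mp4 -vf "$vf" -c:a copy output.mp4
```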

Can I dynamic crop two input video then stack them using ffmpeg?

I want to make an effect of switching back and forth between two videos.
I tried to dynamically crop the two input videos and then stack them using ffmpeg:
ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex \
"[0:v]crop=iw:'2+(mod(n,ih))':0:0[a];[1:v]crop=iw:'ih-2-(mod(n,ih))':0:'2+(mod(n,ih))'[b];[a][b]vstack=inputs=2[v]" \
-map [v] output.mp4
I skip 2 pixels to prevent a zero-sized crop.
But the output video is not what I want. It seems '(mod(n,ih))' is zero all the time.
I don't know what's wrong with it.
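A likely explanation, offered here as an assumption since the thread has no answer: crop evaluates its w and h expressions once at configuration time (only x and y are re-evaluated per frame), so a frame-dependent height stays at its initial value. A sketch that keeps both crop sizes constant and animates only the y offset instead:

```shell
# Each input keeps a fixed half-height crop window; only the window's
# vertical position (y) changes per frame via n.
fc="[0:v]crop=iw:ih/2:0:'mod(n,ih/2)'[a];[1:v]crop=iw:ih/2:0:'mod(n,ih/2)'[b];[a][b]vstack=inputs=2[v]"
echo "$fc"
# ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex "$fc" -map "[v]" output.mp4
```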

overlay audio volume histogram over static image

I'm currently working on a project for music video generation using ffmpeg.
I'd like to know if it's possible to use ffmpeg itself, or a combination of command-line components under Windows, to render a visualization of the audio spectrum (ahistogram?) over a static background image, like one I found on the web:
Any ideas or coding tips?
ffmpeg -loop 1 -i background.png -i music.mp3 -filter_complex "[1]ahistogram=s=789x50:rheight=1[fg];[0][fg]overlay=(W-w)/2:H-h-10:shortest=1,scale='iw-mod(iw,2)':'ih-mod(ih,2)',format=yuv420p[v]" -map "[v]" -map 1:a -movflags +faststart output.mp4
Not exactly what you want (there is no option to create bars as in your image), but perhaps this will be good enough. See the ahistogram filter documentation for more options.
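If bars are a hard requirement, the showfreqs filter with mode=bar may come closer, though it draws a frequency spectrum rather than ahistogram's volume display; an untested sketch along the same lines as the command above:

```shell
# Same overlay layout as the ahistogram command, with showfreqs instead.
fc="[1:a]showfreqs=s=789x50:mode=bar[fg];[0][fg]overlay=(W-w)/2:H-h-10:shortest=1,scale='iw-mod(iw,2)':'ih-mod(ih,2)',format=yuv420p[v]"
echo "$fc"
# ffmpeg -loop 1 -i background.png -i music.mp3 -filter_complex "$fc" \
#   -map "[v]" -map 1:a -movflags +faststart output.mp4
```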

Is it possible to create a ”timeline” using FFMPEG?

I know it’s possible to get a part of a file using the command:
ffmpeg -ss 00:02:00.000 -i input.mp4 -t 00:00:05.000 out.mp4
But is it possible to combine multiple videos with text and other effects?
I want to create an output from the following:
File1.mp4:
Read from 00:02:00.000 to 00:02:05.000
File2.mp4
Read from 00:00:00.000 to 00:01:30.000
Insert overlay image “logo.png” for 20 seconds
File3.mp4
Insert the whole file
Insert text from 00:00:10.000 to 00:00:30.000
It can be done with FFmpeg, but it isn't really an 'editor', so the command will get long, unwieldy and prone to execution errors as the number of input clips and effects grows.
That said, one way to do this is using the concat filter.
ffmpeg -i file1.mp4 -i file2.mp4 -i file3.mp4 -loop 1 -t 20 -i logo.png \
-filter_complex "[0:v]trim=120:125,setpts=PTS-STARTPTS[v1];
[1:v]trim=duration=90,setpts=PTS-STARTPTS[vt2];
[vt2][3:v]overlay=eof_action=pass[v2];
[2:v]drawtext=enable='between(t,10,30)':fontfile=font.ttf:text='Hello World'[v3];
[0:a]atrim=120:125,asetpts=PTS-STARTPTS[a1];
[1:a]atrim=duration=90,asetpts=PTS-STARTPTS[a2];
[v1][a1][v2][a2][v3][2:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4
I haven't specified any encoding parameters like codec or bitrate, etc., assuming you're familiar with those. I also haven't specified arguments for overlay or drawtext, like position. Consult the documentation for a guide to those.
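As an illustration of those unspecified arguments, hedged examples of overlay and drawtext option strings; the positions and margins here are arbitrary choices, not part of the answer:

```shell
# Logo at top-right with a 10px margin; keep the main video after the logo ends.
ov="overlay=x=W-w-10:y=10:eof_action=pass"
# Centered text near the bottom, shown between t=10s and t=30s.
dt="drawtext=enable='between(t,10,30)':fontfile=font.ttf:text='Hello World':x=(w-text_w)/2:y=h-text_h-20"
echo "$ov"
echo "$dt"
```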
