I have a square video from Snap Spectacles (1088x1088) that I want to overlay on itself zoomed in and blurred.
Example input frame:
Generated zoomed in and blurred background:
Desired output:
I think I can do this with ffmpeg's maskedmerge, but I'm having trouble finding examples.
There's an example of maskedmerge that merges two videos of the same size and dynamically removes a green screen, and another that merges videos with transparency.
Here's the closest I've been able to get:
ffmpeg -i background.jpg -vf "movie=input.jpg[inner];[in][inner] overlay=#{offset}:0 [out]" -c:a copy output.jpg
tl;dr: given the first two frames, how could I generate the third frame (as video)?
Got it!
Like @Mulvya recommended, I needed a circular mask:
Given that mask snapmask.png, a blurred square background video background.mov, and the original video 65B6354F61B4AF02_HD.MOV, they can be merged like this:
ffmpeg -i background.mov -loop 1 -i snapmask.png -filter_complex " \
[1:v]alphaextract, scale=1080:1080 [mask];\
movie=65B6354F61B4AF02_HD.MOV, scale=1080:1080 [original];\
[original][mask] alphamerge [masked];\
[0:v][masked] overlay=420:0;"\
-c:a copy output.mov
You can do one better, though, which is generating the blurred background video on the fly in the same command. Now the only inputs are the original Spectacles round video and the circular mask:
ffmpeg -i 65B6354F61B4AF02_HD.MOV -loop 1 -i snapmask.png -filter_complex "\
[0:v]split[a][b];\
[1:v]alphaextract, scale=1080:1080[mask];\
[a]scale=1080:1080 [ascaled];\
[ascaled][mask]alphamerge[masked];\
[b]crop=946.56:532:70.72:278, boxblur=10:5,scale=1920:1080[background];\
[background][masked]overlay=420:0"\
-c:a copy 65B6354F61B4AF02_HD_sq.MOV
That crop=946.56:532:70.72:278 bit is what I found worked best to crop out a rectangular portion of the circular video to zoom into.
It took me a while to wrap my head around the ffmpeg filter system for how to do this, but it's not as scary as I'd initially thought. The basic syntax is [input]command args[output], and commands can be chained without explicitly naming their outputs (like in [1:v]alphaextract, scale=1080:1080[mask]).
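For illustration, here's a minimal two-chain filtergraph in that style (filenames are placeholders): the first chain scales the video and labels its output [small]; the second reads [small], blurs it, and labels the result [out] so it can be mapped:
ffmpeg -i in.mp4 -filter_complex "[0:v]scale=640:-1[small];[small]boxblur=5[out]" -map "[out]" out.mp4
The same thing can be written as a single chain, [0:v]scale=640:-1,boxblur=5[out], since a comma feeds one filter's output straight into the next.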
I have two WebM VP9 files that I am trying to blend using FFmpeg's blend functionality.
I get a darker background, almost like a placeholder for the asset that is to be blended, whereas the background should be transparent and show the same colour as the rest of the background.
One of the videos is a zoom animation that starts from 1 pixel in size and then increases in size to 50 x 50 pixels.
The other is a solid red background video, 640 x 360 px.
At frame 1 the result looks like this:-
At about 0.5 seconds through, the result looks like this:-
At the end of the sequence, the zoom animation webm fills that darker square you see (50 x 50 pixels).
The code to do the blending of the two webm files looks like this:-
filter_complex.extend([
"[0:v][1:v]overlay=0:0:enable='between(t,0,2)'[out1];",
'[out1]split[out1_1][out1_2];',
'[out1_1]crop=50:50:231:251:exact=1,setsar=1[cropped1];',
'[cropped1][2:v]blend=overlay[blended1];',
"[out1_2][blended1]overlay=231:251:enable='between(t,0,2)'[out2]"
])
This overlays a red background onto a white background, making the red background the new background colour.
It then splits the red background into two, so that there is one output for cropping and another output for overlaying.
It then crops the location and size of the layer to be blended out of the red background. We do this because blending works only on an asset of the same size.
It then performs the blend of the zoom animation onto the cropped background.
It then overlays the blended result over the red background.
Unfortunately I'm unable to attach videos on Stack Overflow, otherwise I would have included them.
The full command looks like this:-
ffmpeg -i v1_background.webm -itsoffset 0 -c:v libvpx-vp9 -i v2_red.webm -itsoffset 0 -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex "[0:v][1:v]overlay=0:0[out1];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:231:251:exact=1,setsar=1[cropped1];[cropped1][2:v]blend=overlay[blended1];[out1_2][blended1]overlay=231:251" output_video_with_blended_overlaid_asset.mp4
I have checked the input vp9 webm zoom video file by extracting the first frame of the video
ffmpeg -vcodec libvpx-vp9 -i zoom.webm -frames:v 1 first_frame.png
and inspecting the colours in all the channels in GIMP. The colours (apart from the opaque pixel in the middle) are all zero, including the alpha channel.
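A quick way to double-check that the decoder is actually producing frames with alpha (and not silently falling back to yuv420p) is to look at the fmt field reported by the showinfo filter; note the forced libvpx-vp9 decoder, since FFmpeg's native vp9 decoder drops the alpha data:
ffmpeg -c:v libvpx-vp9 -i zoom.webm -vf showinfo -frames:v 1 -f null - 2>&1 | grep -o 'fmt:[a-z0-9]*'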
Note that I tried adding in all_mode, so that the blend command is blend=all_mode=overlay; however, this still shows the darker placeholder under the animation asset. In other words, this command
ffmpeg -i v1_background.webm -itsoffset 0 -c:v libvpx-vp9 -i v2_red.webm -itsoffset 0 -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex "[0:v][1:v]overlay=0:0[out1];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:231:251:exact=1,setsar=1[cropped1];[cropped1][2:v]blend=all_mode=overlay[blended1];[out1_2][blended1]overlay=231:251" output_video_with_blended_all_mode_overlay_asset.mp4
also doesn't work.
Trying to convert the formats to RGBA first doesn't help either (the command below is simplified a bit):
ffmpeg -c:v libvpx-vp9 -i v2_red.webm -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex "[0:v]format=pix_fmts=rgba[out1];[1:v]format=pix_fmts=rgba[out2];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:0:0:exact=1,setsar=1[cropped1];[cropped1][out2]blend=all_mode=dodge[blended1];[out1_2][blended1]overlay=50:50" output_video_with_blended_all_mode_dodge_rgba_and_alpha_premultiplied_overlay.mp4
Adding an alpha premultiply didn't help either:
ffmpeg -c:v libvpx-vp9 -i v2_red.webm -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex "[0:v]format=pix_fmts=rgba[out1];[1:v]setsar=1,format=pix_fmts=rgba,geq=r='r(X,Y)*alpha(X,Y)/255':g='g(X,Y)*alpha(X,Y)/255':b='b(X,Y)*alpha(X,Y)/255'[out2];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:0:0:exact=1,setsar=1[cropped1];[cropped1][out2]blend=all_mode=dodge[blended1];[out1_2][blended1]overlay=50:50" output_video_with_blended_all_mode_dodge_rgba_and_alpha_premultiplied_overlay.mp4
Is there a workaround I could use so that the background stays transparent?
I was looking for a way of changing the input pixel format within the filter_complex stream to see if that would work, but couldn't find anything about it.
I have this example video, recorded by Kazam:
https://user-images.githubusercontent.com/1997316/178513325-98513d4c-49d4-4a45-bcb2-196e8a76fa5f.mp4
It's a 1022x728 video.
I need to add a drop shadow identical to the one generated by the "Drop shadow (legacy)" filter of GIMP with the default settings. So I generated a PNG with GIMP containing only the drop shadow. It's a 1052x758 image:
Now I want to put the video over the image to get a new video with the drop shadow. The wanted effect for the first frame is:
So, the video must be placed over the image. The top-left corner of the video must be in the position 11x11 of the background image.
How can I achieve this result?
I tried without success the following command. What's wrong?
ffmpeg -i shadow.png -i example.mp4 -filter_complex "[0:v][1:v] overlay=11:11'" -pix_fmt yuv420p output.mp4
As for the transparency of the PNG background image: if it can't be maintained, it's okay for the shadow to be on a white background. But if it can be maintained by using an animated GIF as the output format, that's better.
The solution is to remove the transparency from shadow.png. Then:
ffmpeg -i example.mp4 -filter_complex "[0:v] palettegen" palette.png
ffmpeg -loop 1 -i shadow.png -i example.mp4 -i palette.png -filter_complex "[1:v] fps=1,scale=1022:-1[inner];[0:v][inner]overlay=11:11:shortest=1[new];[new][2:v] paletteuse[out]" -map '[out]' -y output.gif
The result is exactly what I wanted:
This solution is inspired by the answer https://stackoverflow.com/a/66318325 and by the article https://www.baeldung.com/linux/convert-videos-gifs-ffmpeg
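For what it's worth, the two palette passes can usually be folded into one command by splitting the composited stream, feeding one copy to palettegen and the other to paletteuse (a sketch using the same settings as above):
ffmpeg -loop 1 -i shadow.png -i example.mp4 -filter_complex "[1:v]fps=1,scale=1022:-1[inner];[0:v][inner]overlay=11:11:shortest=1,split[a][b];[a]palettegen[p];[b][p]paletteuse[out]" -map "[out]" -y output.gif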
I'm new to ffmpeg and I'm stuck, not sure how to continue.
I have two videos that I would like to merge into a single video with a background color/image overlay.
The image below shows what I'm trying to achieve: the rectangle is the main video, the circle is another video, and the red is the background color/image.
video info:
one.mp4
original size: 1280x720
position/resized: (x: 20, y: 20, w: 980, h: keep aspect ratio)
two.mp4
original size: 1280x720
position/resized: (bottom-left corner, w: keep aspect ratio, h: 200)
So far I've only been able to add a background color, but with a problem: the audio is gone. And I haven't even added the second video to my command yet.
ffmpeg -f lavfi -i color=c=white:s=1920x1080:r=24 -i video.mp4 -filter_complex "[0:v][1:v]overlay=shortest=1,format=yuv420p[out]" -map "[out]" output.mp4
Any suggestion on how to achieve it?
Thanks in advance.
[edit 3/19: added full command for OP's need]
Try this:
ffmpeg -i one.mp4 -i two.mp4 \
-f lavfi -i color=c=white:s=1920x1080:r=30 \
-f lavfi -i "color=c=white:s=400x400:r=1,\
format=yuva444p,\
geq='lum(X,Y)':'cb(X,Y)':'cr(X,Y)':\
a='if(lt((X-200)^2+(Y-200)^2,200^2),0,255)'" \
-filter_complex "\
[0:v]scale=980:-1[L0];\
[1:v]crop=600:600:20:20,scale=400:-1[L1];\
[L1][3:v]overlay=shortest=1[L2];\
[2:v][L0]overlay=20:20:shortest=1[L3];\
[L3][L2]overlay=W-420:H-420[vout]" \
-map "[vout]" -map 0:a output.mp4
1st color input (2:v): This serves as the background canvas. Its framerate r=30 will be the output framerate.
2nd color input (3:v): This is the circular mask applied to the cropped two.mp4. Note that I fixed the geq filter so the color is properly maintained (my earlier take was made for grayscale video)
one.mp4 (0:v) gets scaled to the final size -> [L0]
two.mp4 (1:v) is first cropped to a square, capturing what is to be shown in the circular cutout. The crop arguments are w:h:x:y, where x:y is the coordinate of the upper-left corner. The cropped frame is then scaled to the final size (400x400 in this example) -> [L1]
The cropped and scaled two.mp4 ([L1]) is then masked by the 3:v input with the first overlay filter, so it shows through the circular cutout.
The shortest=1 option makes the overlay filters end when the videos end (rather than the color sources, which run forever)
The final two overlay filters place the two prepared videos at the desired spots on the canvas. Adjust their arguments as needed to change positions
Because SD (720p) video inputs are placed onto an HD (1080p) output canvas, you may not need to do any scaling at all. If so, remove the scale filters from the filtergraph. For one.mp4 the first filterchain goes away entirely, so replace the [L0] label with [0:v] in the modified filtergraph.
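In that case the filtergraph might reduce to something like this (a sketch; two.mp4 is cropped straight to the 400x400 mask size, with the crop origin adjusted to taste):
"[1:v]crop=400:400:20:20[L1];[L1][3:v]overlay=shortest=1[L2];[2:v][0:v]overlay=20:20:shortest=1[L3];[L3][L2]overlay=W-420:H-420[vout]"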
That should do it. As we went through earlier, each filtergraph expression must reach ffmpeg as a single argument; the quotes and backslash line continuations above handle that in a POSIX shell, but if your shell behaves differently, collapse each expression onto one line with no spaces or backslashes in the middle.
[original]
An approach I implemented in the past is to manipulate the masking image with the geq filter. You can try the following:
For illustration purposes, I use a rectangle with its upper-left corner at (10,10) and lower-right corner at (1500,900), and a circle with its center at (1700,950) and radius 100.
ffmpeg -i input.mp4 \
-vf "color=c=red:s=1920x1080:r=1:d=1,\
format=rgba,\
geq=lum='lum(X,Y)':a='if(between(X,10,1500)*between(Y,10,900)+lt((X-1700)^2+(Y-950)^2,100^2),0,255)',\
[v:0]overlay" \
output.mp4
The color source filter only generates 1 frame (1 fps for 1 second), and overlay applies that frame to every frame of the input video
The color output does not have an alpha channel, so you need to convert it to a pix_fmt that has one (rgba here)
geq sets the alpha value of every pixel: if the coordinate (X,Y) falls inside the rectangle or the circle, alpha is set to 0 (100% transparent, so the video shows through); otherwise it is set to 255 (opaque). See the FFmpeg expression documentation
overlay takes the input video stream v:0 and overlays the output of the geq filter on it
Because the filtergraph is given with the -vf output option, the input audio stream is mapped automatically
[edit single line version]
ffmpeg -i input.mp4 -vf "color=c=red:s=1920x1080:r=1:d=1,format=rgba,geq=lum='lum(X,Y)':a='if(between(X,10,1500)*between(Y,10,900)+lt((X-1700)^2+(Y-950)^2,100^2),0,255)',[v:0]overlay" output.mp4
I am trying to crop a video so that I can remove a chunk of the content from the sides of a 360-degree video file using FFmpeg.
I used the following command and it does part of the job:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:0,pad=3840:1920:384:0 output.mp4
This will remove the sides of the video, and that was initially exactly what I wanted (A). Now I'm wondering if it is possible to crop in the same way but keep the top third of the video. As such, A is what I have, B is what I want:
I thought I could simply do this:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:640,pad=3840:1920:384:640 output.mp4
But that doesn't seem to work.
Any input would be very helpful.
Use the drawbox filter to fill the cropped portions with the default colour black.
ffmpeg -i testVideo.mp4 -vf drawbox=w=384:h=1280:x=0:y=640:t=fill,drawbox=w=384:h=1280:x=3840-384:y=640:t=fill -c:a copy output.mp4
The first filter acts on the left side, and the 2nd on the right.
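Since drawbox accepts expressions with iw/ih, the same idea can be written without hard-coding the 3840x1920 frame size (an untested variation):
ffmpeg -i testVideo.mp4 -vf drawbox=w=384:h=ih-640:x=0:y=640:t=fill,drawbox=w=384:h=ih-640:x=iw-384:y=640:t=fill -c:a copy output.mp4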
I've pieced together 3 commands, but my solution involves writing a number of temporary files. I would ultimately like to pipe the output of one command into the next, without the temporary files.
Although many questions discuss filter_complex (which I believe is how passing results along as inputs is accomplished), I can't seem to find an example of one filter_complex command flowing into another (nested filter_complex commands?). In my example, two distinct inputs are required, resulting in one output.
/*
Brighten & increase saturation of original image
Remove white shape from black background silhouette, leaving a transparent shape
Overlay black background silhouette over brightened image. Creating a focus point
*/
ffmpeg -i OriginalImage.png -vf eq=brightness=0.06:saturation=2 BrightenedImage.png
ffmpeg -i WhiteSilhouette.png -filter_complex "[0]split[m][a]; [a]geq='if(lt(lum(X,Y),16),255,0)',hue=s=0[al]; [m][al]alphamerge" TransparentSilhouette.png
ffmpeg -i BrightenedImage.png -i TransparentSilhouette.png -filter_complex "[0:v][1:v] overlay=(W-w)/2:(H-h)/2" BrightnedSilhouette.png
Two original inputs and final output
Original Image
White Silhouette
Brightened Silhouette
Use
ffmpeg -i OriginalImage.png -i WhiteSilhouette.png -filter_complex "[0]eq=brightness=0.06:saturation=2[img];[1]split[m][a];[a]geq='if(lt(lum(X,Y),16),255,0)',hue=s=0[al];[m][al]alphamerge[sil];[img][sil]overlay=(W-w)/2:(H-h)/2" BrightnedSilhouette.png
You can also just invert WhiteSilhouette to generate the alpha:
ffmpeg -i OriginalImage.png -i WhiteSilhouette.png -filter_complex "[0]eq=brightness=0.06:saturation=2[img];[1]split[m][a];[a]geq='255-lum(X,Y)',hue=s=0[al]; [m][al]alphamerge[sil];[img][sil]overlay=(W-w)/2:(H-h)/2" BrightnedSilhouette.png
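If you'd rather avoid a geq expression entirely, the inversion could presumably also be done with the negate filter (an untested variation on the command above):
ffmpeg -i OriginalImage.png -i WhiteSilhouette.png -filter_complex "[0]eq=brightness=0.06:saturation=2[img];[1]split[m][a];[a]negate,hue=s=0[al];[m][al]alphamerge[sil];[img][sil]overlay=(W-w)/2:(H-h)/2" BrightnedSilhouette.png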