I've pieced together 3 commands, but my solution involves writing a number of temporary files. I would ultimately like to pipe the output of one command into the next, without the temporary files.
Although many questions discuss filter_complex (which, I believe, is how passing results between commands is accomplished), I can't seem to find an example where the output of one filter_complex feeds into another filter_complex (nested filter_complex commands?). In my example, two distinct inputs are required, producing one output.
/*
Brighten & increase saturation of the original image
Remove the white shape from the black-background silhouette, leaving a transparent shape
Overlay the black-background silhouette over the brightened image, creating a focus point
*/
ffmpeg -i OriginalImage.png -vf eq=brightness=0.06:saturation=2 -c:a copy BrightenedImage.png
ffmpeg -i WhiteSilhouette.png -filter_complex "[0]split[m][a]; [a]geq='if(lt(lum(X,Y),16),255,0)',hue=s=0[al]; [m][al]alphamerge" -c:a copy TransparentSilhouette.png
ffmpeg -i BrightenedImage.png -i TransparentSilhouette.png -filter_complex "[0:v][1:v] overlay=(W-w)/2:(H-h)/2" -c:a copy BrightenedSilhouette.png
Two original inputs and the final output:
Original Image
White Silhouette
Brightened Silhouette
Use
ffmpeg -i OriginalImage.png -i WhiteSilhouette.png -filter_complex "[0]eq=brightness=0.06:saturation=2[img];[1]split[m][a];[a]geq='if(lt(lum(X,Y),16),255,0)',hue=s=0[al];[m][al]alphamerge[sil];[img][sil]overlay=(W-w)/2:(H-h)/2" BrightenedSilhouette.png
You can also just invert WhiteSilhouette to generate the alpha:
ffmpeg -i OriginalImage.png -i WhiteSilhouette.png -filter_complex "[0]eq=brightness=0.06:saturation=2[img];[1]split[m][a];[a]geq='255-lum(X,Y)',hue=s=0[al];[m][al]alphamerge[sil];[img][sil]overlay=(W-w)/2:(H-h)/2" BrightenedSilhouette.png
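The general pattern: each intermediate pad label (here [img], [al], [sil]) takes the place of one of the temporary files, so the three commands collapse into a single filtergraph. As a minimal sketch of the same nesting idea, with purely illustrative filters and filenames:
ffmpeg -i a.png -i b.png -filter_complex "[0]negate[x];[1]hflip[y];[x][y]overlay" out.png
Each semicolon-separated chain consumes labels produced by earlier chains, exactly as the temporary files passed results between the separate commands.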
Related
I have two recordings, inside.MOV and outside.MOV, recorded with two cameras (same model, same settings).
I want to do something like that:
ffmpeg -i inside.MOV -vf "transpose=2,transpose=2" inside_rotated180degree.MOV # rotate 180 degrees
ffmpeg -i outside.MOV -i inside_rotated180degree.MOV \
-filter_complex "[0:v]scale=640:-1[v0];[v0][1:v]vstack=inputs=2" inside_and_outside.MOV # stack them
but in single command.
I've managed to rotate the upper video, but I need to rotate the lower one:
ffmpeg -i outside.MOV -i inside_rotated180degree.MOV \
-filter_complex "[0:v]scale=1920:-1,rotate=PI[v0];[v0][1:v]vstack=inputs=2" inside_and_outside.MOV
I tried modifying the command in various ways to add rotate=PI, but there is always an error in the command/screen/input/... Does anybody know how to rotate the lower video instead of the upper?
You just need to do the same prep work on both video streams (i.e., 2 filter chains) before stacking them together (the final chain):
ffmpeg -i outside.MOV -i inside.MOV \
-filter_complex "[0:v]scale=640:-1,hflip,vflip[v0];\
[1:v]scale=640:-1,hflip,vflip[v1];\
[v0][v1]vstack=inputs=2" inside_and_outside.MOV
transpose or rotate should work the same; I'm using yet another alternative, hflip,vflip, for illustration. I don't know which one is fastest.
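If you'd rather rotate only the lower (second) input, as in the original question, moving the rotation into that input's chain should work just as well; a sketch along those lines (untested, reusing the question's own transpose trick):
ffmpeg -i outside.MOV -i inside.MOV \
-filter_complex "[0:v]scale=640:-1[v0];\
[1:v]scale=640:-1,transpose=2,transpose=2[v1];\
[v0][v1]vstack=inputs=2" inside_and_outside.MOV
Note that vstack requires all inputs to have the same width, hence scaling both streams to 640.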
I am trying to add an overlay to a video using the following command
ffmpeg -y -i "$videoPath" -i "$overlayPath" -filter_complex "[0:v] [1:v] overlay=$overlayPosition" -pix_fmt yuv420p -c:a copy "$outputPath"
However, I would like to resize the overlay to some arbitrary resolution (with no care for keeping proportions). Although I followed a couple of similar solutions from SO (like FFMPEG - How to resize an image overlay?), I am not quite sure about the meaning of the parameters or what I need to add in my case.
I would need to add something like (?)
[1:v]scale=360:360[z] [1:v]overlay=$overlayPosition[z]
This doesn't seem to work so I'm not sure what I should be aiming for.
I would appreciate any assistance, perhaps with some explanation.
Thanks!
You have found all the parts. Let's bring them together:
ffmpeg -i "$videoPath" -i "$overlayPath" -filter_complex "[1:v]scale=360:360[z];[0:v][z]overlay[out]" -map "[out]" -map "0:a" "$outputPath"
For explanation:
We're executing two filters within the "filter_complex" parameter, separated by a semicolon ";".
First, we scale the second video input ([1:v]) to a new resolution and store the output under the label "z" (you can use any name here).
Second, we bring the first input video ([0:v]) and the scaled overlay ([z]) together and store the output under the label "out".
Now it's time to tell ffmpeg what it should pack into our output file:
-map "[out]" (for the video)
-map "0:a" (for the audio of the first input file)
I am trying to crop a video so that I can remove a chunk of the content from the sides of a 360-degree video file using FFmpeg.
I used the following command and it does part of the job:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:0,pad=3840:1920:384:0 output.mp4
This removes the sides of the video, and that was initially exactly what I wanted (A). Now I'm wondering if it is possible to crop in the same way but keep the top third of the video. As such, A is what I have, B is what I want:
I thought I could simply do this:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:640,pad=3840:1920:384:640 output.mp4
But that doesn't seem to work.
Any input would be very helpful.
Use the drawbox filter to fill cropped portion with default colour black.
ffmpeg -i testVideo.mp4 -vf drawbox=w=384:h=1280:x=0:y=640:t=fill,drawbox=w=384:h=1280:x=3840-384:y=640:t=fill -c:a copy output.mp4
The first filter acts on the left side, and the second on the right.
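Since drawbox's w, h, x, and y parameters accept expressions with iw/ih (input width/height), the same command can be written without hard-coding the 1920-tall geometry; a sketch, still assuming 384-wide side bands:
ffmpeg -i testVideo.mp4 -vf "drawbox=w=384:h=2*ih/3:x=0:y=ih/3:t=fill,drawbox=w=384:h=2*ih/3:x=iw-384:y=ih/3:t=fill" -c:a copy output.mp4
Here y=ih/3 keeps the top third of each side band and h=2*ih/3 blacks out the rest, matching the y=640, h=1280 values above for a 1920-tall input.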
I have a square video from Snap Spectacles (1088x1088) that I want to overlay on itself zoomed in and blurred.
Example input frame:
Generated zoomed in and blurred background:
Desired output:
I think I can do this with ffmpeg's maskedmerge, but I'm having trouble finding examples.
There's an example of maskedmerge that merges two videos of the same size and dynamically removes a green screen, and another that merges videos with transparency.
Here's the closest I've been able to get:
ffmpeg -i background.jpg -vf "movie=input.jpg[inner];[in][inner] overlay=#{offset}:0 [out]" -c:a copy output.jpg
tl;dr: given the first two frames, how could I generate the third frame (as video)?
Got it!
Like @Mulvya recommended, I needed a circular mask:
Given that mask snapmask.png, a blurred square background video background.mov, and the original video 65B6354F61B4AF02_HD.MOV, they can be merged like this:
ffmpeg -i background.mov -loop 1 -i snapmask.png -filter_complex " \
[1:v]alphaextract, scale=1080:1080 [mask];\
movie=65B6354F61B4AF02_HD.MOV, scale=1080:1080 [original];\
[original][mask] alphamerge [masked];\
[0:v][masked] overlay=420:0"\
-c:a copy output.mov
You can do one better, though, which is generating the blurred background video on the fly in the same command. Now the only inputs are the original spectacles round video and the circular mask:
ffmpeg -i 65B6354F61B4AF02_HD.MOV -loop 1 -i snapmask.png -filter_complex "\
[0:v]split[a][b];\
[1:v]alphaextract, scale=1080:1080[mask];\
[a]scale=1080:1080 [ascaled];\
[ascaled][mask]alphamerge[masked];\
[b]crop=946.56:532:70.72:278, boxblur=10:5,scale=1920:1080[background];\
[background][masked]overlay=420:0"\
-c:a copy 65B6354F61B4AF02_HD_sq.MOV
That crop=946.56:532:70.72:278 bit is what I found worked best to crop out a rectangular portion of the circular video to zoom into.
It took me a while to wrap my head around the ffmpeg filter system, but it's not as scary as I'd initially thought. The basic syntax is [input]filter=args[output], and filters can be chained without explicitly naming their intermediate outputs (as in [1:v]alphaextract, scale=1080:1080[mask]).
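As a tiny self-contained illustration of that syntax (filenames are placeholders): the first chain flips and scales the video and labels the result [v], and the second chain consumes that label:
ffmpeg -i in.mp4 -filter_complex "[0:v]hflip,scale=640:-2[v];[v]vflip" out.mp4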
I'm a bit puzzled here and can't find an answer to the following question. Is it possible to have 2 .png files watermarked into a video in a single command line with Libavfilter?
I'm using this commandline, but everything I try to get the second PNG image in it fails.
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" outputvideo.flv
This is certainly possible, and should look something like:
ffmpeg -i in.avi -vf "movie=logo1.png [logo1]; movie=logo2.png [logo2]; \
[in][logo1] overlay [tmp]; [tmp][logo2] overlay=50:50" out.flv
Both logo files are read in. One's overlaid at 0,0. Then the next is overlaid at 50,50 on the output from the first overlay filter.
Using more recent versions of FFmpeg, this command could be done slightly less verbosely like so:
ffmpeg -i in.avi -i logo1.png -i logo2.png -filter_complex "overlay [tmp]; \
[tmp] overlay=50:50" out.flv
The first overlay command overlays the first two inputs (in.avi and logo1.png), and the second automatically uses the third input (logo2.png) as its second input.
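For clarity, the same graph with every input pad labeled explicitly should behave identically, since unlabeled filter inputs are linked to unused input streams in order:
ffmpeg -i in.avi -i logo1.png -i logo2.png -filter_complex "[0:v][1:v]overlay[tmp];[tmp][2:v]overlay=50:50" out.flv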