Overlaying one video on another one, and making black pixels transparent - ffmpeg

I'm trying to use FFMPEG to create a video with one video overlaid on top of another.
I have 2 MP4s. I need to make all BLACK pixels in the overlay video transparent so that I can see the main video underneath it.
I found two ways to overlay one video on another:
First, the following positions the overlay in the center, and therefore, hides that portion of the main video beneath it:
ffmpeg -i 1.mp4 -vf "movie=2.mp4 [a]; [in][a] overlay=352:0 [b]" combined.mp4 -y
And this one places the overlay video on the left, but its opacity is set to 50%, so at least the main video beneath it is visible:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS, format=yuva420p,colorchannelmixer=aa=0.5[bottom]; [top][bottom]overlay=shortest=0" -acodec libvo_aacenc -vcodec libx264 out.mp4 -y
My goal is simply to make all black pixels in the overlay (2.mp4) completely transparent. How can this be done?

The notional way to do this is to chroma-key the black out and then overlay, but as @MoDJ said, this likely won't produce satisfactory results. Neither will the method I suggest below, but it's worth a try.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex
"[1]split[m][a];
[a]geq='if(gt(lum(X,Y),16),255,0)',hue=s=0[al];
[m][al]alphamerge[ovr];
[0][ovr]overlay"
output.mp4
Above, I duplicate the overlay video stream, then use the geq filter to manipulate the luma values so that any pixel with luma greater than 16 (i.e. not pure black) has its luma set to 255 (white), else 0. Since I haven't provided expressions for the two color channels, geq falls back on the luma expression. We don't want that, so I use the hue filter to nullify those channels. Then I use the alphamerge filter to merge this as an alpha channel with the first copy of the overlay video. Then, the overlay. Like I said, this may not produce satisfactory results. You can tweak the value 16 in the geq filter to change the black threshold; a suggested range is 16-24 for limited-range (Y: 16-235) video files.
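For reference, the chroma-key route mentioned at the top can be sketched with the colorkey filter; the similarity and blend values (0.1 here) are guesses you would need to tune, not values from either answer:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[1:v]colorkey=black:0.1:0.1[keyed];[0:v][keyed]overlay" -c:a copy output.mp4
Expect the same haloing around anti-aliased edges that the geq approach produces.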

You will not be able to get a "replace black pixels" approach to work properly. What you actually want is a foreground video with a real alpha channel that can be manipulated and tested before doing an overlay on a background. For an extended example that describes the problems, please take a look at my blog post on the subject. With FFmpeg, an easy way to import video with an alpha channel is a QuickTime file encoded with the Animation codec at 32 BPP.
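Assuming you have rendered the foreground as a QuickTime Animation (qtrle) file with an alpha channel, the overlay step itself is simple; the file names here are placeholders:
ffmpeg -i background.mp4 -i foreground.mov -filter_complex "[0:v][1:v]overlay" -c:a copy output.mp4
overlay respects the alpha channel of the second input, so nothing else is needed.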

Related

FFMpeg to resize any video to fit 1080x1920 vertical, without cropping, instead by shrinking and adding blurred borders?

I found some posts explaining how to turn any video horizontal by adding blurred borders using FFMpeg, but I want to convert videos to vertical 1080x1920. I don't want it to enlarge the video, nor to crop it if a dimension exceeds 1080 or 1920. Instead, I want it to shrink the video until it fits entirely inside 1080x1920, and then add blurred borders to the empty areas.
This is the snippet I found, but when I tried reversing the numbers, it actually cropped the video.
ffmpeg -i input.mp4 -lavfi "[0:v]scale=1920*2:1080*2,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[0:v]scale=-1:1080[ov];[bg][ov]overlay=(W-w)/2:(H-h)/2,crop=w=1920:h=1080" output.mp4
Simple method:
ffmpeg -i input.mp4 -filter_complex "[0:v]boxblur=40,scale=1080x1920,setsar=1[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=y=(H-h)/2" -c:a copy output.mp4
"Simple" because it forces the background to 1080x1920 and ignores aspect ratio. So the background it will looked stretched, but it is blurred so much nobody will care or notice.

FFmpeg film grain

I want to add a film grain effect using FFMPEG if possible.
I want to take a nice, clean, computer-rendered scene and filter it for a gritty black-and-white 16mm film look. As an example, something like Clerks: https://www.youtube.com/watch?v=Mlfn5n-E2WE
According to Simulating TV noise I should be able to use the following filter:
-filter_complex "geq=random(1)*255:128:128;aevalsrc=-2+random(0)"
but when I add it to my ffmpeg command
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf format=yuv420p -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y OUTPUT.AVI
so the command is now
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf format=yuv420p -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y -filter_complex "geq=random(1)*255:128:128;aevalsrc=-2+random(0)" OUTPUT.AVI
I get the message
Filtergraph 'format=yuv420p' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
How can I change my ffmpeg command line so the grain filter works? Additionally, can I add a slight blur too? Old 16mm footage looks more blurred than grainy.
Thanks for any tips.
I just needed to make a film-grain effect and wanted something "neater" than just randomizing every pixel. Here's what I came up with: FFmpeg film grain.
It starts with white noise:
Then it uses the "deflate" and "dilation" filters to cause certain features to expand out to multiple pixels:
The effect is pretty subtle but you can see that there are a few larger "blobs" of white and black in amongst the noise. This means that the features of the noise aren't just straight-up single pixels any more. Then, that image gets halved in resolution, because it was being rendered at twice the resolution of the target video.
The highest-resolution detail is now softened, and the clumps of pixels are reduced in size to be 1-2 pixels in size. So, this is the noise plane.
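A rough approximation of that noise-plane generation with stock filters might look like this; the 2x render size (3840x2160 for a 1920x1080 target), single output frame, and file name are my assumptions, not taken from the post:
ffmpeg -f lavfi -i nullsrc=s=3840x2160 -vf "geq=random(1)*255:128:128,deflate,dilation,scale=1920:1080" -frames:v 1 noise_plane.png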
Then, I take the source video and do some processing on it.
Desaturate:
Filter luminance so that the closer an input pixel is to luminance level 75 (arrived at experimentally), the brighter the output pixel is. If the input pixel is darker or brighter than that, the output pixel is uniformly darker. This creates "bands" of brightness where the luminance level is close to 75.
This is then scaled down, and this is where the level of noise is "tuned". This band selection means that we will be adding noise specifically in the areas of the frame where it will be most noticed. Not adding noise in other areas leaves more bits to encode the noise.
This scaled mask is then applied to the previously-computed noise. In this screenshot, I've removed the tuning so that the noise is easily visible:
The areas not selected by the band filter are greatly scaled down and are essentially black; the noise variation fades to nothing.
Here's what it looks like with a scaling factor of 0.32 -- pretty subtle:
I then invert this image, so that the parts with no noise are solid white, and then areas with noise pull down slightly from the white:
Finally, I pull another copy of the same source video, apply this computed image to it as an alpha channel and overlay it on black, so that the film grain dots, which are slightly less white, become slightly darker pixels.
The effect is pretty subtle, hard to see in a still like that when it's not moving, but if you tune the noise way up, you can get frames like this:
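In filtergraph terms, that final compositing step might look roughly like this (a sketch with placeholder file names and sizes, not the exact graph from the post): the computed mask becomes the source's alpha channel via alphamerge, and the result is overlaid on black.
ffmpeg -i source.mp4 -i noise_mask.mp4 -f lavfi -i color=black:s=1920x1080:r=30 -filter_complex "[0:v][1:v]alphamerge[fg];[2:v][fg]overlay=shortest=1" grained.mp4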
The filters "geq=random(1)*255:128:128;aevalsrc=-2+random(0)" is for white noise
For "a gritty black and white 16mm film look", you want something like instead,
-vf hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t
The format you specified is a filter, and all filters applied on an input should be specified in a single chain, so it should be:
-vf hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t,format=yuv420p
See filter docs at https://ffmpeg.org/ffmpeg-filters.html for descriptions and list of parameters you can tweak.
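Putting that together with the original command from the question (untested; it only swaps in the new -vf chain):
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf "hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t,format=yuv420p" -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y OUTPUT.AVI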

using ffmpeg to create a wavefile image from opus

I have been trying to use ffmpeg to create a waveform image from an Opus file. So far I have found three different methods but cannot seem to determine which one is the best.
The end result is hopefully to have a sound-wave that is only approx. 55px in height. The image will become part of a css background-image.
Adapted from Generating a waveform using ffmpeg:
ffmpeg -i file.opus -filter_complex
"showwavespic,colorbalance=bs=0.5:gm=0.3:bh=-0.5,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=black#0.5"
file.png
which produces this image:
Next, I found this one (and my favorite because of the simplicity):
ffmpeg -i test.opus -lavfi showwavespic=split_channels=1:s=1024x800 test.png
And here is what that one looks like:
Finally, this one from FFmpeg Wiki: Waveform, but it seems less efficient using a second utility (gnuplot) rather than just ffmpeg:
ffmpeg -i file.opus -ac 1 -filter:a aresample=4000 -map 0:a -c:a pcm_s16le -f data - | \
gnuplot -e "set terminal png size 525,050; set output 'file.png'; unset key; unset tics; unset border; set lmargin 0; set rmargin 0; set tmargin 0; set bmargin 0; plot '
Option two is my favorite, but I don't like the margins on the top and bottom of the waveforms.
Option three (using gnuplot) makes the best 'shaped' image for our needs, since the initial spike in the sound otherwise makes the rest almost too small to use (the lines tend to almost disappear) when the image is only 50 pixels high.
Any suggestions on how I might best approach this? I really understand very little about any of the options I see, except of course for the size. Note too that I have tens of thousands of these to process, so naturally I want to make a wise choice at the very beginning.
Original and manipulated waveforms.
You can use the compand filter to adjust the dynamic range. drawbox is then used to make the horizontal line.
ffmpeg -i test.opus -filter_complex \
"compand=gain=-6,showwavespic=s=525x50, \
drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
-vframes 1 output.png
It won't be quite as accurate a representation of your audio as the original waveform, but it may be an improvement visually, especially on such a wide scale.
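If the loud opening still dwarfs the rest at 50 pixels tall, compand can be given an explicit transfer curve to compress the dynamic range harder; these point values are only a starting guess to experiment with:
ffmpeg -i test.opus -filter_complex "compand=attacks=0:decays=0:points=-80/-80|-40/-15|0/0,showwavespic=s=525x50,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" -vframes 1 output.png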
Also see FFmpeg Wiki: Waveform.

What is the variable "a" in ffmpeg?

In using the scale filter with ffmpeg, I see many examples similar to this:
ffmpeg -i input.mov -vf scale="'if(gt(a,4/3),320,-2)':'if(gt(a,4/3),-2,240)'" output.mov
What does the variable a signify?
From the ffmpeg scale options docs.
a    The same as iw / ih
where
iw   Input width
ih   Input height
My guess after reading https://trac.ffmpeg.org/wiki/Scaling%20(resizing)%20with%20ffmpeg is that a is the aspect ratio of the input file.
The example given on the webpage gives you an idea how to use it:
Sometimes there is a need to scale the input image in such way it fits
into a specified rectangle, i.e. if you have a placeholder (empty
rectangle) in which you want to scale any given image. This is a
little bit tricky, since you need to check the original aspect ratio,
in order to decide which component to specify and to set the other
component to -1 (to keep the aspect ratio). For example, if we would
like to scale our input image into a rectangle with dimensions of
320x240, we could use something like this:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'"
output_320x240_boxed.png
In the ffmpeg wiki "Scaling (resizing) with ffmpeg", they use this example:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'" output.png
The purpose of the gt(a,4/3) is, as far as I can tell, to determine the orientation (portrait or landscape) of the video (or image, in this case).
This wouldn't work for some unusual aspect ratios (7:6, for example, where gt(a,4/3) would incorrectly evaluate to false).
It seems to me better to use the height and width of the video, so the above line would instead be:
ffmpeg -i input.jpg -vf scale="'if(gt(iw,ih),320,-1)':'if(gt(iw,ih),-1,240)'" output.png

ffmpeg: thumbnail of frame, preserve aspect ratio, apply background / padding / fill colour

I already have found out how to scale the thumbnail to stay within specified bounding dimensions while maintaining aspect ratio. For example, to get the frame shown at 6 seconds into the input.mp4 video file, and scale it to fit into 96x60 (16:10 aspect ratio):
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf scale="'if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)'" output.png
This is fine, it works.
Next, I would like to do the same, but if the video's aspect ratio is not exactly 16:10, then I would like to force the output image to have an aspect ratio of 16:10 by taking the above transformation, and filling or padding the space with white. That is, I want the output to be as if I took, say, a 96x48 image, and laid it over a 96x60 white background, resulting in white bars above and below the 96x48 image.
Ideally, I do not want to resort to using another tool or library, such as ImageMagick. It would be best if ffmpeg could do this on its own.
Here's what I went with. For the -vf argument:
-vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white"
This applies two filters in sequence, separated by a comma.
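Combined with the thumbnail command from the question, the full invocation is simply:
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white" output.png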
Another approach, with target_W = 1124 and target_H = 2436: scale the frame to fit inside 1124x2436, then pad the remainder (green here) to reach the exact target size:
ffmpeg -i 1.mp4 -ss 1 -vframes 1 -vf "scale=min(iw*2436/ih\,1124):min(2436\,ih*1124/iw),pad=1124:2436:(1124-iw)/2:(2436-ih)/2:green" output.png
