What is the variable "a" in ffmpeg?

In using the scale filter with ffmpeg, I see many examples similar to this:
ffmpeg -i input.mov -vf scale="'if(gt(a,4/3),320,-2)':'if(gt(a,4/3),-2,240)'" output.mov
What does the variable a signify?

From the ffmpeg scale filter options docs:
a: the same as iw / ih
where iw is the input width and ih is the input height.

My guess after reading https://trac.ffmpeg.org/wiki/Scaling%20(resizing)%20with%20ffmpeg is that a is the aspect ratio of the input file.
The example given on the webpage gives you an idea how to use it:
Sometimes there is a need to scale the input image in such a way that it fits
into a specified rectangle, i.e. if you have a placeholder (empty
rectangle) in which you want to scale any given image. This is a
little bit tricky, since you need to check the original aspect ratio,
in order to decide which component to specify and to set the other
component to -1 (to keep the aspect ratio). For example, if we would
like to scale our input image into a rectangle with dimensions of
320x240, we could use something like this:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'" output_320x240_boxed.png

In the ffmpeg wiki "Scaling (resizing) with ffmpeg", they use this example:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'" output.png
The purpose of gt(a,4/3) is, as far as I can tell, to determine the orientation (portrait or landscape) of the video (or image, in this case).
This wouldn't work for some unusual aspect ratios (7:6, for example), where gt(a,4/3) would incorrectly evaluate to false.
It seems to me better to use the height and width of the video, so the above line would instead be:
ffmpeg -i input.jpg -vf scale="'if(gt(iw,ih),320,-1)':'if(gt(iw,ih),-1,240)'" output.png

Related

Trying to convert multiple images into a video [duplicate]

I am trying to encode an .mp4 video from a set of frames with FFmpeg, using the libx264 codec.
This is the command I am running:
/usr/local/bin/ffmpeg -r 24 -i frame_%05d.jpg -vcodec libx264 -y -an video.mp4
I sometimes get the following error:
[libx264 @ 0xa3b85a0] height not divisible by 2 (520x369)
After searching around a bit it seems that the issue has something to do with the scaling algorithm and can be fixed by adding a -vf argument.
However, in my case I don't want to do any scaling. Ideally, I want to keep the dimensions exactly the same as the frames. Any advice? Is there some sort of aspect ratio that h264 enforces?
The answer to the original question should not scale the video but instead fix the "height not divisible by 2" error. This can be achieved using this filter:
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2"
Full command:
ffmpeg -i frame_%05d.jpg -vcodec libx264 \
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -r 24 \
-y -an video.mp4
Basically, H.264 needs even dimensions, so this filter will:
Divide the original height and width by 2
Round it up to the nearest pixel
Multiply it by 2 again, thus making it an even number
Add black padding pixels up to this number
You can change the color of the padding by adding filter parameter :color=white. See the documentation of pad.
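As a worked example for the 520x369 frame from the error above: ceil(369/2)*2 = 185*2 = 370, so the filter emits a 520x370 frame with a single row of black padding (the width, already even, is unchanged).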
For width and height
Make width and height divisible by 2 with the crop filter:
ffmpeg -i input.mp4 -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
If you want to scale instead of crop, change crop to scale.
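For example, the scale variant of the same command would be (note that, unlike crop, this resamples the whole frame by up to one pixel per axis instead of trimming it):
ffmpeg -i input.mp4 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4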
For width or height
Using the scale filter. This will make width 1280. Height will be automatically calculated to preserve the aspect ratio, and the width will be divisible by 2:
ffmpeg -i input.mp4 -vf scale=1280:-2 output.mp4
Similar to above, but make height 720 and automatically calculate width:
ffmpeg -i input.mp4 -vf scale=-2:720 output.mp4
You can't use -2 for both width and height, but if you already specified one dimension then using -2 is a simple solution.
If you want to set some output width, keep the same ratio as the original as with
scale=720:-1
and still avoid this problem, then you can use
scale="720:trunc(ow/a/2)*2"
(Just for people searching for how to do that with scaling.)
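As a complete command, that might look like this (input.mp4 and output.mp4 are placeholders):
ffmpeg -i input.mp4 -vf "scale=720:trunc(ow/a/2)*2" output.mp4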
The problem with the scale solutions here is that they distort the source image/video which is almost never what you want.
Instead, I've found the best solution is to add a 1-pixel pad to the odd dimension. (By default, the padding is black and hard to notice.)
The problem with the other pad solutions is that they do not generalize over arbitrary dimensions because they always pad.
This solution only adds a 1-pixel pad to height and/or width if they are odd:
-vf pad="width=ceil(iw/2)*2:height=ceil(ih/2)*2"
This is ideal because it always does the right thing even when no padding is necessary.
It's likely due to the fact that H.264 video is usually converted from RGB to YUV color space as 4:2:0 prior to applying compression (although the format conversion itself is a lossy step, resulting in 50% space savings).
YUV 4:2:0 starts with an RGB (red, green, blue) picture and converts it into YUV (basically one intensity channel and two "hue" channels). The hue channels are then subsampled, creating one hue sample for every 2x2 square of pixels.
If you have an odd number of RGB pixels either horizontally or vertically, you will have incomplete data for the last pixel column or row in the subsampled hue space of the YUV frame.
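As a worked example: a 639x480 frame would need 639/2 = 319.5 chroma samples per row, which is impossible, so encoders require the dimensions to be rounded or padded to even numbers first.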
LordNeckbeard has the right answer, very fast:
-vf scale=1280:-2
For Android, don't forget to add
"-preset ultrafast" and/or "-threads n"
You may also use the bitand function instead of trunc:
bitand(x, 65534)
will do the same as trunc(x/2)*2, and it is more transparent in my opinion.
(65534 here is 0xFFFE, i.e. every bit set except the lowest, so the AND simply clears the least significant bit.)
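For example, 369 in binary is 101110001; bitand(369, 65534) clears the lowest bit, giving 368, exactly what trunc(369/2)*2 produces.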
My task was to automatically scale a lot of video files to half resolution.
scale=-2:ih/2 led to slightly blurred images
reason:
input videos had their display aspect ratio (DAR) set
scale scales the real frame dimensions
during preview, the new videos' sizes have to be corrected using DAR, which in the case of quite low-resolution video (360x288, DAR 16:9) may lead to blurring
solution:
-vf "scale='bitand(oh*dar, 65534)':'bitand(ih/2, 65534)', setsar=1"
explanation:
output_height = input_height / 2
output_width = output_height * original_display_aspect_ratio
both output_width and output_height are now rounded down to the nearest even number
setsar=1 means output_dimensions are now final, no aspect ratio correction should be applied
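As a full command line this might look like the following (a sketch; the file names are placeholders). For the 360x288, DAR 16:9 input above: ih/2 = 144 and 144 * 16/9 = 256, so the output is 256x144 with square pixels.
ffmpeg -i input.mp4 -vf "scale='bitand(oh*dar,65534)':'bitand(ih/2,65534)',setsar=1" output.mp4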
Someone might find this helpful.

Overlaying one video on another one, and making black pixels transparent

I'm trying to use FFmpeg to create a video with one video overlaid on top of another.
I have 2 MP4s. I need to make all BLACK pixels in the overlay video transparent so that I can see the main video underneath it.
I found two ways to overlay one video on another:
First, the following positions the overlay in the center, and therefore hides that portion of the main video beneath it:
ffmpeg -i 1.mp4 -vf "movie=2.mp4 [a]; [in][a] overlay=352:0 [b]" combined.mp4 -y
And this one places the overlay video on the left, but its opacity is set to 50%, so at least the one beneath it is visible:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS, format=yuva420p,colorchannelmixer=aa=0.5[bottom]; [top][bottom]overlay=shortest=0" -acodec libvo_aacenc -vcodec libx264 out.mp4 -y
My goal is simply to make all black pixels in the overlay (2.mp4) completely transparent. How can this be done?
The notional way to do this is to chroma-key the black out and then overlay, but as @MoDJ said, this likely won't produce satisfactory results. Neither will the method I suggest below, but it's worth a try.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex
"[1]split[m][a];
[a]geq='if(gt(lum(X,Y),16),255,0)',hue=s=0[al];
[m][al]alphamerge[ovr];
[0][ovr]overlay"
output.mp4
Above, I duplicate the overlay video stream, then use the geq filter to manipulate the luma values so that any pixel with luma greater than 16 (i.e. not pure black) has its luma set to white, else zero. Since I haven't provided expressions for the two color channels, geq falls back on the luma expression. We don't want that, so I use the hue filter to nullify those channels. Then I use the alphamerge filter to merge this as an alpha channel with the first copy of the overlay video. Then, the overlay. Like I said, this may not produce satisfactory results. You can tweak the value 16 in the geq filter to change the black threshold. Suggested range is 16-24 for limited-range (Y: 16-235) video files.
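As an alternative, ffmpeg also ships a colorkey filter that keys a color out directly and could stand in for the split/geq/alphamerge chain; a minimal sketch, where the similarity and blend values (0.1 each) are guesses you would need to tune for your footage:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[1]colorkey=black:0.1:0.1[ovr];[0][ovr]overlay" output.mp4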
You will not be able to get a "replace black pixels" approach to work properly. What you actually want is a foreground video with a real alpha channel that can be manipulated and tested before doing an overlay on a background. For an extended example that describes the problems, please take a look at my blog post on the subject. When using FFmpeg, an easy way to import alpha-channel video is to use QuickTime video with the Animation codec at 32 BPP.

using ffmpeg to create a waveform image from opus

I have been trying to use ffmpeg to create a waveform image from an opus file. So far I have found three different methods, but cannot seem to determine which one is the best.
The end result should be a sound wave only approx. 55px in height, which will become part of a CSS background-image.
Adapted from Generating a waveform using ffmpeg:
ffmpeg -i file.opus -filter_complex
"showwavespic,colorbalance=bs=0.5:gm=0.3:bh=-0.5,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=black#0.5"
file.png
which produces a colorized waveform with a thin horizontal center line.
Next, I found this one (and my favorite because of the simplicity):
ffmpeg -i test.opus -lavfi showwavespic=split_channels=1:s=1024x800 test.png
That one produces a taller image with the two channels drawn separately.
Finally, this one from the FFmpeg Wiki: Waveform, though it seems less efficient since it uses a second utility (gnuplot) rather than just ffmpeg:
ffmpeg -i file.opus -ac 1 -filter:a aresample=4000 -map 0:a -c:a pcm_s16le -f data - | \
gnuplot -e "set terminal png size 525,050; set output 'file.png'; unset key; unset tics; unset border; \
set lmargin 0; set rmargin 0; set tmargin 0; set bmargin 0; plot '
Option two is my favorite because of its simplicity, but I don't like the margins on the top and bottom of the waveforms.
Option three (using gnuplot) makes the best 'shaped' image for our needs, since the initial spike in sound makes the rest almost too small to use (lines tend to almost disappear) when the image is only 50 pixels high.
Any suggestions on how best to approach this? I understand very little about any of the options, except of course for the size. Note too that I have tens of thousands of these to process, so naturally I want to make a wise choice at the very beginning.
(Images: original and manipulated waveforms.)
You can use the compand filter to adjust the dynamic range. drawbox is then used to make the horizontal line.
ffmpeg -i test.opus -filter_complex \
"compand=gain=-6,showwavespic=s=525x50, \
drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
-vframes 1 output.png
It won't be quite as accurate a representation of your audio as the original waveform, but it may be an improvement visually, especially at such a small height.
Also see FFmpeg Wiki: Waveform.

ffmpeg: thumbnail of frame, preserve aspect ratio, apply background / padding / fill colour

I already have found out how to scale the thumbnail to stay within specified bounding dimensions while maintaining aspect ratio. For example, to get the frame shown at 6 seconds into the input.mp4 video file, and scale it to fit into 96x60 (16:10 aspect ratio):
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf scale="'if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)'" output.png
This is fine, it works.
Next, I would like to do the same, but if the video's aspect ratio is not exactly 16:10, then I would like to force the output image to have an aspect ratio of 16:10 by taking the above transformation, and filling or padding the space with white. That is, I want the output to be as if I took, say, a 96x48 image, and laid it over a 96x60 white background, resulting in white bars above and below the 96x48 image.
Ideally, I do not want to resort to using another tool or library, such as ImageMagick. It would be best if ffmpeg could do this on its own.
Here's what I went with. For the -vf argument:
-vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white"
This applies two filters in sequence, separated by a comma.
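Putting it together with the frame extraction from the question, the whole command might look like this (same 6-second mark and 96x60 box as before):
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)',pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white" output.png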
To fit a portrait target of 1124x2436, scale to fit and pad the remainder with green:
target_H = 2436
target_W = 1124
ffmpeg -i 1.mp4 -ss 1 -vframes 1 -vf "scale=min(iw*2436/ih\,1124):min(2436\,ih*1124/iw),pad=1124:2436:(1124-iw)/2:(2436-ih)/2:green" output.png

Maintaining aspect ratio with FFmpeg

I need to convert a bunch of video files using FFmpeg. I run a Bash file that converts all the files nicely, however there is a problem if a file converted is not in 16:9 format.
As I am fixing the size of the screen to -s 720x400, if the aspect ratio of the original is 4:3, FFmpeg creates a 16:9 output file, screwing up the aspect ratio.
Is there a setting that allows setting an aspect ratio as the main parameter, with size being adjusted (for example, by fixing an X or Y dimension only)?
-vf "scale=640:-1"
works great until you encounter an error like
[libx264 @ 0x2f08120] height not divisible by 2 (640x853)
So the most generic approach is to use filter expressions:
scale=640:trunc(ow/a/2)*2
It takes output width (ow), divides it by aspect ratio (a), divides by 2, truncates digits after decimal point and multiplies by 2. It guarantees that resulting height is divisible by 2.
Credits to ffmpeg trac
UPDATE
As comments pointed out simpler way would be to use -vf "scale=640:-2".
Credits to @BradWerth for the elegant solution.
For example, 1920x1080 (aspect ratio 16:9) => 640x480 (aspect 4:3):
ffmpeg -y -i import.media -aspect 16:9 -vf "scale=640x360,pad=640:480:0:60:black" output.media
At aspect ratio 16:9, a width of 640 pixels gives a height of 360 pixels. The final output size is 640x480, with a 60-pixel black pad at the top and bottom:
"-vf scale=640x360,pad=640:480:0:60:black"
I asked this a long time ago, but I've actually got a solution which was not known to me at the time -- in order to keep the aspect ratio, you should use the video filter scale, which is a very powerful filter.
You can simply use it like this:
-vf "scale=640:-1"
Which will fix the width and supply the height required to keep the aspect ratio. But you can also use many other options and even mathematical functions, check the documentation here - http://ffmpeg.org/ffmpeg.html#scale
Although most of these answers are great, I was looking for a command that could resize to a target dimension (width or height) while maintaining aspect ratio. I was able to accomplish this using ffmpeg's Expression Evaluation.
Here's the relevant video filter, with a target dimension of 512:
-vf "thumbnail,scale='if(gt(iw,ih),512,trunc(oh*a/2)*2)':'if(gt(iw,ih),trunc(ow/a/2)*2,512)'"
For the output width:
'if(gt(iw,ih),512,trunc(oh*a/2)*2)'
If width is greater than height, return the target, otherwise, return the proportional width.
For the output height:
'if(gt(iw,ih),trunc(ow/a/2)*2,512)'
If width is greater than height, return the proportional height, otherwise, return the target.
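As a complete command, extracting a single thumbnail bounded by 512 pixels might look like this (a sketch; thumb.png is a placeholder):
ffmpeg -i input.mp4 -vf "thumbnail,scale='if(gt(iw,ih),512,trunc(oh*a/2)*2)':'if(gt(iw,ih),trunc(ow/a/2)*2,512)'" -frames:v 1 thumb.png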
Use force_original_aspect_ratio, from the ffmpeg trac:
ffmpeg -i input.mp4 -vf scale=720:400:force_original_aspect_ratio=decrease output.mp4
If you are trying to fit a bounding box, then using force_original_aspect_ratio as per xmedeko's answer is a good starting point.
However, this does not work if your input video has a weird size and you are encoding to a format that requires the dimensions to be divisible by 2, resulting in an error.
In this case, you can use expression evaluation in the scale function, like that used in Charlie's answer.
Assuming an output bounding box of 720x400:
-vf "scale='trunc(min(1,min(720/iw,400/ih))*iw/2)*2':'trunc(min(1,min(720/iw,400/ih))*ih/2)*2'"
To break this down:
min(1,min(720/iw,400/ih)) finds the scaling factor to fit within the bounding box (from here), constraining it to a maximum of 1 to ensure it only downscales, and
trunc(<scaling factor>*iw/2)*2 and trunc(<scaling factor>*ih/2)*2 ensure that the dimensions are divisible by 2, by dividing by 2, truncating to an integer, then multiplying back by 2.
This eliminates the need for finding the dimensions of the input video prior to encoding.
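As a worked example with a 1280x720 input: min(1, min(720/1280, 400/720)) = min(0.5625, 0.5556) = 0.5556, so the width becomes trunc(0.5556*1280/2)*2 = 710 and the height trunc(0.5556*720/2)*2 = 400, i.e. a 710x400 output that fits the box with even dimensions.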
As ffmpeg requires the width and height to be divisible by 2,
and I suppose you want to specify only one of the dimensions, this would be the option:
ffmpeg -i input.mp4 -vf scale=1280:-2 output.mp4
You can use ffmpeg -i to get the dimensions of the original file and use those in your command for the encode. What platform are you running ffmpeg on?
If '-aspect x:y' is present and the output file format is ISO Media File Format (MP4), then ffmpeg adds a pasp atom (PixelAspectRatioBox) into the stsd box of the video track to indicate the expected aspect ratio to players. Players should scale video frames accordingly.
There is no need to scale the video before encoding or transcoding to fit the aspect ratio; that should be performed by the player.
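For instance, setting only the container-level flag without re-encoding might look like this (a sketch; the ffmpeg docs note that combined with -vcodec copy, -aspect affects only the aspect ratio stored at container level):
ffmpeg -i input.mp4 -aspect 16:9 -c:v copy -c:a copy output.mp4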
The above answers are great, but most of them assume specific video dimensions and don't operate on a generic aspect ratio.
You can pad the video to fit any aspect ratio, regardless of specific dimensions, using this:
-vf 'pad=x=(ow-iw)/2:y=(oh-ih)/2:aspect=16/9'
I use the ratio 16/9 in my example. The above is a shortcut for just doing something more manual like this:
pad='max(iw,(16/9)*ih)':'max(ih,iw/(16/9))':(ow-iw)/2:(oh-ih)/2
That might output odd-sized (not even) video dimensions, so you can make sure the output is even like this:
pad='trunc(max(iw,(16/9)*ih)/2)*2':'trunc(max(ih,iw/(16/9))/2)*2':(ow-iw)/2:(oh-ih)/2
But really all you need is pad=x=(ow-iw)/2:y=(oh-ih)/2:aspect=16/9
For all of the above examples, you'll get an error if the INPUT video has odd-sized dimensions. Even pad=iw:ih gives an error if the input is odd-sized. Normally you wouldn't ever have odd-sized input, but if you do, you can fix it by first applying this filter: pad='mod(iw,2)+iw':'mod(ih,2)+ih'
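Combining that input fix with the aspect pad, a sketch for arbitrary inputs (use the trunc variant above if the output must also be even):
ffmpeg -i input.mp4 -vf "pad='mod(iw,2)+iw':'mod(ih,2)+ih',pad=x=(ow-iw)/2:y=(oh-ih)/2:aspect=16/9" output.mp4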
