FFmpeg film grain - ffmpeg

I want to add a film grain effect using FFMPEG if possible.
The idea is to take a nice clean computer-rendered scene and filter it for a gritty black-and-white 16mm film look, something like Clerks for example: https://www.youtube.com/watch?v=Mlfn5n-E2WE
According to Simulating TV noise, I should be able to use the following filter:
-filter_complex "geq=random(1)*255:128:128;aevalsrc=-2+random(0)"
but when I add it to my ffmpeg command
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf format=yuv420p -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y OUTPUT.AVI
so the command is now
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf format=yuv420p -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y -filter_complex "geq=random(1)*255:128:128;aevalsrc=-2+random(0)" OUTPUT.AVI
I get the message
Filtergraph 'format=yuv420p' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
How can I change my ffmpeg command line so the grain filter works? Additionally, can I add a slight blur too? Old 16mm footage looks more blurred than grainy.
Thanks for any tips.

I just needed to make a film grain and wanted something "neater" than just randomizing every pixel. Here's what I came up with: FFmpeg film grain.
It starts with white noise:
Then it uses the "deflate" and "dilation" filters to cause certain features to expand out to multiple pixels:
The effect is pretty subtle but you can see that there are a few larger "blobs" of white and black in amongst the noise. This means that the features of the noise aren't just straight-up single pixels any more. Then, that image gets halved in resolution, because it was being rendered at twice the resolution of the target video.
The highest-resolution detail is now softened, and the clumps of pixels are reduced in size to be 1-2 pixels in size. So, this is the noise plane.
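A rough sketch of that noise-plane generation as a single ffmpeg command (the 3840x2160 size, i.e. twice a 1080p target, and the single PNG output frame are assumptions; the author's actual pipeline may differ):
ffmpeg -f lavfi -i "nullsrc=s=3840x2160,geq=random(1)*255:128:128,deflate,dilation,scale=iw/2:ih/2" -frames:v 1 noise.png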
Then, I take the source video and do some processing on it.
Desaturate:
Filter luminance so that the closer an input pixel was to luminance level 75 (arrived at experimentally), the brighter the pixel is. If the input pixel was darker or brighter, the output pixel is uniformly darker. This creates "bands" of brightness where the luminance level is close to 75.
This is then scaled down, and this is where the level of noise is "tuned". This band selection means that we will be adding noise specifically in the areas of the frame where it will be most noticed. Not adding noise in other areas leaves more bits to encode the noise.
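Something in the spirit of that band filter can be sketched with geq (not the author's exact graph; 75 is the centre level from the text, while the 4x falloff and the 0.32 gain are assumptions to tune):
ffmpeg -i source.mp4 -vf "hue=s=0,geq=lum='0.32*max(255-4*abs(lum(X,Y)-75),0)':cb=128:cr=128" band_mask.mp4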
This scaled mask is then applied to the previously-computed noise. In this screenshot, I've removed the tuning so that the noise is easily visible:
The areas not selected by the band filter are greatly scaled down and are essentially black; the noise variation fades to nothing.
Here's what it looks like with a scaling factor of 0.32 -- pretty subtle:
I then invert this image, so that the parts with no noise are solid white, and then areas with noise pull down slightly from the white:
Finally, I pull another copy of the same source video, apply this computed image to it as an alpha channel and overlay it on black, so that the film grain dots, which are slightly less white, become slightly darker pixels.
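A very loose sketch of those last steps, assuming the inverted mask has already been rendered to mask.png at the same size as the video (1920x1080 here) and that the stream labels are free choices:
ffmpeg -i source.mp4 -loop 1 -i mask.png -filter_complex "[1]format=gray[al];[0][al]alphamerge[fg];color=c=black:s=1920x1080[bg];[bg][fg]overlay=shortest=1,format=yuv420p" -c:a copy output.mp4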
The effect is pretty subtle, hard to see in a still like that when it's not moving, but if you tune the noise way up, you can get frames like this:

The filters "geq=random(1)*255:128:128;aevalsrc=-2+random(0)" is for white noise
For "a gritty black and white 16mm film look", you want something like instead,
-vf hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t
The format=yuv420p you specified is itself a filter, and all filters applied to one input should be specified in a single chain, so it should be:
-vf hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t,format=yuv420p
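Plugged into the command from the question, that might look like this (a sketch; the encoder options are carried over unchanged):
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf "hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t,format=yuv420p" -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y OUTPUT.AVI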
See filter docs at https://ffmpeg.org/ffmpeg-filters.html for descriptions and list of parameters you can tweak.

Related

FFMpeg to resize any video to fit 1080x1920 vertical, without cropping, instead by shrinking and adding blurred borders?

I found some posts explaining how to turn any video horizontal by adding blurred borders using FFMpeg, but I want to convert videos to vertical 1080x1920. I don't want it to enlarge the video, nor crop it if a dimension is larger than 1080 or 1920. Instead, I want it to shrink the video until it fits fully inside 1080x1920, and then add blurred borders to the empty areas.
This is the snippet I found, but when I tried reversing the numbers, it actually cropped the video.
ffmpeg -i input.mp4 -lavfi "[0:v]scale=1920*2:1080*2,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[0:v]scale=-1:1080[ov];[bg][ov]overlay=(W-w)/2:(H-h)/2,crop=w=1920:h=1080" output.mp4
Simple method:
ffmpeg -i input.mp4 -filter_complex "[0:v]boxblur=40,scale=1080x1920,setsar=1[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=y=(H-h)/2" -c:a copy output.mp4
"Simple" because it forces the background to 1080x1920 and ignores aspect ratio. So the background it will looked stretched, but it is blurred so much nobody will care or notice.

Trying to convert multiple images into a video [duplicate]

I am trying to encode an .mp4 video from a set of frames with FFMPEG, using the libx264 codec.
This is the command I am running:
/usr/local/bin/ffmpeg -r 24 -i frame_%05d.jpg -vcodec libx264 -y -an video.mp4
I sometimes get the following error:
[libx264 @ 0xa3b85a0] height not divisible by 2 (520x369)
After searching around a bit it seems that the issue has something to do with the scaling algorithm and can be fixed by adding a -vf argument.
However, in my case I don't want to do any scaling. Ideally, I want to keep the dimensions exactly the same as the frames. Any advice? Is there some sort of aspect ratio that h264 enforces?
The answer to the original question should not scale the video but instead fix the "height not divisible by 2" error. This can be achieved using this filter:
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2"
Full command:
ffmpeg -i frame_%05d.jpg -vcodec libx264 \
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -r 24 \
-y -an video.mp4
Basically, H.264 needs even dimensions, so this filter will:
Divide the original height and width by 2
Round it up to the nearest pixel
Multiply it by 2 again, thus making it an even number
Add black padding pixels up to this number
You can change the color of the padding by adding filter parameter :color=white. See the documentation of pad.
For width and height
Make width and height divisible by 2 with the crop filter:
ffmpeg -i input.mp4 -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
If you want to scale instead of crop, change crop to scale.
For width or height
Using the scale filter. This will make width 1280. Height will be automatically calculated to preserve the aspect ratio, and the width will be divisible by 2:
ffmpeg -i input.mp4 -vf scale=1280:-2 output.mp4
Similar to above, but make height 720 and automatically calculate width:
ffmpeg -i input.mp4 -vf scale=-2:720 output.mp4
You can't use -2 for both width and height, but if you already specified one dimension then using -2 is a simple solution.
If you want to set some output width and keep the same aspect ratio as the original, as with
scale=720:-1
but not run into this problem, then you can use
scale="720:trunc(ow/a/2)*2"
(Just for people searching for how to do that with scaling.)
The problem with the scale solutions here is that they distort the source image/video which is almost never what you want.
Instead, I've found the best solution is to add a 1-pixel pad to the odd dimension. (By default, the padding is black and hard to notice.)
The problem with the other pad solutions is that they do not generalize over arbitrary dimensions because they always pad.
This solution only adds a 1-pixel pad to height and/or width if they are odd:
-vf pad="width=ceil(iw/2)*2:height=ceil(ih/2)*2"
This is ideal because it always does the right thing even when no padding is necessary.
It's likely due to the fact that H264 video is usually converted from RGB to YUV space as 4:2:0 prior to applying compression (although the format conversion itself is a lossy compression algorithm resulting in 50% space savings).
YUV-420 starts with an RGB (Red Green Blue) picture and converts it into YUV (basically one intensity channel and two "hue" channels). The Hue channels are then subsampled by creating one hue sample for every 2X2 square of that hue.
If you have an odd number of RGB pixels either horizontally or vertically, you will have incomplete data for the last pixel column or row in the subsampled hue space of the YUV frame.
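For the 520x369 frame from the error above, for example, 369/2 = 184.5: the bottom row of luma pixels has no complete 2x2 block from which to derive its chroma sample, which is why the encoder rejects the odd height.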
LordNeckbeard has the right answer, and it's very fast:
-vf scale=1280:-2
For Android, don't forget to add
"-preset ultrafast" and/or "-threads n"
You may also use the bitand function instead of trunc:
bitand(x, 65534)
will do the same as trunc(x/2)*2 and it is more transparent in my opinion.
(Consider 65534 a magical number here ;) )
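For example, with an odd width of 1081, bitand(1081, 65534) clears the lowest bit and gives 1080, exactly what trunc(1081/2)*2 gives; even values pass through unchanged.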
My task was to automatically scale a lot of video files to half resolution.
scale=-2:ih/2 led to slightly blurred images
reason:
input videos had their display aspect ratio (DAR) set
scale scales the real frame dimensions
during preview, the new videos' sizes have to be corrected using DAR, which in the case of quite low-resolution video (360x288, DAR 16:9) may lead to blurring
solution:
-vf "scale='bitand(oh*dar, 65534)':'bitand(ih/2, 65534)', setsar=1"
explanation:
output_height = input_height / 2
output_width = output_height * original_display_aspect_ratio
both output_width and output_height are now rounded down to the nearest number divisible by 2
setsar=1 means output_dimensions are now final, no aspect ratio correction should be applied
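Worked through for the 360x288, DAR 16:9 example: output_height = 288/2 = 144, output_width = 144 * 16/9 = 256; both are already even, so the bitand calls leave them untouched and the result is 256x144 with setsar=1 marking that as the final display size.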
Someone might find this helpful.

perspective correction example

I have some videos taken of a display, with the camera not perfectly oriented, so that the result shows a strong trapezoidal effect.
I know that there is a perspective filter in ffmpeg https://ffmpeg.org/ffmpeg-filters.html#perspective, but I'm too dumb to understand how it works from the docs - and I cannot find a single example.
Can somebody show me how it works?
The following example extracts a trapezoidal perspective section from an input Matroska video to an output video.
An estimated coordinate had to be inserted to complete the trapezoidal pattern (out-of-frame coordinate x2=-60,y2=469).
The input video frame was 1280x720. Pixel interpolation was specified as linear; however, that is the default if not specified at all. Cubic interpolation bloats the output with no apparent improvement in video quality. The output video frame size will be the same as the input video's.
Video output was viewable but rough quality due to sampling error.
ffmpeg -hide_banner -i input.mkv -lavfi "perspective=x0=225:y0=0:x1=715:y1=385:x2=-60:y2=469:x3=615:y3=634:interpolation=linear" output.mkv
You can also make use of ffplay (or any player which lets you access ffmpeg filters, like mpv) to preview the effect, or if you want to keystone-correct a display surface.
For example, if you have your TV above your fireplace mantle and you're sitting on the floor looking up at it, this will un-distort the image to a large extent:
ffplay video.mkv -vf 'perspective=W*.1:0:W*.9:0:-W*.1:H:W*1.1:H'
The above expands the top by 20% and compresses the bottom by 20%, cropping the top and infilling the bottom with the edge pixels.
Also handy for playing back video of a building you're standing in front of with the camera pointed up around 30 degrees.

Overlaying one video on another one, and making black pixels transparent

I'm trying to use FFMPEG to create a video with one video overlaid on top of another.
I have 2 MP4s. I need to make all BLACK pixels in the overlay video transparent so that I can see the main video underneath it.
I found two ways to overlay one video on another:
First, the following positions the overlay in the center, and therefore, hides that portion of the main video beneath it:
ffmpeg -i 1.mp4 -vf "movie=2.mp4 [a]; [in][a] overlay=352:0 [b]" combined.mp4 -y
And this one places the overlay video on the left, but its opacity is set to 50% so at least the one beneath it is visible:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS, format=yuva420p,colorchannelmixer=aa=0.5[bottom]; [top][bottom]overlay=shortest=0" -acodec libvo_aacenc -vcodec libx264 out.mp4 -y
My goal is simply to make all black pixels in the overlay (2.mp4) completely transparent. How can this be done?
The notional way to do this is to chroma-key the black out and then overlay, but as MoDJ said, this likely won't produce satisfactory results. Neither will the method I suggest below, but it's worth a try.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex
"[1]split[m][a];
[a]geq='if(gt(lum(X,Y),16),255,0)',hue=s=0[al];
[m][al]alphamerge[ovr];
[0][ovr]overlay"
output.mp4
Above, I duplicate the overlay video stream, then use the geq filter to manipulate the luma values so that any pixel with luma greater than 16 (i.e. not pure black) has its luma set to white, else zero. Since I haven't provided expressions for the two color channels, geq falls back on the luma expression. We don't want that, so I use the hue filter to nullify those channels. Then I use the alphamerge filter to merge this as an alpha channel with the first copy of the overlay video. Then, the overlay. Like I said, this may not produce satisfactory results. You can tweak the value 16 in the geq filter to change the black threshold. Suggested range is 16-24 for limited-range (Y: 16-235) video files.
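For reference, the chroma-key route mentioned at the top of this answer could be sketched with the colorkey filter (the similarity and blend values 0.1/0.1 are guesses to tune, and dark-but-not-black areas will get eaten into):
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[1]colorkey=black:0.1:0.1[ovr];[0][ovr]overlay" -c:a copy output.mp4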
You will not be able to get a "replace black pixels" approach to work properly. What you actually want is a foreground video with a real alpha channel that can be manipulated and tested before doing an overlay on a background. For an extended example that describes the problems, please take a look at my blog post on the subject. When using FFMPEG, an easy way to import alpha channel video is to use QuickTime with the Animation codec at 32 BPP.

ffmpeg: thumbnail of frame, preserve aspect ratio, apply background / padding / fill colour

I already have found out how to scale the thumbnail to stay within specified bounding dimensions while maintaining aspect ratio. For example, to get the frame shown at 6 seconds into the input.mp4 video file, and scale it to fit into 96x60 (16:10 aspect ratio):
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf scale="'if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)'" output.png
This is fine, it works.
Next, I would like to do the same, but if the video's aspect ratio is not exactly 16:10, then I would like to force the output image to have an aspect ratio of 16:10 by taking the above transformation, and filling or padding the space with white. That is, I want the output to be as if I took, say, a 96x48 image, and laid it over a 96x60 white background, resulting in white bars above and below the 96x48 image.
Ideally, I do not want to resort to using another tool or library, such as ImageMagick. It would be best if ffmpeg could do this on its own.
Here's what I went with. For the -vf argument:
-vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white"
This applies two filters in sequence, separated by a comma.
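Dropped into the command from the question, the whole thing might look like this (a sketch):
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white" output.png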
Another example, with target width 1124 and target height 2436, padding with green:
ffmpeg -i 1.mp4 -ss 1 -vframes 1 -vf "scale=min(iw*2436/ih\,1124):min(2436\,ih*1124/iw),pad=1124:2436:(1124-iw)/2:(2436-ih)/2:green" output.png
