My videos are 1920x1080, recorded on a smartphone at high ISO (3200) to get a bright view (backlight scene mode). This produces a lot of noise. I have tried many video filters, but all of them produce blur, similar to what you get when you reduce the resolution by half and then scale it back up again.
Is there a good video noise filter that only removes noise without producing blur?
Because if it produces blur, I would prefer not to do any filtering at all.
I have tried these video filters:
nlmeans=s=30:r=3:p=1
vaguedenoiser=threshold=22:percent=100:nsteps=4
owdenoise=8:6:6
hqdn3d=100:0:50:0
bm3d=sigma=30:block=4:bstep=8:group=1:range=8:mstep=64:thmse=0:hdthr=5:estim=basic:planes=1
dctdnoiz=sigma=30:n=4
fftdnoiz=30:1:6:0.8
All of them produce blur, some even worse. I have to use strong settings to get the noise moderately removed. I ended up halving the resolution, applying removegrain, then scaling back up. This works much better for me than all of the methods above (the pp filter is used to reduce file size without reducing image detail):
scale=960:540,removegrain=3:0:0:0,pp=dr/fq|8,scale=1920:1080
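As a full command, that chain can be applied like this (a sketch; the input/output names and the x264 quality settings are placeholders):
ffmpeg -y -i input.mp4 -vf "scale=960:540,removegrain=3:0:0:0,pp=dr/fq|8,scale=1920:1080" -c:v libx264 -crf 18 output.mp4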
Code example:
FOR %%G IN (*.jpg) DO "ffmpeg.exe" -y -i "%%G" -vf "nlmeans=s=30:r=3:p=1" -qmin 1 -qmax 1 -q:v 1 "%%G.jpg"
[Image: a cropped portion of one frame]
[Image: the full frame]
To help with the blur, I always use unsharp to sharpen the image after nlmeans. Below are the parameters I find work best on old grainy movies, or on 4K transfers of old movies where the grain is unacceptable. It seems to work quite well; for 4K movies, it almost makes them as good as the 1080p Blu-ray versions.
nlmeans=s=1:p=7:pc=5:r=3:rc=3
unsharp=7:7:2.5
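Combined into one command, that looks like this (a sketch; filenames and encoder settings are placeholders):
ffmpeg -i input.mp4 -vf "nlmeans=s=1:p=7:pc=5:r=3:rc=3,unsharp=7:7:2.5" -c:v libx264 -crf 18 output.mp4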
My video is very noisy temporally. The video was taken under low light conditions at a high frame rate.
Currently I've tried:
ffplay -flags2 +export_mvs -i test.mp4 -vf "edgedetect=low=0.05:high=0.17,hqdn3d=4.0:3.0:6.0:4.5,codecview=mv=pf+bf+bb,lutyuv=y='if(lt(val,19),0,val)'"
The motion vectors are tracking noise: in the near-dark areas, the vectors vary greatly in magnitude and angle.
How do I decimate or filter the display motion vectors based on magnitude and/or location?
Remember that codecview displays the motion vectors from the encoded file, so if you denoise after decoding (such as ffplay [..] -vf hqdn3d), the motion vectors aren't actually affected by the denoising, because they come from an earlier stage of the pipeline.
To change the motion vectors in the compressed file, you need to re-encode it and denoise/degrain before encoding. I don't remember whether there's a way to generate motion vectors (post-decoding) within the filter chain.
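A sketch of that workflow, reusing the hqdn3d settings from the question (filenames and the encoder choice are placeholders): first denoise and re-encode, then display the motion vectors of the new file:
ffmpeg -i test.mp4 -vf hqdn3d=4.0:3.0:6.0:4.5 -c:v libx264 denoised.mp4
ffplay -flags2 +export_mvs denoised.mp4 -vf codecview=mv=pf+bf+bb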
I'm trying to remove the TOP AND BOTTOM black bars of a video.
[Image: sample frame from the video]
[Image: what I'm trying to achieve]
The video itself is 1280x720 (16:9), but the portion containing actual image information is 4:3, since it was captured from a VHS. I want to somehow stretch it until the top and bottom bars disappear without deforming the image. I don't care about the left and right bars.
I tried using crop and scale with no luck.
Using this command, the top and bottom black bars disappeared in VLC in normal view, but when going full screen the bars appeared again.
ffmpeg -i test.avi -filter:v "crop=1280:670" output_video.mp4
I thought it had something to do with the scaling of the video, but honestly every scale command I tried deformed the image a lot.
I hope someone can help me; I'm fairly new to FFmpeg but really enjoying it so far.
I got your image, resized it to 720p, and made a 30-second video to test.
In my example I've also cropped the edges (left/right) because, as @LordNeckbeard mentioned, when they hit the side of your screen they may prevent the top/bottom of the video from reaching the top/bottom of the screen, which will again look like black bars at the top/bottom, whether they are there or not.
This worked for me:
ffmpeg -y -hide_banner -i "test.avi" -filter:v "crop=iw-400:ih-40,scale=960:720" -pix_fmt yuv420p output_video.mp4
Quick explanation:
crop=iw-400:ih-40
Cropping 400 from the input width (iw) (2x200 left/right)
Cropping 40 from the input height (ih) (2x20 top/bottom)
You can cut a little more off if you want a 'crisper' edge.
scale=960:720
Scaling the video slightly to bring it back to your original 720p; the 960 keeps it at a nice 4:3 ratio.
This scaling is not needed; it's your preference.
Let me know if it worked for you.
I want to add a film grain effect using FFMPEG if possible.
Taking a nice clean computer-rendered scene and filtering it for a gritty black-and-white 16mm film look. As an example, something like Clerks: https://www.youtube.com/watch?v=Mlfn5n-E2WE
According to Simulating TV noise I should be able to use the following filter
-filter_complex "geq=random(1)*255:128:128;aevalsrc=-2+random(0)"
but when I add it to my ffmpeg command
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf format=yuv420p -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y OUTPUT.AVI
so the command is now
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf format=yuv420p -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y -filter_complex "geq=random(1)*255:128:128;aevalsrc=-2+random(0)" OUTPUT.AVI
I get the message
Filtergraph 'format=yuv420p' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
How can I change my ffmpeg command line so the grain filter works? Additionally, can I add a slight blur too? The old 16mm look is more blurred than grainy.
Thanks for any tips.
I just needed to make a film grain and wanted something "neater" than just randomizing every pixel. Here's what I came up with: FFmpeg film grain.
It starts with white noise. Then it uses the "deflate" and "dilation" filters to cause certain features to expand out to multiple pixels.
The effect is pretty subtle but you can see that there are a few larger "blobs" of white and black in amongst the noise. This means that the features of the noise aren't just straight-up single pixels any more. Then, that image gets halved in resolution, because it was being rendered at twice the resolution of the target video.
The highest-resolution detail is now softened, and the clumps of pixels are reduced in size to be 1-2 pixels in size. So, this is the noise plane.
Then, I take the source video and do some processing on it.
Desaturate:
Filter luminance so that the closer an input pixel is to luminance level 75 (arrived at experimentally), the brighter the output pixel is. If the input pixel is darker or brighter than that, the output pixel is uniformly darker. This creates "bands" of brightness where the luminance level is close to 75.
This is then scaled down, and this is where the level of noise is "tuned". This band selection means that we will be adding noise specifically in the areas of the frame where it will be most noticed. Not adding noise in other areas leaves more bits to encode the noise.
This scaled mask is then applied to the previously-computed noise. In this screenshot, I've removed the tuning so that the noise is easily visible:
The areas not selected by the band filter are greatly scaled down and are essentially black; the noise variation fades to nothing.
Here's what it looks like with a scaling factor of 0.32 -- pretty subtle:
I then invert this image, so that the parts with no noise are solid white, and the areas with noise pull down slightly from the white:
Finally, I pull another copy of the same source video, apply this computed image to it as an alpha channel and overlay it on black, so that the film grain dots, which are slightly less white, become slightly darker pixels.
The effect is pretty subtle, hard to see in a still like that when it's not moving, but if you tune the noise way up, you can get frames like this:
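The exact filter graph is in the linked gist; below is only a rough single-command sketch of the same idea. The 3840x2160 noise size (twice a 1080p target), the 30 fps rate, the band around luma 75, and the 0.32 tuning factor are all assumptions you would adapt:
ffmpeg -i input.mp4 -filter_complex "
[0:v]split[vid][src];
color=c=black:s=3840x2160:r=30,geq=lum='random(1)*255':cb=128:cr=128,deflate,dilation,scale=1920:1080,format=gray[noise];
[src]hue=s=0,lutyuv=y='max(0,255-4*abs(val-75))*0.32',format=gray[band];
[noise][band]blend=all_mode=multiply:shortest=1,negate[mask];
[vid][mask]alphamerge[fg];
color=c=black:s=1920x1080:r=30[bg];
[bg][fg]overlay=shortest=1,format=yuv420p[out]" -map "[out]" -c:v libx264 output.mp4
Here [noise] is the softened noise plane, [band] is the luminance band mask with the tuning factor applied, and the multiply/negate/alphamerge/overlay steps reproduce the invert-and-darken trick described above.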
The filter chain "geq=random(1)*255:128:128;aevalsrc=-2+random(0)" is for white noise (geq for the video, aevalsrc for the audio).
For "a gritty black and white 16mm film look", you want something like this instead:
-vf hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t
The format you specified is a filter, and all filters applied to an input should be specified in a single chain, so it should be:
-vf hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t,format=yuv420p
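Applied to the original command from the question, that gives:
ffmpeg.exe -framerate 30 -i XYZ%05d.PNG -vf "hue=s=0,boxblur=lr=1.2,noise=c0s=7:allf=t,format=yuv420p" -dst_range 1 -color_range 2 -c:v libxvid -vtag xvid -q:v 1 -y OUTPUT.AVI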
See filter docs at https://ffmpeg.org/ffmpeg-filters.html for descriptions and list of parameters you can tweak.
I need to convert a bunch of video files using FFmpeg. I run a Bash script that converts all the files nicely; however, there is a problem if a converted file is not in 16:9 format.
As I am fixing the frame size to -s 720x400, if the aspect ratio of the original is 4:3, FFmpeg creates a 16:9 output file, screwing up the aspect ratio.
Is there a setting that allows setting an aspect ratio as the main parameter, with size being adjusted (for example, by fixing an X or Y dimension only)?
-vf "scale=640:-1"
works great until you encounter this error:
[libx264 @ 0x2f08120] height not divisible by 2 (640x853)
So the most generic approach is to use filter expressions:
scale=640:trunc(ow/a/2)*2
It takes the output width (ow), divides it by the aspect ratio (a), divides by 2, truncates the digits after the decimal point, and multiplies by 2. This guarantees that the resulting height is divisible by 2.
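For example (filenames are placeholders):
ffmpeg -i input.mp4 -vf "scale=640:trunc(ow/a/2)*2" output.mp4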
Credits to ffmpeg trac
UPDATE
As the comments pointed out, a simpler way would be to use -vf "scale=640:-2".
Credits to @BradWerth for the elegant solution.
For example, going from 1920x1080 (aspect ratio 16:9) to 640x480 (aspect ratio 4:3):
With a 16:9 aspect ratio and a width of 640 pixels, the scaled height is 360 pixels. To reach a final output size of 640x480, pad with a 60-pixel black bar at the top and bottom:
ffmpeg -y -i import.media -vf "scale=640:360,pad=640:480:0:60:black" output.media
I asked this a long time ago, but I've actually got a solution which wasn't known to me at the time: in order to keep the aspect ratio, you should use the scale video filter, which is a very powerful filter.
You can simply use it like this:
-vf "scale=640:-1"
This fixes the width and supplies the height required to keep the aspect ratio. But you can also use many other options and even mathematical functions; check the documentation here: http://ffmpeg.org/ffmpeg.html#scale
Although most of these answers are great, I was looking for a command that could resize to a target dimension (width or height) while maintaining aspect ratio. I was able to accomplish this using ffmpeg's Expression Evaluation.
Here's the relevant video filter, with a target dimension of 512:
-vf "thumbnail,scale='if(gt(iw,ih),512,trunc(oh*a/2)*2)':'if(gt(iw,ih),trunc(ow/a/2)*2,512)'"
For the output width:
'if(gt(iw,ih),512,trunc(oh*a/2)*2)'
If width is greater than height, return the target, otherwise, return the proportional width.
For the output height:
'if(gt(iw,ih),trunc(ow/a/2)*2,512)'
If width is greater than height, return the proportional height, otherwise, return the target.
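For example, producing a single representative thumbnail whose longer side is 512 pixels (the filenames and the single-image PNG output are my assumptions):
ffmpeg -i input.mp4 -vf "thumbnail,scale='if(gt(iw,ih),512,trunc(oh*a/2)*2)':'if(gt(iw,ih),trunc(ow/a/2)*2,512)'" -frames:v 1 thumb.png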
Use force_original_aspect_ratio, from the ffmpeg trac:
ffmpeg -i input.mp4 -vf scale=720:400:force_original_aspect_ratio=decrease output.mp4
If you are trying to fit a bounding box, then using force_original_aspect_ratio as per xmedeko's answer is a good starting point.
However, this does not work if your input video has a weird size and you are encoding to a format that requires the dimensions to be divisible by 2, resulting in an error.
In this case, you can use expression evaluation in the scale function, like that used in Charlie's answer.
Assuming an output bounding box of 720x400:
-vf "scale='trunc(min(1,min(720/iw,400/ih))*iw/2)*2':'trunc(min(1,min(720/iw,400/ih))*ih/2)*2'"
To break this down:
min(1,min(720/iw,400/ih)) finds the scaling factor to fit within the bounding box (from here), constraining it to a maximum of 1 to ensure it only downscales, and
trunc(<scaling factor>*iw/2)*2 and trunc(<scaling factor>*ih/2)*2 ensure that the dimensions are divisible by 2, by dividing by 2, truncating to an integer, then multiplying back by 2.
This eliminates the need for finding the dimensions of the input video prior to encoding.
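As a complete command (filenames are placeholders):
ffmpeg -i input.mp4 -vf "scale='trunc(min(1,min(720/iw,400/ih))*iw/2)*2':'trunc(min(1,min(720/iw,400/ih))*ih/2)*2'" output.mp4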
As ffmpeg requires the width and height to be divisible by 2,
and I suppose you want to specify only one of the dimensions, this would be the option:
ffmpeg -i input.mp4 -vf scale=1280:-2 output.mp4
You can use ffmpeg -i to get the dimensions of the original file and use that in your commands for the encode. What platform are you using ffmpeg on?
If '-aspect x:y' is present and the output file format is the ISO Media File Format (mp4), then ffmpeg adds a pasp atom (PixelAspectRatioBox) to the stsd box in the video track to indicate the expected aspect ratio to players. Players should scale the video frames accordingly.
There is no need to scale the video before encoding or transcoding to fit the aspect ratio; that should be performed by the player.
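For example, this should retag the display aspect ratio without re-encoding (a sketch; filenames are placeholders, and stream copy is assumed to be acceptable for your workflow):
ffmpeg -i input.mp4 -aspect 16:9 -c copy output.mp4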
The above answers are great, but most of them assume specific video dimensions and don't operate on a generic aspect ratio.
You can pad the video to fit any aspect ratio, regardless of specific dimensions, using this:
-vf 'pad=x=(ow-iw)/2:y=(oh-ih)/2:aspect=16/9'
I use the ratio 16/9 in my example. The above is a shortcut for just doing something more manual like this:
pad='max(iw,(16/9)*ih)':'max(ih,iw/(16/9))':(ow-iw)/2:(oh-ih)/2
That might output odd-sized (not even) video dimensions, so you can make sure the output is even like this:
pad='trunc(max(iw,(16/9)*ih)/2)*2':'trunc(max(ih,iw/(16/9))/2)*2':(ow-iw)/2:(oh-ih)/2
But really all you need is pad=x=(ow-iw)/2:y=(oh-ih)/2:aspect=16/9
For all of the above examples you'll get an error if the INPUT video has odd-sized dimensions. Even pad=iw:ih gives an error if the input is odd-sized. Normally you wouldn't ever have odd-sized input, but if you do, you can fix it by first using this filter: pad='mod(iw,2)+iw':'mod(ih,2)+ih' (see the combined command below).
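Putting the last two pieces together, a sketch of a full command that first evens out odd input dimensions and then pads to 16:9 (filenames are placeholders):
ffmpeg -i input.mp4 -vf "pad='mod(iw,2)+iw':'mod(ih,2)+ih',pad=x=(ow-iw)/2:y=(oh-ih)/2:aspect=16/9" output.mp4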