I am trying to encode video with FFmpeg into H.265, but the result has a strange stretch. The input video is 1920x1080 and the output has the same resolution, but when I compare frames at the same timestamp, the encoded video appears stretched by a few pixels: it is visibly wider on both sides even though the resolution is identical. This stretching introduces ugly blurriness across the whole video. It looks as if FFmpeg crops a few pixels from the left and right (probably black pixels at the edges of the video) and stretches the content to fill the missing pixels while preserving the resolution.
I could not find any way to disable this behavior. I tried switching the encoder from x265 to x264 to see whether the encoder was the problem, but the result was still stretched.
I used these command-line parameters:
ffmpeg -i input.mkv -c:v libx265 -preset medium -crf 23 -t 30 output.mp4
-t 30 is there to test the visual quality on a small 30-second sample.
Does anyone have any idea why this happens and how to fix it? Most of the visual quality is lost because of this deformation and not because of recompression, which I verified by encoding with -crf 0 (essentially lossless); the result was still blurred.
EDIT: Link to full console output: https://pastebin.com/gpMD5Qec
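One way to check whether a sample-aspect-ratio (SAR) mismatch is behind the stretching (a guess, not confirmed by the console output above) is to compare the aspect-ratio metadata of both files with ffprobe:

```shell
# Print resolution and aspect-ratio metadata for the input and the output.
# If sample_aspect_ratio or display_aspect_ratio differs between the two,
# players will display one of them slightly stretched even though the
# stored resolution is identical.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio \
  -of default=noprint_wrappers=1 input.mkv

ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio \
  -of default=noprint_wrappers=1 output.mp4
```

If the values differ, normalizing the SAR with a `setsar` filter (as several answers here do) is the usual fix.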
Related
I have a generic process whose purpose is to take a video at any aspect ratio and generate a PNG from one of its frames. This frame should:
Be as large as possible, but no larger than 720x405 (16:9)
Maintain the aspect ratio of the video
Have no letterboxing
ffmpeg -y -nostats -ss 10 -i ./video.mp4 -max_muxing_queue_size 6400 -an -frames:v 1 -r 24/1 -vf "scale=w=720:h=405:force_original_aspect_ratio=decrease" -f image2 ./frame.png
When I give this command a video with a sample_aspect_ratio (SAR) of 4:3 and a display_aspect_ratio (DAR) of 16:9, I end up with a 540x405 (4:3) PNG where the image is horizontally compressed. Presumably force_original_aspect_ratio is looking at sample_aspect_ratio rather than display_aspect_ratio.
How do I ensure that the generated image maintains the same aspect ratio as the video (as displayed to the user)?
Insert a scale filter to convert frames to square pixels.
-vf "scale=iw*sar:ih,setsar=1,scale=w=720:h=405:force_original_aspect_ratio=decrease"
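Applied to the full command from the question, the filterchain slots in as follows (a sketch; all other options are kept from the question):

```shell
# scale=iw*sar:ih,setsar=1 converts anamorphic frames to square pixels,
# so the second scale fits the *displayed* aspect ratio into 720x405.
ffmpeg -y -nostats -ss 10 -i ./video.mp4 -max_muxing_queue_size 6400 -an \
  -frames:v 1 -r 24/1 \
  -vf "scale=iw*sar:ih,setsar=1,scale=w=720:h=405:force_original_aspect_ratio=decrease" \
  -f image2 ./frame.png
```

For the question's 4:3-SAR / 16:9-DAR example, this should yield a 720x405 PNG instead of the compressed 540x405 one.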
I found some posts explaining how to make any video horizontal by adding blurred borders with FFmpeg, but I want to convert videos to vertical 1080x1920. I don't want it to enlarge the video, nor crop it when a dimension exceeds 1080 or 1920. Instead, I want it to shrink the video until it fits entirely inside 1080x1920, and then add blurred borders to the empty areas.
This is the snippet I found, but when I tried reversing the numbers, it actually cropped the video.
ffmpeg -i input.mp4 -lavfi "[0:v]scale=1920*2:1080*2,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[0:v]scale=-1:1080[ov];[bg][ov]overlay=(W-w)/2:(H-h)/2,crop=w=1920:h=1080" output.mp4
Simple method:
ffmpeg -i input.mp4 -filter_complex "[0:v]boxblur=40,scale=1080:1920,setsar=1[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=y=(H-h)/2" -c:a copy output.mp4
"Simple" because it forces the background to 1080x1920 and ignores its aspect ratio. The background will look stretched, but it is blurred so heavily that nobody will notice or care.
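If the stretched background does bother you, a variant (my sketch, not part of the original answer) scales the background up until it covers the canvas and crops the overflow, so its aspect ratio is preserved before blurring:

```shell
# Background: enlarge until it covers 1080x1920, crop the excess, blur.
# Foreground: shrink to fit inside 1080x1920, center over the background.
ffmpeg -i input.mp4 -filter_complex \
"[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,setsar=1,boxblur=40[bg];\
[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];\
[bg][fg]overlay=(W-w)/2:(H-h)/2" -c:a copy output.mp4
```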
We have some videos with different sizes and aspect ratios, and we'd like to convert them all to a fixed 640x480 size (4:3, with letterbox padding if necessary).
Two sizes occur very often: 853 × 480 and 1280 × 720.
I did some research and made a few attempts before writing this question, but didn't get the expected result.
For example:
ffmpeg -i video.mp4 -vf "scale=640:480,pad=640:480:(ow-iw)/2:(oh-ih)/2,setdar=4/3" -c:a copy output.mp4
setdar=4/3 seems to be required because if I omit it, the result keeps the original aspect ratio.
Is there a solution that handles the different source sizes?
The generic filterchain for fitting a video in a WxH canvas is
"scale=iw*sar:ih,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:-1:-1"
The first scale filter makes sure the video is not kept anamorphic; if you know the video has square pixels, you can skip it. The second scale filter fits the video into a 640x480 canvas using the force_original_aspect_ratio option.
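Plugged into a complete command for the question's 640x480 target (a sketch; audio copied as in the question's attempt):

```shell
# 1) undo anamorphic storage, 2) fit inside 640x480, 3) pad to exactly
# 640x480; the -1:-1 pad offsets center the video on the canvas.
ffmpeg -i video.mp4 \
  -vf "scale=iw*sar:ih,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:-1:-1" \
  -c:a copy output.mp4
```

The explicit setdar=4/3 from the question becomes unnecessary here: padding to 640x480 with square pixels already yields a 4:3 display aspect ratio.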
First of all: forgive me for maybe asking a stupid or somewhat uninformed question. I'm totally new to post-processing video, stabilization, etc.
I'm shooting 1920x1080 compressed movie files with my Canon 5D2 and afterwards crop them to cinematic 1920x800 (2.4:1). (With Magic Lantern I use an overlay bitmap while shooting. And yes, I know that with Magic Lantern I can shoot RAW, but neither my cards nor my computer are fast enough to deal with that much data.)
Before doing any production, I convert the big .MOV files to smaller ones, simultaneously stabilizing the video a bit, and cropping it to 1920x800. I do this with ffmpeg roughly as follows:
ffmpeg -i f.MOV -vf vidstabdetect -f null -
ffmpeg -i f.MOV -c:v libx264 -profile:v high -crf 18 -vf "vidstabtransform, crop=in_w:in_h-280" -c:a aac -strict experimental f2.mp4
However, the fact that a great deal of vertical resolution is cropped away is not being exploited to handle the stabilizing transforms better. Often the image is stretched/skewed vertically when, given the crop, this is not really needed.
Is it possible to use the crop beneficially in the stabilizing transforms?
An example is the frame below. Here I would rather the image not be stretched vertically at all and just get away with a slight static zoom (crop), because the horizontal black border is the only problem in this frame.
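One thing worth trying (a sketch based on vidstabtransform's documented options, not a guaranteed fix) is to disable the stabilizer's automatic zoom so it fills empty areas with black instead of zooming/stretching, and then let the 1920x800 crop remove most of those borders:

```shell
# Pass 1: motion analysis, unchanged from the question.
ffmpeg -i f.MOV -vf vidstabdetect -f null -

# Pass 2: optzoom=0 disables automatic zooming, crop=black pads empty
# areas with black rather than scaling the frame; the following crop
# to 1920x800 then cuts away the top/bottom band where most of the
# stabilization borders appear.
ffmpeg -i f.MOV -c:v libx264 -profile:v high -crf 18 \
  -vf "vidstabtransform=optzoom=0:crop=black,crop=in_w:in_h-280" \
  -c:a aac f2.mp4
```

If some black still leaks into the 1920x800 frame, a small static zoom (e.g. zoom=2 for 2%) can be added to vidstabtransform.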
Better to use these commands:
# to get the video fps
fps="$(ffmpeg -i "$VarIN" 2>&1 | sed -n 's/.*, \(.*\) fp.*/\1/p')"
transcode -J stabilize -i vidIn.mp4
transcode -J transform -i vidIn.mp4 -f $fps -y raw -o vidOut.avi
I'm trying to create a video from an image, but the image gets stretched to fit the video size (HD). How do I keep the aspect ratio of my image but still get a 1280x720 video?
Here is the current result (the image is 3264 x 2448 px, the video 1280 x 720 px):
Here is my current command:
ffmpeg -loop 1 -i IMAGE_PATH -t 3 -s hd720 -c:v mpeg4 -pix_fmt yuv420p -preset ultrafast RESULT_PATH
Should I split the task into two operations (generate an image with black bars, then generate the video)? Could you help me modify the command to get the desired result?
It is better to use -aspect even though you specify -s in the command:
-aspect 3264/2448
Also try pad to get black bars around the output video without stretching it to fill the screen. This question covers that.
Hope this will help you!
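Alternatively, a sketch combining the scale and pad filters discussed in the other answers here (IMAGE_PATH and RESULT_PATH are the question's placeholders):

```shell
# Shrink the image to fit inside 1280x720, then pad with black bars;
# the -1:-1 offsets center it. This replaces -s hd720, which scales
# without preserving the aspect ratio.
ffmpeg -loop 1 -i IMAGE_PATH -t 3 \
  -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" \
  -c:v mpeg4 -pix_fmt yuv420p RESULT_PATH
```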