First of all: forgive me if this is a stupid or somewhat uninformed question; I'm totally new to post-processing video, stabilization, etc.
I'm shooting 1920x1080 compressed movie files with my Canon 5D2, and afterwards crop them to cinematic 1920x800 (2.4:1). (With Magic Lantern I use an overlay bitmap when shooting. And yes, I know that with Magic Lantern I can shoot RAW, but neither my cards nor my computer are fast enough to deal with that much data.)
Before doing any production, I convert the big .MOV files to smaller ones, simultaneously stabilizing the video a bit, and cropping it to 1920x800. I do this with ffmpeg roughly as follows:
ffmpeg -i f.MOV -vf vidstabdetect -f null -
ffmpeg -i f.MOV -c:v libx264 -profile:v high -crf 18 -vf "vidstabtransform, crop=in_w:in_h-280" -c:a aac -strict experimental f2.mp4
However, the fact that a great deal of the vertical resolution is cropped away is not being exploited to handle the stabilizing transforms better. The image is often stretched/skewed vertically even though, given the crop, this is not really needed.
Is it possible in any way to use the crop beneficially in the stabilizing transforms?
An example is the frame below. Here I would rather the image not be stretched vertically at all and instead get away with a slight static zoom (crop), because the horizontal black border is the only problem in this frame.
It is better to use these commands:
# get the video fps
fps="$(ffmpeg -i "$VarIN" 2>&1 | sed -n 's/.*, \(.*\) fp.*/\1/p')"
# pass 1: detect the camera motion
transcode -J stabilize -i vidIn.mp4
# pass 2: apply the smoothed transforms
transcode -J transform -i vidIn.mp4 -f "$fps" -y raw -o vidOut.avi
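If you would rather stay within ffmpeg, vidstabtransform itself has options that may let the vertical crop absorb the correction instead of a stretch or zoom: disable the optimal zoom, add a small static zoom, leave the remaining border black, and let the crop to 1920x800 swallow it. A rough, untested sketch (the zoom percentage is a guess you would have to tune per clip):

ffmpeg -i f.MOV -vf vidstabdetect=result=transforms.trf -f null -
# optzoom=0 disables the automatic zoom, zoom=2 applies a small static 2% zoom,
# crop=black fills what remains with black, and the final crop removes most of it
ffmpeg -i f.MOV -c:v libx264 -profile:v high -crf 18 \
  -vf "vidstabtransform=input=transforms.trf:optzoom=0:zoom=2:crop=black, crop=in_w:in_h-280" \
  -c:a aac -strict experimental f2.mp4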
I've been trying to get this to work on and off for the past month and am very frustrated, so I'm hoping someone on here can help me. What I'm trying to do is very simple, but I struggle with ffmpeg. I basically just want to take a folder of pictures, each of which has a different size and some of which are in horizontal or vertical orientation, and put them into a video slideshow where each shows for maybe 5-10 seconds. No matter what I try, it always winds up stretching the pictures out of their aspect ratio and they just look funny. I noticed the Windows 10 Photos program does this perfectly, but I want a programmatic approach and I don't think it has a command-line feature. Can someone help me tweak this ffmpeg command line to work the way I need it to? Desired video output would be 1920x1080 in this case. Thanks!
ffmpeg -r 1/5 -start_number 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" "F:\Destination_Output\Test_Output.mp4"
Use a combination of scale and pad to generate proportionally resized images centered onto a 1080p frame.
Use
ffmpeg -framerate 1/5 -start_number 0 -reinit_filter 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:eval=frame" -r 25 -c:v libx264 "F:\Destination_Output\Test_Output.mp4"
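If you want a different duration per image or a different padding colour, the same command takes those as parameters. A sketch, assuming -framerate 1/10 to hold each picture for 10 seconds and an explicit white pad instead of the default black:

ffmpeg -framerate 1/10 -start_number 0 -reinit_filter 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:color=white:eval=frame" -r 25 -c:v libx264 "F:\Destination_Output\Test_Output.mp4"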
I am trying to create a GIF from a bunch of JPEG images of different sizes while preserving the aspect ratio of each one. What I am trying to achieve: say we have a 640x480 rectangle; each image should be centered in it and expanded to fill it as much as possible. The resulting GIF should be as small as possible in dimensions, and all the blank space should be a solid color.
ffmpeg -f image2 -i img_%d.jpg -vf scale=640x480:force_original_aspect_ratio=decrease output.gif
force_original_aspect_ratio=increase didn't help either.
Actually, I tried a lot of different options, but the result is pretty much the same: the options are applied to the first image of the sequence only, and all the other images are resized to the dimensions of the first one without preserving their own aspect ratio.
I just want to know whether that is doable with ffmpeg, or whether I should look into custom image manipulation before assembling the GIF.
Use
ffmpeg -i img_%d.jpg -vf "scale='if(gt(a,640/480),640,-1)':'if(gt(a,640/480),-1,480)':eval=frame,pad=640:480:(ow-iw)/2:(oh-ih)/2" output.gif
You may want to use the palettegen and paletteuse filters for optimizing the GIF creation.
Pass 1 (generate the palette):
ffmpeg -i img_%d.jpg -vf "scale='if(gt(a,640/480),640,-1)':'if(gt(a,640/480),-1,480)':eval=frame,pad=640:480:(ow-iw)/2:(oh-ih)/2,palettegen" palette.png
Pass 2 (apply the palette):
ffmpeg -i img_%d.jpg -i palette.png -filter_complex "[0]scale='if(gt(a,640/480),640,-1)':'if(gt(a,640/480),-1,480)':eval=frame,pad=640:480:(ow-iw)/2:(oh-ih)/2[seq];[seq][1]paletteuse" output.gif
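If you'd rather not keep the intermediate palette.png around, the two passes can also be collapsed into a single invocation with split; a sketch along the same lines, untested with your images:

ffmpeg -i img_%d.jpg -filter_complex "[0]scale='if(gt(a,640/480),640,-1)':'if(gt(a,640/480),-1,480)':eval=frame,pad=640:480:(ow-iw)/2:(oh-ih)/2,split[a][b];[a]palettegen[p];[b][p]paletteuse" output.gif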
I have been trying to use ffmpeg to create a waveform image from an Opus file. So far I have found three different methods, but I cannot seem to determine which one is best.
The end result should be a sound wave that is only approx. 55 px in height. The image will become part of a CSS background-image.
Adapted from Generating a waveform using ffmpeg:
ffmpeg -i file.opus -filter_complex \
"showwavespic,colorbalance=bs=0.5:gm=0.3:bh=-0.5,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=black#0.5" \
file.png
which produces this image:
Next, I found this one (and my favorite because of the simplicity):
ffmpeg -i test.opus -lavfi showwavespic=split_channels=1:s=1024x800 test.png
And here is what that one looks like:
Finally, this one from FFmpeg Wiki: Waveform, though it seems less efficient since it uses a second utility (gnuplot) rather than just ffmpeg:
ffmpeg -i file.opus -ac 1 -filter:a aresample=4000 -map 0:a -c:a pcm_s16le -f data - | \
gnuplot -e "set terminal png size 525,050; set output 'file.png'; unset key; unset tics; unset border; set lmargin 0; set rmargin 0; set tmargin 0; set bmargin 0; plot '
Option two is my favorite, but I don't like the margins on the top and bottom of the waveforms.
Option three (using gnuplot) makes the best 'shaped' image for our needs, since the initial spike in the sound makes the rest almost too small to use (the lines tend to almost disappear) when the image is only 50 pixels high.
Any suggestions on how best to approach this? I really understand very little about any of the options I see, except of course for the size. Note too that I have tens of thousands of these to process, so naturally I want to make a wise choice at the very beginning.
Original and manipulated waveforms.
You can use the compand filter to adjust the dynamic range. drawbox is then used to make the horizontal line.
ffmpeg -i test.opus -filter_complex \
"compand=gain=-6,showwavespic=s=525x50, \
drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
-vframes 1 output.png
It won't be quite as accurate a representation of your audio as the original waveform, but it may be an improvement visually; especially on such a wide scale.
Also see FFmpeg Wiki: Waveform.
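Since you mention tens of thousands of files, a small shell loop around that command may help; a sketch, assuming the .opus files sit in the current directory and each PNG should take its name from the source file:

for f in *.opus; do
  ffmpeg -i "$f" -filter_complex \
    "compand=gain=-6,showwavespic=s=525x50,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
    -vframes 1 "${f%.opus}.png"
done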
I'm trying to play some videos (mostly WebM) on some very low-performance hardware. The hardware can barely handle Full HD output.
Since the devices in question are online via a 3G modem only, the video size carries some weight as well. Right now, however, playback performance is definitely the more important part.
So, here's the question: are there any options for avconv that improve playback performance? Or should I simply use another codec instead?
Right now, the command used is something like the following:
avconv \
-i $input_file \
-y \
-vf scale=$scale \
-an \
$output_file
You would want to use ffmpeg instead of avconv (ffmpeg is more active and reliable - my opinion):
Compile ffmpeg with libvpx support (WebM): guide
I would suggest you use CBR (constant bitrate) encoding.
Set --profile to 3 (see the guide), and read about some more options if you want.
Generally you would want to lower the frame resolution and frames per second as much as is acceptable for your project requirements, and pick an appropriate bitrate for it; a rough command sketch follows below.
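A rough sketch of those suggestions with ffmpeg (the bitrate, resolution and frame rate below are placeholders to tune for your hardware and 3G budget; setting minrate and maxrate equal to the bitrate is the usual way to approximate CBR with libvpx):

ffmpeg -i input.webm \
  -c:v libvpx -b:v 500k -minrate 500k -maxrate 500k \
  -vf scale=1280:-2 -r 24 -an \
  output.webm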
There is another approach you can try: shrink the video to half its height while leaving the width alone:
$ avconv -i 01.webm -vf 'scale=w=iw:h=ih/2' -c:v libtheora -c:a copy 01.ogv
which, for me, produced a file 84% the size of the output of
$ avconv -i 01.webm -c:v libtheora -c:a copy 01.ogv
This is much better than scaling in width, because it does not damage text that may appear on the screen quite as much (the human brain can, for whatever reason, cope with vertical distortion more easily than with horizontal distortion).
You can also apply the denoise filter hqdn3d, which will make the file size smaller without noticeably hurting the quality of the video.
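For example, chained in front of the scale (a sketch; the hqdn3d defaults are usually a reasonable starting point):

$ avconv -i 01.webm -vf 'hqdn3d,scale=w=iw:h=ih/2' -c:v libtheora -c:a copy 01.ogv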
The load on the processor of the playing machine can be difficult to predict from one video to the next, but there is a difference between codecs. I've not compared them much, so I can't offer real assistance there.
I already have found out how to scale the thumbnail to stay within specified bounding dimensions while maintaining aspect ratio. For example, to get the frame shown at 6 seconds into the input.mp4 video file, and scale it to fit into 96x60 (16:10 aspect ratio):
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf scale="'if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)'" output.png
This is fine, it works.
Next, I would like to do the same, but if the video's aspect ratio is not exactly 16:10, then I would like to force the output image to have an aspect ratio of 16:10 by taking the above transformation, and filling or padding the space with white. That is, I want the output to be as if I took, say, a 96x48 image, and laid it over a 96x60 white background, resulting in white bars above and below the 96x48 image.
Ideally, I do not want to resort to using another tool or library, such as ImageMagick. It would be best if ffmpeg could do this on its own.
Here's what I went with. For the -vf argument:
-vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white"
This applies two filters in sequence, separated by a comma.
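Putting it together with the original command, the full invocation looks like this (same 6-second frame grab, now padded to 96x60 with white):

ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white" output.png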
The same fit-and-pad approach works for other target sizes; for example, grabbing a frame and fitting it into a 1124x2436 portrait canvas with green padding (target_W = 1124, target_H = 2436):
ffmpeg -i 1.mp4 -ss 1 -vframes 1 -vf "scale=min(iw*2436/ih\,1124):min(2436\,ih*1124/iw),pad=1124:2436:(1124-iw)/2:(2436-ih)/2:green" output.png