I've been trying to get this to work on and off for the past month and am very frustrated, so I'm hoping someone on here can help me. What I'm trying to do is very simple, but I struggle with ffmpeg. I basically just want to take a folder of pictures, each of which has a different size and may be in horizontal or vertical orientation, and put them into a video slideshow where each one shows for maybe 5-10 seconds. No matter what I try, it always winds up stretching the pictures out of their aspect ratio, and they just look funny. I noticed the Windows 10 Photos app does this perfectly, but I want a programmatic approach and I don't think it has a command-line interface. Can someone help me tweak this ffmpeg command line to work the way I need it to? The desired video output would be 1920x1080 in this case. Thanks!
ffmpeg -r 1/5 -start_number 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" "F:\Destination_Output\Test_Output.mp4"
Use a combination of scale and pad to generate proportionally resized images centered onto a 1080p frame.
Use:
ffmpeg -framerate 1/5 -start_number 0 -reinit_filter 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:eval=frame" -r 25 -c:v libx264 "F:\Destination_Output\Test_Output.mp4"
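The 5-second display time comes from -framerate 1/5 (one input frame every 5 seconds); for 10 seconds per image, change it to 1/10. The scale step with force_original_aspect_ratio=decrease shrinks each picture to fit inside 1920x1080 without distortion, pad=1920:1080:-1:-1 centers it on a black 1080p canvas, and -reinit_filter 0 together with eval=frame lets one filter graph handle the differently sized inputs. For example, the 10-second variant (same paths as above):
ffmpeg -framerate 1/10 -start_number 0 -reinit_filter 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:eval=frame" -r 25 -c:v libx264 "F:\Destination_Output\Test_Output.mp4"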
For example, this command line:
ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov -vf "scale=w=416:h=234:force_original_aspect_ratio=decrease" -an -f rawvideo -pix_fmt yuv420p -r 15 -
works fine, except that if the source video is 360x240, the output will be 351x234, which kinda sucks, as yuv420p video with odd dimensions is difficult to handle due to the way colour data is stored.
Is there a way I could force ffmpeg to give the nearest possible even values?
If you're resizing, use an absolute value for just one of the dimensions. For example:
Change:
-vf "scale=w=416:h=234:force_original_aspect_ratio=decrease"
To:
-vf "scale=w=416:h=-2"
This should scale to a width of 416 and compute the height so the aspect ratio stays the same.
-2 = scale using mod 2
-4 = scale using mod 4, etc.
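As a complete command, mirroring the one from the question (untested sketch; the input name is hypothetical):
ffmpeg -i input.mov -vf "scale=w=416:h=-2" -an -f rawvideo -pix_fmt yuv420p -r 15 -
Note that this keeps the width fixed at 416 and lets the height float, so the output no longer fits inside a 416x234 box; it only guarantees the computed height comes out even.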
You can achieve that by using force_divisible_by=2 in your filter, like this:
-vf scale=w=852:h=480:force_original_aspect_ratio=decrease:force_divisible_by=2
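Applied to the command from the question, it would look like this (untested; force_divisible_by needs a reasonably recent FFmpeg build):
ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov -vf "scale=w=416:h=234:force_original_aspect_ratio=decrease:force_divisible_by=2" -an -f rawvideo -pix_fmt yuv420p -r 15 -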
I know the question is old, but I hope this helps someone.
I have been trying to use ffmpeg to create a waveform image from an Opus file. So far I have found three different methods, but I cannot seem to determine which one is best.
The end result is hopefully to have a sound-wave that is only approx. 55px in height. The image will become part of a css background-image.
Adapted from Generating a waveform using ffmpeg:
ffmpeg -i file.opus -filter_complex \
"showwavespic,colorbalance=bs=0.5:gm=0.3:bh=-0.5,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=black@0.5" \
file.png
which produces this image:
Next, I found this one (and my favorite because of the simplicity):
ffmpeg -i test.opus -lavfi showwavespic=split_channels=1:s=1024x800 test.png
And here is what that one looks like:
Finally, this one from FFmpeg Wiki: Waveform, but it seems less efficient since it uses a second utility (gnuplot) rather than just ffmpeg:
ffmpeg -i file.opus -ac 1 -filter:a aresample=4000 -map 0:a -c:a pcm_s16le -f data - | \
gnuplot -e "set terminal png size 525,050; set output 'file.png'; unset key; unset tics; unset border; set lmargin 0; set rmargin 0; set tmargin 0; set bmargin 0; plot '
Option two is my favorite, but I don't like the margins at the top and bottom of the waveforms.
Option three (using gnuplot) makes the best-'shaped' image for our needs, since the initial spike in the sound otherwise makes the rest almost too small to use (the lines tend to almost disappear) when the image is only 50 pixels high.
Any suggestions on how I might best approach this? I understand very little about any of these options, except of course for the size. Note too that I have tens of thousands of files to process, so naturally I want to make a wise choice at the very beginning.
Original and manipulated waveforms.
You can use the compand filter to adjust the dynamic range. drawbox is then used to make the horizontal line.
ffmpeg -i test.opus -filter_complex \
"compand=gain=-6,showwavespic=s=525x50, \
drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
-vframes 1 output.png
It won't be quite as accurate a representation of your audio as the original waveform, but it may be an improvement visually, especially at such a wide scale.
Also see FFmpeg Wiki: Waveform.
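If the core problem is that one loud spike flattens everything else, it may also be worth trying showwavespic's own scale option, which compresses the peaks without an extra filter; a minimal sketch (untested):
ffmpeg -i test.opus -lavfi "showwavespic=s=525x50:scale=sqrt" -vframes 1 test.png
scale=sqrt (or scale=log) boosts the quiet parts relative to the spike, which should help keep the lines visible at only 50 pixels high.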
First of all: forgive me for maybe asking a stupid or somewhat uninformed question. I'm totally new to post processing video, stabilization, etc..
I'm shooting 1920x1080 compressed movie files with my Canon 5D2 and afterwards crop them to cinematic 1920x800 (2.4:1). (With Magic Lantern I use an overlay bitmap when shooting. And yes, I know that with Magic Lantern I can shoot RAW, but my cards as well as my computer are not fast enough to deal with that much data.)
Before doing any production, I convert the big .MOV files to smaller ones, simultaneously stabilizing the video a bit, and cropping it to 1920x800. I do this with ffmpeg roughly as follows:
ffmpeg -i f.MOV -vf vidstabdetect -f null -
ffmpeg -i f.MOV -c:v libx264 -profile:v high -crf 18 -vf "vidstabtransform, crop=in_w:in_h-280" -c:a aac -strict experimental f2.mp4
However, the stabilizing transforms don't take advantage of the fact that a great deal of the vertical resolution is cropped away anyway. Often the image is stretched/skewed vertically when, given the crop used, this isn't really needed.
Is it possible in any way to use the crop beneficially in the stabilizing transforms?
An example is the frame below. Here, I would rather have that the image is not stretched vertically at all, and just get away with a slight static zoom (crop), because the horizontal black border is the only problem in this frame.
It's better to use these commands:
# to get the video fps
fps="$(ffmpeg -i $VarIN 2>&1 | sed -n 's/.*, \(.*\) fp.*/\1/p')"
transcode -J stabilize -i vidIn.mp4
transcode -J transform -i vidIn.mp4 -f $fps -y raw -o vidOut.avi
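If you'd rather stay with ffmpeg's vidstab filters, vidstabtransform has options that should help here: optzoom=0 disables the automatic zoom entirely, and crop=black fills any uncovered edges with black, which your 280-pixel vertical crop can then swallow. A hedged variation on the original two-pass command (untested):
ffmpeg -i f.MOV -vf vidstabdetect=result=transforms.trf -f null -
ffmpeg -i f.MOV -c:v libx264 -profile:v high -crf 18 -vf "vidstabtransform=input=transforms.trf:optzoom=0:crop=black, crop=in_w:in_h-280" -c:a aac -strict experimental f2.mp4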
I'm trying to create a video from an image, but the image gets stretched to fit the video size (HD). How can I keep the aspect ratio of my image but still get a 1280x720 video?
Here is the current result (the image is 3264x2448 px, the video 1280x720 px):
Here is my current command:
ffmpeg -loop 1 -i IMAGE_PATH -t 3 -s hd720 -c:v mpeg4 -pix_fmt yuv420p -preset ultrafast RESULT_PATH
Should I split the task into two operations (generate an image with black stripes, then generate the video)? Could you help me modify the command to get the desired result?
It is better to use -aspect even though you specify -s in the command.
-aspect 3264/2448
Also try pad to get black bars around the output video without stretching it to fit the screen size. This question is about that.
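Concretely, the same scale-plus-pad approach from the first answer above should work here; a sketch (untested), keeping the rest of your command:
ffmpeg -loop 1 -i IMAGE_PATH -t 3 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1" -c:v mpeg4 -pix_fmt yuv420p -preset ultrafast RESULT_PATH
With your 3264x2448 (4:3) image this should give a 960x720 picture centered between black side bars, with the filter chain replacing -s hd720.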
Hope this will help you!
Like a few others, I'm trying to watermark a video with an image (see FFmpeg - How to scale a video then apply a watermark?). Oh, and I'm transcoding the format too.
The difference is I want my image to be the exact same size as the video. I need to do this as a filter chain because each video is a different size and I'm using a single watermark image. Furthermore, the server it has to run on has an older version of ffmpeg so it doesn't recognise the -filter_complex option.
So far, I've gotten as far as:
ffmpeg -y -i input_video.mov -vcodec libx264 -vf "movie=watermark.png [watermark]; [watermark] scale=main_w:main_h [scaled_watermark]; [in][scaled_watermark] overlay=0:0 [out]" output_video.m4v
The problem is that the main_w and main_h constants only seem to be recognised in the overlay filter graph and not in the scale filter graph.
So how do I find out the width and height of input_video.mov so that I can scale the watermark correctly?
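One workaround, assuming ffprobe is available on that server and is recent enough to support -show_entries, is to read the dimensions in the shell first and substitute them into the scale filter; a rough sketch using the file names from the question:
W=$(ffprobe -v error -select_streams v:0 -show_entries stream=width -of csv=p=0 input_video.mov)
H=$(ffprobe -v error -select_streams v:0 -show_entries stream=height -of csv=p=0 input_video.mov)
ffmpeg -y -i input_video.mov -vcodec libx264 -vf "movie=watermark.png [watermark]; [watermark] scale=${W}:${H} [scaled_watermark]; [in][scaled_watermark] overlay=0:0 [out]" output_video.m4v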