Exporting all the frames in a single Nx1 tile can be done like this:
ffmpeg -i input.mp4 -vf "fps=5,tile=100x1" output.jpg
The problem is that I don't know up front how many frames there are going to be, so I specify a much higher number than expected (based on the movie length and fps). Ideally I would like something like this:
ffmpeg -i input.mp4 -vf "fps=5,tile=Nx1" output.jpg
Where Nx1 would tell ffmpeg to create an image as wide as the number of exported frames.
I know there is a showinfo filter that might come in handy, but I was never able to integrate it so that its output is used as input for tile.
Also, I tried pre-calculating the number of frames based on the movie duration and fps, but this was never very accurate. Even for a movie of exactly 3.000 s at 3 fps it produced 8 frames.
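One possible workaround (a sketch, assuming a POSIX shell and an ffprobe build that accepts lavfi input) is to let ffprobe count the frames the fps filter will actually emit, then substitute that count into tile:
N=$(ffprobe -v error -f lavfi -i "movie=input.mp4,fps=5" -select_streams v:0 -count_frames -show_entries stream=nb_read_frames -of csv=p=0)
ffmpeg -i input.mp4 -vf "fps=5,tile=${N}x1" output.jpg
This decodes the video twice, but the count always matches what tile will receive.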
I have 2 videos with the same video and audio quality but different lengths.
Let's say the video resolution is 1920x1080 pixels.
I want to merge both videos side by side, using the longer of the two lengths.
What I found so far, but it is not what I need :(
On the internet I found many examples, but they give an output of 3840x1080 pixels.
What I want:
Outcome: video with 1920x1080 pixels
use the left half of video1, i.e. pixels 1 to 960
use the right half of video2, i.e. pixels 961 to 1920
audio is merged, i.e. I can hear both audio tracks simultaneously, as available
What I want (optional):
Between the 2 videos there is a visible split, like a | divider.
Is there a single ffmpeg command line I can use?
Many Thanks,
BM
Using crop and stack:
ffmpeg -i test10.mkv -i test06.mkv -filter_complex "
color=red:2x1080:24000/1001:1[c];
[0]crop=iw/2-1:ih:0:0[v0];
[1]crop=iw/2+1:ih:iw-ow:0[v1];
[v0][c][v1]xstack=inputs=3:grid=3x1;
[0][1]amix
" output.mkv
or
[v0][c][v1]hstack=inputs=3;
If the inputs have different frame rates etc., you can try using overlay instead:
ffmpeg -i test10.mkv -i test06.mkv -filter_complex "
[0]crop=iw/2-1:ih:0:0[v0];
[1]crop=iw/2+1:ih:iw-ow:0[v1];
color=red:1920x1080[bg];
[bg][v0]overlay=shortest=1[b0];
[b0][v1]overlay=x=W-w;
[0][1]amix
" output.mkv
I've been trying to get this to work on and off for the past month and am very frustrated, so I'm hoping someone on here can help me. What I'm trying to do is very simple, but I struggle with ffmpeg. I basically just want to take a folder of pictures, each of which has a different size and may be in horizontal or vertical orientation, and put them into a video slideshow where each shows for maybe 5-10 seconds. No matter what I try, it always ends up stretching the pictures out of their aspect ratio so they just look funny. I noticed the Windows 10 Photos app does this perfectly, but I want a programmatic approach and I don't think it has a command-line feature. Can someone help me tweak this ffmpeg command line to work the way I need it to? The desired video output would be 1920x1080 in this case. Thanks!
ffmpeg -r 1/5 -start_number 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" "F:\Destination_Output\Test_Output.mp4"
Use a combination of scale and pad to generate proportionally resized images centered onto a 1080p frame.
Use
ffmpeg -framerate 1/5 -start_number 0 -reinit_filter 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:eval=frame" -r 25 -c:v libx264 "F:\Destination_Output\Test_Output.mp4"
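For 10 seconds per image instead of 5, only the input frame rate changes (everything else stays the same):
ffmpeg -framerate 1/10 -start_number 0 -reinit_filter 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:eval=frame" -r 25 -c:v libx264 "F:\Destination_Output\Test_Output.mp4"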
I have N input animation frames as images in a folder and I want to create interpolated in-between frames for a smoother animation of length N * M, i.e. for every input frame I want to create M output frames that gradually morph to the next frame, e.g. with the minterpolate filter.
In other words, I want to increase the FPS M times, but I am not working with time as I am not working with any video formats, both input and output are image sequences stored as image files.
I was trying to combine the -r option and the fps filter, but without success as I don't know how they work together. For example:
I have 12 input frames.
I want to use the minterpolate filter to achieve 120 frames.
I use the command ffmpeg -i frames/f%04d.png -vf "fps=10, minterpolate" -r 100 interpolated_frames/f%04d.png
The result I get is 31 output frames.
Is there a specific combination of -r and the fps filter I should use? Or is there another way I can achieve what I need?
Thank you!
FFmpeg assigns a framerate of 25 to formats which don't have an inherent frame rate, like image sequences.
The image sequence demuxer has an option to set a framerate. And the minterpolate filter has an option for target fps.
ffmpeg -framerate 12 -i frames/f%04d.png -vf "minterpolate=fps=120" interpolated_frames/f%04d.png
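The ratio between the minterpolate target fps and the input -framerate is the interpolation factor M, so the same pattern works for other factors. For example, for M=4 with the same 12 input frames (a sketch based on the command above):
ffmpeg -framerate 12 -i frames/f%04d.png -vf "minterpolate=fps=48" interpolated_frames/f%04d.png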
We have some videos with different scales and aspect ratios, and we'd like to convert them to a fixed 640x480 size (4:3 aspect ratio, with letterbox padding if necessary).
Two sizes occur very often: 853×480 and 1280×720.
I did some research and made some attempts before writing this question, but didn't get the expected result.
For example:
ffmpeg -i video.mp4 -vf "scale=640:480,pad=640:480:(ow-iw)/2:(oh-ih)/2,setdar=4/3" -c:a copy output.mp4
setdar=4/3 seems to be required because if I omit it, the result keeps the original aspect ratio.
Is there a solution that handles the different input sizes?
The generic filterchain for fitting a video in a WxH canvas is
"scale=iw*sar:ih,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:-1:-1"
The first scale filter makes sure the video is not kept anamorphic. If you know the video has square pixels, you can skip it. The second scale filter fits the video within a 640x480 canvas using the force_original_aspect_ratio option, and the pad filter then centers the result on a 640x480 canvas.
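Applied to the command from the question, that becomes (audio copied as before; pad fills with black by default):
ffmpeg -i video.mp4 -vf "scale=iw*sar:ih,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:-1:-1" -c:a copy output.mp4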
I have been trying to use ffmpeg to create a waveform image from an Opus file. So far I have found three different methods but cannot determine which one is best.
The end result should hopefully be a sound wave that is only approx. 55px in height. The image will become part of a CSS background-image.
Adapted from Generating a waveform using ffmpeg:
ffmpeg -i file.opus -filter_complex \
"showwavespic,colorbalance=bs=0.5:gm=0.3:bh=-0.5,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=black#0.5" \
file.png
which produces this image:
Next, I found this one (my favorite because of its simplicity):
ffmpeg -i test.opus -lavfi showwavespic=split_channels=1:s=1024x800 test.png
And here is what that one looks like:
Finally, this one from the FFmpeg Wiki: Waveform, though it seems less efficient because it uses a second utility (gnuplot) rather than just ffmpeg:
ffmpeg -i file.opus -ac 1 -filter:a aresample=4000 -map 0:a -c:a pcm_s16le -f data - | \
gnuplot -e "set terminal png size 525,050;set output 'file.png';unset key;unset tics;unset border; set lmargin 0;set rmargin 0;set tmargin 0;set bmargin 0; plot '
Option two is my favorite, but I don't like the margins on the top and bottom of the waveforms.
Option three (using gnuplot) makes the best-shaped image for our needs, since the initial spike in sound makes the rest almost too small to use (the lines tend to almost disappear) when the image is only 50 pixels high.
Any suggestions on how I might best approach this? I understand very little about any of the options I see, except of course for the size. Note too that I have tens of thousands of these to process, so naturally I want to make a wise choice at the very beginning.
[Image: original and manipulated waveforms]
You can use the compand filter to adjust the dynamic range. drawbox is then used to make the horizontal line.
ffmpeg -i test.opus -filter_complex \
"compand=gain=-6,showwavespic=s=525x50, \
drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
-vframes 1 output.png
It won't be quite as accurate a representation of your audio as the original waveform, but it may be an improvement visually, especially on such a wide scale.
Also see FFmpeg Wiki: Waveform.
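If you prefer the split-channels look from option two, the same compression can be combined with it (a sketch; adjust the gain to taste):
ffmpeg -i test.opus -filter_complex "compand=gain=-6,showwavespic=split_channels=1:s=525x50" -vframes 1 output.png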