I have N input animation frames as images in a folder and I want to create interpolated in-between frames for a smoother animation of length N * M, i.e. for every input frame I want to create M output frames that gradually morph into the next frame, e.g. with the minterpolate filter.
In other words, I want to increase the FPS M times, but I am not dealing with time or any video format: both input and output are image sequences stored as image files.
I was trying to combine the -r option and the fps filter, but without success, as I don't know how they work together. For example:
I have 12 input frames.
I want to use the minterpolate filter to achieve 120 frames.
I use the command ffmpeg -i frames/f%04d.png -vf "fps=10, minterpolate" -r 100 interpolated_frames/f%04d.png
The result I get is 31 output frames.
Is there a specific combination of -r and fps I should use? Or is there another way I can achieve what I need?
Thank you!
FFmpeg assigns a framerate of 25 to formats which don't have an inherent frame rate, like image sequences.
The image sequence demuxer has an option to set the framerate, and the minterpolate filter has an option for the target fps.
ffmpeg -framerate 12 -i frames/f%04d.png -vf "minterpolate=fps=120" interpolated_frames/f%04d.png
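More generally (this is my reading of how the two options interact, not a formula the ffmpeg documentation states): reading the sequence with -framerate R and setting minterpolate's fps to R*M should yield roughly M output frames per input frame, because R input frames per second are interpolated up to R*M frames per second. For example, for M = 5 with the same 12 input frames:
ffmpeg -framerate 12 -i frames/f%04d.png -vf "minterpolate=fps=60" interpolated_frames/f%04d.png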
Exporting all the frames in a single Nx1 tile can be done like this:
ffmpeg -i input.mp4 -vf "fps=5,tile=100x1" output.jpg
The problem is that I don't know up front how many frames there are going to be, so I specify a much higher number than expected (based on the movie length and fps). Ideally I would like something like this:
ffmpeg -i input.mp4 -vf "fps=5,tile=Nx1" output.jpg
Where Nx1 would tell ffmpeg to create an image as wide as the number of exported frames.
I know there is a showinfo filter that might come in handy, but I was never able to integrate it so that its output is used as input for tile.
Also, I tried pre-calculating the number of frames based on the movie duration and fps, but this was never very accurate. Even for a movie of exactly 3.000 s at 3 fps it produced 8 frames.
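One workaround I can sketch (not a built-in ffmpeg feature: it relies on a two-pass shell script and on parsing ffmpeg's progress output, which is fragile and version-dependent) is to first count how many frames fps=5 actually produces and then substitute that number into tile:

# first pass: count the frames produced by fps=5 (nothing is written to disk)
N=$(ffmpeg -i input.mp4 -vf "fps=5" -f null - 2>&1 | tr '\r' '\n' | sed -n 's/^frame= *\([0-9]*\).*/\1/p' | tail -n 1)
# second pass: lay out exactly that many frames in a single row
ffmpeg -i input.mp4 -vf "fps=5,tile=${N}x1" output.jpg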
I use ffmpeg with complex filtering. The input consists of different sets of Full HD surveillance camera videos, each 10 to 15 seconds long. Set size (number of videos per set) varies. To remove unchanged frames I apply mpdecimate. To avoid being triggered by moving bushes while still keeping the objects I want to remain, I apply a complex filter:
split the video (the original and a dummy to detect motion/stills)
scale the dummy down (so the 8x8-block-metric of mpdecimate matches the size of moving objects I want to keep)
add white boxes to the dummy to mask objects that move unintentionally
apply mpdecimate to dummy to remove non-changing frames
scale dummy back to original size
overlay the remaining frames of dummy with matching frames of original
All this works fine if the number of input videos is small (less than 100). The memory consumption of the ffmpeg process varies somewhere between 2 GiB and 5 GiB.
If the number of input files gets larger (say 200), memory consumption suddenly jumps to insane numbers until memory (32 GiB plus 33 GiB of swap) runs out and ffmpeg gets killed. I cannot predict if or why this happens. I have one example where a set of 340 videos worked using 6 GiB. Every other set of more than 100 videos I tried eats all RAM in under two minutes and dies.
There is no particular error message from ffmpeg.
dmesg says:
Out of memory: Kill process 29173 (ffmpeg)
Killed process 29173 (ffmpeg) total-vm:66707800kB
My ffmpeg command:
ffmpeg -f concat -safe 0 -i vidlist -vf 'split=2[full][masked];[masked]scale=w=iw/4:h=ih/4,drawbox=w=51:h=153:x=101:y=0:t=fill:c=white,drawbox=w=74:h=67:x=86:y=49:t=fill:c=white,drawbox=w=51:h=149:x=258:y=0:t=fill:c=white,drawbox=w=13:h=20:x=214:y=103:t=fill:c=white,drawbox=w=29:h=54:x=429:y=40:t=fill:c=white,drawbox=w=35:h=49:x=360:y=111:t=fill:c=white,drawbox=w=26:h=54:x=304:y=92:t=fill:c=white,drawbox=w=48:h=27:x=356:y=105:t=fill:c=white,drawbox=w=30:h=27:x=188:y=124:t=fill:c=white,drawbox=w=50:h=54:x=371:y=7:t=fill:c=white,drawbox=w=18:h=38:x=248:y=107:t=fill:c=white,drawbox=w=21:h=51:x=242:y=33:t=fill:c=white,mpdecimate=hi=64*80:lo=64*40:frac=0.001,scale=w=iw*4:h=ih*4[deduped];[deduped][full]overlay=shortest=1,setpts=N/(15*TB),mpdecimate=hi=64*80:lo=64*50:frac=0.001,setpts=N/(15*TB)' -r 15 -c:v libx265 -preset slower -crf 37 -pix_fmt yuv420p -an result.mkv
ffmpeg version 4.1.6
Debian 4.19.171-2
I hope my filter can be tuned in some way that achieves the same result but doesn't eat that much RAM, but I have no clue how. Within reasonable limits, I wouldn't mind if processing time suffers. Any hints appreciated.
It seems the memory issue can be avoided by removing the split filter. Instead of splitting one input into two streams (which ffmpeg has to store in memory), the same input can be loaded twice.
So instead of using "full" and "dummy" as below
ffmpeg -i source -vf 'split=2[full][dummy];...;[dummy][full]overlay...
one would use "0:v" and "1:v" as in
ffmpeg -i source -i source -filter_complex '.....;[0:v][1:v]overlay...
I got this to work with plain video files as inputs, but so far I have failed to do the same with the concat demuxer as input.
Any hints very welcome.
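One direction that might be worth trying (an untested sketch; it simply applies the same two-input idea to the concat demuxer by reading the same vidlist twice, with the drawbox chain elided) is to specify the concat demuxer for both inputs:

ffmpeg -f concat -safe 0 -i vidlist -f concat -safe 0 -i vidlist -filter_complex '[1:v]scale=w=iw/4:h=ih/4,drawbox=...,mpdecimate=hi=64*80:lo=64*40:frac=0.001,scale=w=iw*4:h=ih*4[deduped];[deduped][0:v]overlay=shortest=1,...' -r 15 -c:v libx265 -preset slower -crf 37 -pix_fmt yuv420p -an result.mkv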
I have an image that represents a short animation as 40 frames arranged in 5 rows and 8 columns. How can I use ffmpeg to generate a video from this?
I've read that answer about generating a video from a list of images, but I'm unsure how to tell ffmpeg to read parts of a single image in sequence.
As far as I know, there is no built-in way of doing this using ffmpeg. But you could first extract all the images using two nested for loops and an ImageMagick crop, and then use ffmpeg to generate the video from the extracted files.
You can use an animated crop to do this. The basic template is:
ffmpeg -loop 1 -i image -vf "crop=iw/8:ih/5:mod(n\,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4
Basically, the crop extracts a window of iw/8 x ih/5 each frame, and the coordinates of the top-left corner of the crop window are animated by the 3rd and 4th arguments, where n is the frame index (starting from 0).
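One follow-up detail (based on the default mentioned earlier that image inputs are read at 25 fps; the 10 fps value here is just an arbitrary example, not something from the question): the playback speed of the 40 resulting frames can be controlled by setting the input rate:

ffmpeg -loop 1 -framerate 10 -i image -vf "crop=iw/8:ih/5:mod(n\,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4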
I am programmatically extracting multiple audio clips from single video files using ffmpeg.
My input data (start and end points) are specified in frames rather than seconds, and the audio clip will be used by a frame-centric user (an animator). So, I'd prefer to work in frames throughout.
In addition, the framerate is 30fps, which means I'd be working in steps of 0.033333 seconds, and I'm not sure it's reasonable to expect ffmpeg to trim correctly given such values.
Is it possible to specify a frame number instead of an ffmpeg time duration for start point (-ss) and duration (-t)? Or are there frame-centric ffmpeg commands that I've missed?
Audio frame or sample numbers don't correspond to video frame numbers, and I don't see a way to specify audio trim points by referencing video frame indices. Nevertheless, see this answer for more details.
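Given that, a practical approach (a sketch under the assumption from the question that the timeline runs at exactly 30 fps; input.mp4 and the frame numbers 450 and 630 are made-up placeholders) is to convert frame indices to seconds in the calling script and pass the results to -ss and -t:

# clip from frame 450 to frame 630 of a 30 fps timeline
START=$(awk 'BEGIN { printf "%.6f", 450/30 }')
DUR=$(awk 'BEGIN { printf "%.6f", (630-450)/30 }')
ffmpeg -i input.mp4 -ss "$START" -t "$DUR" -vn clip.wav

Placing -ss after -i makes ffmpeg decode from the start and trim accurately, at the cost of some extra decoding time.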
I have a question about ffmpeg usage. Every time I try to convert video files into some different format, the output file ends up with a static keyframe sequence.
What I mean is that keyframes appear at a fixed distance of 12 frames. I know that this is controlled by the -g parameter, which you can change to any other number.
ffmpeg -i 1.avi -vcodec mpeg4 -b 2000000 out.avi
I believe there should be some way to make keyframes appear at uneven intervals. These intervals should be calculated by the codec, based on image changes in the video file, so that keyframes are inserted only when they are needed, not consistently every N frames.
Can somebody please explain to me how this "smart" encoding can be done with ffmpeg?
Thank you
SOLUTION: OK, what I've been looking for has a very simple solution. If you set -g to zero, ffmpeg will choose keyframes based on the video shots and bitrate.
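For reference, a minimal variant of the command above with that change (everything else left as in the original; newer ffmpeg versions may prefer -b:v over -b) would be:

ffmpeg -i 1.avi -vcodec mpeg4 -g 0 -b 2000000 out.avi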