I have a bunch of videos in a folder at different fps. I need to calculate the number of frames that can be extracted at 0.5 fps.
It's a fairly simple mathematical problem; I just need help with the formula.
Thanks
If the duration of the video is t seconds (and assuming that's when ffmpeg will terminate), then extracting at 0.5 fps should ideally return t × 0.5 = t/2 frames.
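More generally, at an extraction rate of r fps you get roughly floor(t × r) frames, so a 60-second video yields 30 frames at 0.5 fps. As a sketch (file names here are placeholders, and it assumes ffprobe is available alongside ffmpeg), you could read the duration and then do the actual extraction like this:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4
ffmpeg -i input.mp4 -vf fps=0.5 frame_%04d.png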
I have a video at 24 frames per second. I understand that means 24 images appear in a row during that 1 second? Is that wrong? If it is true, can each of those 24 images in that second be deleted and edited individually, and can ffmpeg do that? This is just an idea I suddenly had for interfering more deeply with an existing video. Does anyone else think like that?
Yes, 24 frames per second means there are 24 images (each image is a frame) in each second. ffmpeg can extract that sequence of frames. ffmpeg can assemble a sequence of frames to create a video. You are free to edit/photoshop/replace the frames in between those two steps; ffmpeg can't prevent that and won't be bothered by it (so long as the image dimensions remain the same as all the other frames).
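As a rough sketch of those two steps (file names, frame rate, and codec settings here are just illustrative):
ffmpeg -i input.mp4 frames/frame_%05d.png
ffmpeg -framerate 24 -i frames/frame_%05d.png -c:v libx264 -pix_fmt yuv420p output.mp4
You would edit the PNG files in the frames/ directory between the two commands.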
When recording video, we know that it has, for instance, 60 fps. But do we also know what the video's offset is? We know that the time difference between frames is 1/60 s, but when did the recording start, as an absolute value? The problem comes into play when trying to sync two videos of the same scene, for instance a stereo pair shot with different cameras that were not frame-locked. My idea is to interpolate two audio-synced 60 fps videos to, for instance, 180 fps and then convert them back to 60 fps, hopefully ending up with matching frames. According to my test, this does not work. A better solution would be to know the time difference between the frames and interpolate based on that difference, for instance using a deep-learning algorithm.
How can ffmpeg help me here?
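For what it's worth, the interpolation step described above can be sketched with ffmpeg's minterpolate filter (file names are placeholders, and as noted, this approach may still not produce matching frames):
ffmpeg -i left.mp4 -vf "minterpolate=fps=180:mi_mode=mci" left_180.mp4
ffmpeg -i left_180.mp4 -vf fps=60 left_resynced.mp4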
I am programmatically extracting multiple audio clips from single video files using ffmpeg.
My input data (start and end points) are specified in frames rather than seconds, and the audio clip will be used by a frame-centric user (an animator). So, I'd prefer to work in frames throughout.
In addition, the frame rate is 30 fps, which means I'd be working in steps of 1/30 ≈ 0.033333 seconds, and I'm not sure it's reasonable to expect ffmpeg to trim accurately given such values.
Is it possible to specify a frame number instead of an ffmpeg time duration for start point (-ss) and duration (-t)? Or are there frame-centric ffmpeg commands that I've missed?
Audio frame or sample numbers don't correspond to video frame numbers, and I don't see a way to specify audio trim points by referencing video frame indices. Nevertheless, see this answer for more details.
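The usual workaround is to convert frame numbers to seconds yourself before invoking ffmpeg. A minimal sketch, assuming 30 fps and a clip running from frame 450 to frame 510, so start = 450/30 = 15 s and duration = 60/30 = 2 s (file names are placeholders):
ffmpeg -ss 15.000000 -i input.mp4 -t 2.000000 -vn clip.wav
With a reasonably recent ffmpeg, putting -ss before -i still seeks accurately when re-encoding, and -vn drops the video stream so only audio is written.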
Using ffmpeg I can take a number of still images and turn them into a video. I would like to do this to decrease the total size of all my timelapse photos. But I would also like to extract the still images for use at a later date.
In order to use this method:
- I will need to correlate the original still image against a frame number in the video.
- And I will need to extract a thumbnail of a given frame number in a video.
But before I go down this rabbit hole, I want to know if the requirements are possible using ffmpeg, and if so any hints on how to accomplish the task.
note: The still images are a timelapse from a single camera over a day, so the video's temporal compression should give measurable savings compared to a stack of JPEGs.
When you use ffmpeg to create a video from a sequence of images, the images aren't affected in any way. You should still be able to use them for what you're trying to do, unless I'm misunderstanding your question.
Edit: You can use ffmpeg to create images from an existing video. I'm not sure how well it will work for your purposes, but the images are pretty high quality, if not the same as the originals. You'd have to play around with it to make sure the extracted images are exactly the same as the input images as far as sequential order and naming, but if you take fps into account, it should work.
The command to do this (from the ffmpeg documentation) is as follows:
ffmpeg -i movie.mpg movie%d.jpg
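And to grab a single frame by number, the select filter can be used; note that n is zero-based, so frame 100 is eq(n,99) (the output name is arbitrary):
ffmpeg -i movie.mpg -vf "select=eq(n\,99)" -vframes 1 frame100.png
Keep in mind that with a lossy video codec the extracted images won't be bit-identical to the original stills; a lossless codec (for example ffv1, or x264 in lossless mode) would be needed for that.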
Hi, I'm trying to create a repeating DTMF tone so I can play it with AVAudioPlayer. Currently, when I loop it in audio-editing software such as Audacity, there is always a glitch or change in tone at the point where it repeats. Is there some particular length I need to make it to avoid this? I initially created a one-second DTMF tone in Audacity, but it does not repeat smoothly.
It can't repeat smoothly, no matter how much you try.
You need to calculate the period of both frequencies and choose the loop length accordingly.
For example, if you combine 770 Hz and 1336 Hz, the periods are 1000/770 ≈ 1.299 ms and 1000/1336 ≈ 0.749 ms.
Then factor in your sample rate; let it be 44100 Hz. A run of 1000 periods of each tone would be:
1000 × 44100 / 770 ≈ 57272 samples
and
1000 × 44100 / 1336 ≈ 33009 samples.
The least common multiple of those lengths is 1,890,491,448 samples, which at 44100 Hz is about 42,868 seconds.
So, creating a loop and playing it isn't really feasible.
You can either generate the sine waves on the fly and mix them, or create sine-wave samples for the base frequencies and then mix or play them simultaneously.
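As one way to do the on-the-fly approach, here is a sketch that synthesizes the mixed 770 Hz + 1336 Hz tone with ffmpeg's aevalsrc source (sample rate, amplitude, and duration are arbitrary choices):
ffmpeg -f lavfi -i "aevalsrc=0.4*sin(2*PI*770*t)+0.4*sin(2*PI*1336*t):s=44100:d=10" dtmf.wav
Generating a tone long enough for your use case sidesteps the looping problem entirely.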