FFmpeg: increment filename number past 1000 and reset from 1

From the documentation:
ffmpeg -i video.webm image-%03d.png
This will extract 25 images per second from the file video.webm and save them as image-000.png, image-001.png, image-002.png up to image-999.png. If there are more than 1000 frames then the last image will be overwritten with the remaining frames leaving only the last frame.
Is there any way to increment this number past 1000, and can I also have this restart from 1 so that we're not just overwriting the last frame?
I have a script that analyzes these images as they come in so I use locally stored images as a buffer/queue. It's also useful for me to have more images stored so I can go back and debug anything, so being able to do the above would be quite helpful for me.

ffmpeg -i video.webm image-%04d.png
Will output image-0001.png, image-0002.png, and so on, allowing you to go beyond 999.
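The pattern is ordinary printf-style zero padding, so you can widen it as far as you need; a quick illustration using the shell's printf, which expands %0Nd the same way:

```shell
# %06d pads to six digits, covering up to 999999 frames:
printf 'image-%06d.png\n' 1 2 100000
# -> image-000001.png
#    image-000002.png
#    image-100000.png
```

Resetting the counter back to 1 mid-run is not something the image2 muxer does on its own; widening the padding is the straightforward fix.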

Related

How to select range of an image sequence with sequential suffixes in ffmpeg

I have 100 images (001.png ... 100.png) in my img folder, and I want to create two GIF files from them, each with a different frame range for a specific use.
The first case uses all 100 images. I use the code below and it works fine.
ffmpeg -f image2 -framerate 30 -i %003d.png -vf scale=-2:480 myAnim.gif
But in the second case, I want to use only 50 images, from 20 to 70. What command should I use to select this specific range?
For the starting image, see the documentation, specifically:
start_number
Set the index of the file matched by the image file pattern to start
to read from. Default value is 0.
For the end, you need to use the -t duration input option to control it. If you have 50 images at 30 fps, it'll run for 50/30 ≈ 1.67 seconds, so put down -t 1.67 (round up to be on the safe side).
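Putting the two together, a sketch with the numbers from the question (the awk line just rounds frames/fps up to two decimals; the ffmpeg command and output name mirror the asker's, so treat them as placeholders):

```shell
# Round frames/fps up to two decimals to get a safe -t value
t=$(awk -v frames=50 -v fps=30 'BEGIN { printf "%.2f", int(frames/fps*100 + 0.999)/100 }')
echo "$t"   # -> 1.67

# Hypothetical full command: start reading at 020.png, stop after ~50 frames
# ffmpeg -f image2 -framerate 30 -start_number 20 -i %003d.png -t "$t" -vf scale=-2:480 myAnim2.gif
```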

ffmpeg timing individual frames of an image sequence

I have an image-sequence input of WebP images concatenated (for various reasons) into a single file. I have full control over the single-file format and can potentially reformat it as a container (IVF etc.) if a proper one exists.
I would like ffmpeg to consume this input and time properly each individual frame (consider first displayed for 5 seconds, next 3 seconds, 7, 12 etc.) and output a video (mp4).
My current approach uses image2pipe or webp_pipe followed by a list of loop filters, but I am curious whether there are any solid alternatives - potentially a simple format/container I could use to reduce or completely avoid the ffmpeg filter instructions, as there might be hundreds or more in total.
ffmpeg -filter_complex "...movie=input.webps:f=webp_pipe,loop=10:1:20,loop=10:1:10..." -y out.mp4
I am aware of concat demuxer but having a separate file for each input image is not an option in my case.
I have tried the IVF format, which works OK for VP8 frames but doesn't seem to accept WebP. An alternative would be welcome, but far too many formats exist for me to study each one, so help would be appreciated.
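For contrast, this is how the concat demuxer the asker rules out (it needs one file per image) would express those per-frame timings declaratively - a config fragment with hypothetical filenames; the last file is conventionally repeated so its duration is honored:

```
ffconcat version 1.0
file frame1.webp
duration 5
file frame2.webp
duration 3
file frame3.webp
duration 7
file frame3.webp
```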

FFmpeg dropping frames at end of file using image2 demuxer with multiple encoders, any solutions?

Alright, real simple here. I'm rendering some fractal flames I've created over the years. Which makes the math on all of this really simple.. lol.
I'm trying to generate a 5 second video at 60fps that when played continuously makes a perfect loop.
So I sequence and render exactly 300 frames numbered 000.png through 299.png for one loop. I then send this into FFMpeg with the following command:
ffmpeg -f image2 -framerate 60 -start_number 0 -i '%03d.png' -r 60 -crf 10 output.webm
No matter what, it kills the last 12-18 frames depending on the run and creates a video that players recognize as 4 seconds only.
Here is a snippet of the processing output. (Take note: no matter what I do, the output comes out at 4.66 seconds, even though 300 frames at 60 fps should be exactly 5 seconds - and ffmpeg does claim there are exactly 5 seconds on the input side.)
I have tried replacing the -crf setting with just -quality good. I have tried moving around where I state the framerate. I have tried removing the -r from the output and placing it elsewhere. I have tried building out this call to be as specific as possible, strictly specifying the encoder and its options. I have tried other encoders and get the same result. I have even tried -hwaccel with NVENC and CUVID respectively.
Nothing I do works.
Any thoughts here? Maybe alternatives to FFmpeg? Maybe different versions of FFmpeg? I don't know what I should do next and thought I would ask.
Diagnostic output on a finished file, for reference. This one actually got close, with 294 frames and a 4.9-second runtime (it is much higher res, though):

How to convert images to a video by the time they have been taken/saved and not by name?

I have a livestream that takes a picture every one or two seconds, and a folder in which the pictures are saved.
Usually I would simply use ffmpeg -start_number -i IMG_%d.JPG video.webm (I need the end format to be .webm). I want to use the saved pictures to form a time-lapse to put up next to the stream, so people can see the last 24h / 7d / 90d. So I wondered if I could arrange for all the pics of that time period - and only them - to be used.
I later want this to happen automatically, so I don't have to rewrite the code every day.
Thanks for your help
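No answer was recorded here, but one approach is to select files by modification time rather than by name. A minimal sketch, assuming GNU find/sort and ffmpeg's concat demuxer; the pics folder and the 24-hour window are placeholders:

```shell
# Folder where the livestream saves its frames (placeholder)
mkdir -p pics

# Collect JPGs modified in the last 24 hours (1440 minutes),
# sorted by modification time, into a concat list for ffmpeg
find pics -name 'IMG_*.JPG' -mmin -1440 -printf '%T@ %p\n' \
  | sort -n | cut -d' ' -f2- \
  | sed "s|.*|file '&'|" > list.txt

# Then encode the list in that order, e.g.:
# ffmpeg -f concat -safe 0 -i list.txt video.webm
```

Swapping the -mmin value (1440, 10080, 129600) gives the 24h/7d/90d variants, and the whole thing can run from cron so nothing needs rewriting each day.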

FFmpeg segmentation and inaccurate/wrong framerate

I use ffmpeg to save an RTSP stream, at 15 fps, to files. The command is similar to this (I've simplified it):
ffmpeg -y -i rtsp://IP/media.amp -c copy -r 15 -f segment -segment_time 60 -reset_timestamps 1 -segment_atclocktime 1 -strftime 1 outputFile%Y-%m-%d_%H-%M-%S.mp4
It basically creates 1 minute long files from the stream, but the problem is that the framerate of every segmented file is NEVER 15fps.
The values that I get are something like this.
14.99874
15.00031
This is a huge problem for me because I need to merge these files with other 15 fps videos, and the result is not good. The merged file is unstable, the image breaks up, and sometimes even VLC crashes if I randomly click on the time bar.
If I just merge the stream files, all is well; when I try to mix them with anything else, there is nothing I can do to get a video file that is watchable and stable.
Is this normal? What can I do to have segments with a fixed 15fps without re-encoding?
Thanks in advance.
As Mulvya pointed out, ffmpeg truncates the last frame.
There are two ways to solve this:
1) Save the files to a container other than MP4 - TS, for example.
2) Removing the last frame of the video also works, but that requires a filter, which means re-encoding, which can be long and heavy on the CPU/RAM.
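A sketch of option 1 - the same segmenting command from the question, writing MPEG-TS segments instead of MP4 (the stream URL is the question's placeholder; -r 15 is dropped since it has no effect alongside -c copy):

```shell
ffmpeg -y -i rtsp://IP/media.amp -c copy -f segment -segment_time 60 \
  -reset_timestamps 1 -segment_atclocktime 1 -strftime 1 \
  'outputFile%Y-%m-%d_%H-%M-%S.ts'
```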
