ffmpeg timing individual frames of an image sequence

I have an image sequence input of WebP images concatenated (for various reasons) into a single file. I have full control over the single file's format and can potentially repackage it in a container (IVF etc.) if a proper one exists.
I would like ffmpeg to consume this input and time each individual frame properly (say, the first displayed for 5 seconds, the next for 3 seconds, then 7, 12, etc.) and output a video (mp4).
My current approach is using image2pipe or webp_pipe followed by a list of loop filters, but I am curious whether there are any solid alternatives, potentially a simple format/container I could use, in order to reduce or completely avoid the ffmpeg filter instructions, as there might be hundreds or more in total.
ffmpeg -filter_complex "...movie=input.webps:f=webp_pipe,loop=10:1:20,loop=10:1:10..." -y out.mp4
I am aware of the concat demuxer, but having a separate file for each input image is not an option in my case.
I have tried the IVF format, which works OK for VP8 frames but doesn't seem to accept WebP. An alternative would be welcome, but way too many exist for me to study every single one, so help would be appreciated.
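For reference, the per-frame timing being sought maps directly onto the concat demuxer's duration directives, the approach ruled out above because it needs one file per image. A minimal sketch with hypothetical filenames (the last entry is repeated because of a known concat demuxer quirk, so the final duration is honored):
# frames.txt
file 'frame1.webp'
duration 5
file 'frame2.webp'
duration 3
file 'frame2.webp'
ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr out.mp4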

Related

How to remove a frame with ffmpeg without re-encoding?

I am making a datamoshing program in C++, and I need to find a way to remove one frame from a video (specifically, the P-frame right after a sequence jump) without re-encoding the video. I am currently using H.264 but would like to be able to do this with VP9 and AV1 as well.
I have one way of going about it, but it doesn't work for one frustrating reason (mentioned later). I can turn the original video into two intermediate videos - one with just the I-frame before the sequence jump, and one with the P-frame that was two frames later. I then create a concat.txt file with the following contents:
file video.mkv
file video1.mkv
And run ffmpeg -y -f concat -i concat.txt -c copy output.mp4. This produces the expected output, although it is of course not as efficient as I would like, since it requires creating intermediate files and reading the .txt file from disk (performance is very important in this project).
Worse yet, I couldn't generate the intermediate videos with ffmpeg; I had to use Avidemux. I tried all sorts of variations on ffmpeg -y -ss 00:00:00 -i video.mp4 -t 0.04 -codec copy video.mkv, but that command seems to really bug out with videos 1-2 frames long, while it works for longer videos no problem. My best guess is that there is some internal check to ensure the output video is not corrupt (which, unfortunately, is exactly what I want it to be!).
Maybe there's a way to do it this way that gets around that problem, or better yet, a more elegant solution to the problem in the first place.
Thanks!
If you know the PTS, data offset, or packet index of the target frame, then you can use the noise bitstream filter. This is codec-agnostic.
ffmpeg -copyts -i input -c copy -enc_time_base -1 -bsf:v:0 noise=drop=eq(pos\,11291) out
This will drop the packet from the first video stream stored at offset 11291 in the input file. See other available variables at http://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise
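The same expression machinery can select by packet index instead of byte offset; a minimal sketch, assuming you want to drop the 12th video packet (the n variable is the zero-based packet index; filenames are hypothetical):
ffmpeg -copyts -i input.mp4 -c copy -bsf:v:0 noise=drop=eq(n\,11) out.mp4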

How do I use ffmpeg to extract frames starting at a specific number?

I've decided for some reason to upscale an entire 90-minute movie using AI. Problem is, I have several demo scenes that have already been upscaled, and I want to keep those frames rather than upscaling them again. Basically I want to export frames starting at a specific number, like ffmpeg -i scene1.mp4 scene1/%10d+[starting number].jpg. If the specified number were 1550, for example, the first frame it would export would be 0000001550.jpg. I still want it to start at the first frame of the input video, though; the only things I want to change are the names of the output files. Is there a way to do this?
Use the -start_number option of the image2 muxer:
ffmpeg -i scene1.mp4 -start_number 1550 scene1/%10d.jpg

Keep FFmpeg processing if the input fails

I'm trying to save a stream to a video file. If the input stream goes down, FFmpeg automatically stops encoding, but I want the output to still cover those seconds during which the input is down (as black frames or by freezing the last frame).
What I have tried:
ffmpeg -i udp://x.x.x.x:y -c:v copy output.mp4
I wonder if it is possible to keep writing the mp4 file even if the input goes down.
You need to code a special application for this.
It will take the input (re-encoding it if necessary) and will output to ffmpeg.
In the special app, you can check whether the source is offline or not and act accordingly.
The crucial thing here is that the PCR values must be continuous, which is why this kind of thing is hard to do or code in general. But it can be done.
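One minimal way to sketch that split, assuming a hypothetical relay_app that emits decoded frames continuously (substituting black frames or repeating the last frame while the UDP source is down), is to have it write raw video to a pipe that ffmpeg then encodes; the frame size, pixel format, and rate below are placeholder assumptions:
./relay_app udp://x.x.x.x:y | ffmpeg -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 25 -i - -c:v libx264 output.mp4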

Is there some sort of ffmpeg output conversion to ffmpeg commandline?

Say we have many video recordings that we want to merge with -vcodec copy (or equivalent syntax), without re-encoding and without loss of quality, plus a few recordings (a minority) with different codecs, parameters, and so on. We can run ffprobe on a file that represents the majority of the sources and get a lot of information.
But can we get from that commandline hints for ffmpeg that could be used to convert the other (not yet "compatible") files to this same format? At least for one selected stream of the "master" file, for example.
The question is not about some specific output codec and so on.
There is no existing tool to do this. You would need to write one.
Each video stream inside a video file can only use a single codec. So I recommend that, as a first step, you merge the files sharing a codec with -vcodec copy. Then check which codec is most common among your merged files (e.g. CodecA). As a second step, convert the merged files with other codecs to CodecA. Finally, merge all files (which now all use CodecA) with -vcodec copy.
Please keep in mind that if the video files are different sizes, you have to re-encode them.
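A minimal sketch of the manual version of that workflow, with hypothetical filenames and example target parameters read off the "master" file:
# inspect the master file's first video stream
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height,pix_fmt -of default=noprint_wrappers=1 master.mp4
# convert a non-matching file toward those parameters (the values here are examples)
ffmpeg -i other.mp4 -c:v libx264 -pix_fmt yuv420p -s 1280x720 -c:a copy other_converted.mp4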

How to use ffmpeg to concatenate two videos of different aspect ratios?

I have two videos, one 640x480 and one 480x640 and I want to use ffmpeg to concatenate them together, but I want the resulting video to be 640x640 with both of the videos letterboxed. Is there a way to do this?
For the concat step, see "How To create mosaic with live stream with ffmpeg (or with xuggle)" above.
Then read the docs on the ffmpeg 'pad' and 'crop' options, paying attention to values that must be divisible by 2 or some such.
You should be able to do what you want, but you may have to do separate experiments on the concat and on the padding/cropping to get all the frame sizes correct.
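A minimal sketch of the combined operation, assuming the two inputs from the question: pad centers each input on a 640x640 canvas, then the concat filter joins them (filenames are hypothetical):
ffmpeg -i a.mp4 -i b.mp4 -filter_complex "[0:v]pad=640:640:(ow-iw)/2:(oh-ih)/2[v0];[1:v]pad=640:640:(ow-iw)/2:(oh-ih)/2[v1];[v0][v1]concat=n=2:v=1:a=0[v]" -map "[v]" out.mp4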
