Extract all frames, or frames from a particular interval, of a video on Android using ffmpeg

I want to extract all frames from a given time interval in a video. I have tried MediaMetadataRetriever.getFrameAtTime(), but it does not give the result I want. I have seen many posts suggesting ffmpeg for this purpose. How does it work on Android? I also want to extract the frames as Bitmap or Mat objects, insert them into a list, and then process them. Can anyone help?
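With ffmpeg the usual approach is to seek to the interval and dump every decoded frame as an image file, then load those files as Bitmaps. A minimal sketch (file names and times are illustrative; on Android you would run the same arguments through an ffmpeg wrapper library such as FFmpegKit rather than a shell):

```shell
# Extract every frame from t=5s to t=10s of input.mp4 as numbered PNGs.
# -ss before -i seeks the input; -t 5 limits the output to 5 seconds.
ffmpeg -ss 5 -i input.mp4 -t 5 frame_%04d.png
```

Each resulting PNG can then be decoded with BitmapFactory.decodeFile() (or read into an OpenCV Mat) and appended to a list for processing.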

Related

Set specific frame as thumbnail for video?

I just want some confirmation, because I have a sneaking suspicion that I won't be able to do what I want, given that I already ran into errors about ffmpeg not being able to overwrite the input file. I still have some hope that what I want to do is some kind of exception, but I doubt it.
I have already used ffmpeg to extract a specific frame into its own image file, and I have set the thumbnail of a video from an existing image file, but I can't figure out how to set a specific frame from the video as the thumbnail. I want to do this without extracting the frame into a separate file, and without creating an output file: I want to edit the video directly and change the thumbnail using a frame from the video itself. Is that possible?
You're probably better off asking in the #ffmpeg-devel IRC channel.
I'd look at "-ss 33.5", or the more precise filter "-vf 'select=gte(n,1000)'"; both will give the same or a very similar result for a 30 fps video.
You can also pipe the image out to your own process without saving it: "ffmpeg ... -f image2pipe - | ..."
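Putting the two answer fragments together: ffmpeg cannot edit the input in place, so a temporary image and a new output file are unavoidable, but both steps are cheap. A hedged sketch (timestamp and file names illustrative):

```shell
# 1) Extract the frame at ~33.5 s into a temporary image.
ffmpeg -ss 33.5 -i input.mp4 -frames:v 1 thumb.jpg

# 2) Attach it as the cover art / thumbnail of a new file, copying all
#    existing streams without re-encoding.
ffmpeg -i input.mp4 -i thumb.jpg -map 0 -map 1 -c copy \
       -disposition:v:1 attached_pic output.mp4
```

The attached_pic disposition is what players read as the embedded thumbnail for MP4/M4V containers; whether a given player honors it varies.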

Extracting frames from a video without a mouse pointer on it using ffmpeg

I have a video created using ffmpeg, and I need to extract frames (screenshots) from it such that the mouse pointer visible in the video does not appear in the extracted frames.
Sounds silly, but is there a way to do this?
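ffmpeg has no filter that recognizes and removes a moving cursor; once the pointer is baked into the pixels, the only in-ffmpeg option is to paint over a region. If the pointer happens to sit in one known spot, the delogo filter can interpolate that area away, a sketch under that assumption (coordinates hypothetical):

```shell
# Mask a fixed 32x32 region where the pointer rests, then grab one
# frame per second; delogo fills the region from surrounding pixels.
ffmpeg -i input.mp4 -vf "delogo=x=100:y=100:w=32:h=32,fps=1" shot_%03d.png
```

For a pointer that moves, the practical fix is upstream: re-capture with the recorder's cursor drawing disabled.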

ffmpeg: add single frame to end of video as images are acquired

I wish to add a frame to the end of a video just after it has been captured, so I can build a timelapse video as the images are acquired.
The idea is to take an image and use ffmpeg to grow the video by appending each image just after it is acquired.
I've seen many questions about adding a logo-type image for a set length of time, or about compiling a whole batch of single images into a video, but not this one.
Anyone got a good idea of what to try?
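ffmpeg cannot append frames to a finished file in place, so the usual workarounds are either to re-render the timelapse from the numbered stills after each capture, or to encode each new image as a one-frame segment and concatenate. A sketch of the simpler rebuild approach (names illustrative):

```shell
# Run after each capture drops a new img_NNNNN.jpg into the directory.
# Rebuilds the whole timelapse from the image sequence at 24 fps.
ffmpeg -y -framerate 24 -i img_%05d.jpg -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```

For long sequences where a full rebuild gets slow, the concat demuxer over pre-encoded per-image segments avoids re-encoding old frames.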

ffmpeg creating multiple output videos, splitting on gt(scene,x)

I want to split one video into multiple parts by detecting the first frame of each shot, using the select filter's scene detection in ffmpeg.
The following command records the scene-change frames and creates a photo mosaic out of them. This tells me the select portion is functional, but I want to use it to create many separate videos, each scene as its own video file.
ffmpeg -i video.mpg -vf "select='gt(scene,0.2331)',scale=320x240,tile=1x100" -frames:v 1 preview.png
Thank you. I think I am close, and I am open to any solution.
You should definitely use the -ss (start time) and -t (duration in seconds from the start time) options. Can you get the time for each of these scene frames? Then you are good to go.
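One way to get those times is to pair the same scene-detection expression with showinfo, which logs a pts_time for every selected frame, then cut each scene with -ss/-t. A sketch (threshold from the question; the cut timestamps are illustrative placeholders):

```shell
# 1) Log the timestamp of each detected scene change to stderr.
ffmpeg -i video.mpg -vf "select='gt(scene,0.2331)',showinfo" -f null - 2>&1 \
  | grep -o 'pts_time:[0-9.]*'

# 2) For each consecutive pair of timestamps, cut one scene.
#    (12.4 and 5.2 below stand in for a real start time and duration.)
ffmpeg -ss 12.4 -i video.mpg -t 5.2 -c copy scene01.mpg
```

With -c copy the cuts snap to keyframes; re-encode (drop -c copy) if frame-accurate boundaries matter.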

Adding a color filter at specific intervals in ffmpeg

I am looking to apply the color filter to an rtmp stream in ffmpeg at specific time intervals, say for 10 seconds every 10 seconds. I have tried two approaches. The first:
-vf "color=#8EABB8#0.9:480x208,select='gte(t,10)*lte(t,20)' [color];[in][color] overlay [out]"
This streams only the 10 seconds indicated by the select, with the color filter applied, rather than playing the whole stream and applying the filter to just those 10 seconds.
I then learnt about split and fifo and tried this approach:
-vf "[in] split [no-color], fifo, [with-color] overlay [out]; [no-color] fifo, select='gte(t,10)*lte(t,20)' [with-color]"
I would expect this to play the entire stream and apply the filters only to the selected 10 seconds, but it does the same as the first approach: it plays just the 10 seconds selected rather than the entire stream.
Thanks in advance.
You changed the order of the streams going into the overlay.
It seems that if a "select"ed stream goes as first input to the overlay filter, overlay also blanks out its output in the non-selected times.
But if you first provide a stable stream to overlay and then the selected, it will output a stream for the whole time.
I tried the following set of filters:
-vf "[in]split[B1][B2];[B1]fifo,drawbox=-1:-1:5000:5000:invert:2000,vflip,hflip[B1E];[B2]fifo,select='gte(t,5)'[B2E];[B1E][B2E]overlay[out]"
My version as a graph:
              _,--[B1]--fifo--drawbox--flip--[B1E]--._
[in]---split--X                                      X--overlay--[out]
              ‾'--[B2]--fifo--select---------[B2E]--'‾
Your version was (the select filter is the first overlay input!):
              _,--fifo--select---[with-color]--._
[in]---split--X                                 X--overlay--[out]
              ‾'--[no-color]--fifo-------------'‾
The reason is that
...[B2E];[B1E][B2E]overlay...
and
...,[B1E]overlay...
are equivalent.
But nevertheless some problems may remain: do you need this once, or every 10 seconds, e.g.?
As this question discusses, there doesn't appear to be a way to apply video filters to a specific time period of a video stream, short of splitting it into pieces, applying filters, and recombining. Please share if you find a better method.
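One alternative worth noting: in newer ffmpeg builds, many filters support timeline editing via an enable= expression, which applies the filter only during a time window and passes the stream through untouched otherwise, no split/select/overlay plumbing needed. A hedged sketch (the filter and times are illustrative stand-ins, not the poster's exact color effect):

```shell
# Desaturate the picture only between t=10s and t=20s; the rest of the
# stream is passed through unmodified.
ffmpeg -i input.mp4 -vf "hue=s=0:enable='between(t,10,20)'" -c:a copy output.mp4
```

Whether this works depends on the filter: only filters flagged as supporting timeline editing (see ffmpeg -filters output) accept enable=.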
I'm dealing with a similar problem. I used a combination of the split, overlay, and concat filters, and it works; you can try it:
-filter_complex "[0:v]split[v1][v2];[v1]select='lt(t,5)',setpts=PTS-STARTPTS[iv1];[v2]select='gte(t,5)'[over];[over][1:v] overlay=W-w:H-h:shortest=1,setpts=PTS-STARTPTS[iv2];[iv1][iv2]concat=n=2:v=1:a=0"
But my problem is: I use a gif as the second input because it contains transparency information, but gif files don't contain audio. How can I make a movie with both transparency (alpha) and audio?