ffplay seek function - ffmpeg

I'm trying to figure out how the seek function bound to the left/right arrow keys in ffplay works.
I went into their open-source code and tried changing the values from 10/-10 to different values so I could check whether the seek moves correctly, but after a few attempts I saw that the movie position after pressing either arrow key doesn't move by exactly the value I specified.
For example, with the default value of 10, if the movie was at 00:10:00, then after pressing the right arrow, which is supposed to move the movie to 00:20:00, I got something like 00:21:35, and it was not consistent.
I tried this on a variety of movies and got different results each time.
Does anyone have any idea what I'm doing wrong, or can anyone explain how seeking works in ffplay?

Video seeking precision depends on a variety of factors, but mainly on PTS, DTS, and GOP length. A GOP (Group of Pictures) starts with an I-frame (a self-contained picture). When you seek, ffplay is probably just trying to find the closest I-frame whose PTS (presentation timestamp) is greater than the target. What complicates things even further is that not all videos have a fixed GOP length, so seeking 10 seconds forward from different positions will not always overshoot by the same amount.
Check out this article on GOP
http://en.wikipedia.org/wiki/Group_of_pictures
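To see this keyframe-snapping effect concretely, here is a small Python sketch. It assumes `ffprobe` is on the PATH; the helper names and the "land on the first keyframe at or after the target" rule are illustrative assumptions, since the exact landing point depends on the demuxer and the seek flags used:

```python
import json
import subprocess

def keyframe_times(path):
    # List the I-frame (keyframe) timestamps of the first video stream.
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
         "-show_frames", "-show_entries", "frame=key_frame,pts_time",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    frames = json.loads(out)["frames"]
    return [float(f["pts_time"])
            for f in frames if int(f.get("key_frame", 0)) == 1]

def seek_landing(keyframes, target):
    # Assumed rule from the answer above: land on the first keyframe
    # whose PTS is at or after the requested timestamp.
    later = [t for t in keyframes if t >= target]
    return min(later) if later else keyframes[-1]

# Irregularly spaced keyframes, as in most real-world encodes:
kfs = [0.0, 4.2, 9.8, 15.1, 21.5]
```

With the keyframes above, `seek_landing(kfs, 20.0)` returns `21.5`: the request lands 1.5 s late, and the overshoot changes with every GOP, which matches the behaviour described in the question.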

Related

Need to create a longer version of a video by looping it, while keeping the frame rate, bitrate, audio quality, and resolution

I have a 5-minute-long 24 fps video, and I need to extend its length by some integer factor, something like 12, though preferably 24. I need to keep the frame rate the same, which I have achieved through moviepy, but the bitrate has changed. The whole point of this is to create a video that is equally intensive on an APU, but longer. I don't understand the implications of higher bitrates (data or total), but I need to. Is there a way to concatenate a video with itself and maintain the values shown in the 'Details' tab of its properties? Keep in mind I'm doing low-power measurements.
I tried Microsoft Clipchamp, but that gave me 30 fps. I looked for other free video editors, but none gave 24 fps. I tried moviepy, which gave 24 fps but a much lower bitrate.
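For what it's worth, ffmpeg itself can loop a file without re-encoding via its `-stream_loop` input option, which sidesteps the bitrate change entirely. A sketch in Python (the wrapper function names are my own):

```python
import subprocess

def loop_cmd(src, dst, times):
    # -stream_loop N makes ffmpeg read the input N *extra* times,
    # so looping 12x means -stream_loop 11. -c copy remuxes without
    # re-encoding, so frame rate, bitrate, resolution and audio
    # quality are carried over from the source untouched.
    return ["ffmpeg", "-y", "-stream_loop", str(times - 1),
            "-i", src, "-c", "copy", dst]

def loop_video(src, dst, times):
    subprocess.run(loop_cmd(src, dst, times), check=True)
```

`loop_cmd("input.mp4", "looped.mp4", 12)` builds a command that plays the input 12 times in a row; since `-c copy` only remuxes, the details shown in the file's properties should match the source.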

Split a movie so that each GIF is under a certain file size

Problem
I want to convert a long movie into a series of animated GIFs.
Each GIF needs to be <5MB.
Is there any way to determine how large a GIF will be while it is being encoded?
Progress So Far
I can split the movie into individual frames:
ffmpeg -i movie.ogv -r 25 frameTemp.%05d.gif
I can then use convert from ImageMagick to create GIFs. However, I can't find a way to determine the likely file size before running the command.
Alternatively, I can split the movie into chunks:
ffmpeg -i movie.ogv -vcodec copy -ss 00:00:00 -t 00:20:00 output1.ogv
But I've no way of knowing whether, when I convert the file to a GIF, it will be under 5MB.
A 10 second scene with a lot of action may be over 5MB (bad!) and a static scene could be under 5MB (not a problem, but not very efficient).
Ideas
I think that what I want to do is convert the entire movie into a GIF, then find a way to split it by file size.
Looking at ImageMagick, I can split a GIF into frames, but I don't see a way to split it into animated GIFs of a certain size / length.
So, is this possible?
There is currently no "stop at this file size" option in avconv that I'm aware of. It could, of course, be hacked together quite quickly, but the libav project doesn't do quick hacks at the moment, so the feature will likely appear in ffmpeg first.
In addition, you are facing the problem that animated GIF is a very old format, and it therefore does some rather strange things. Let me explain how it normally works:
You create a series of frames from first to last and put them on top of one another.
You make all the "future" frames invisible and set each one to appear at its specific time.
In order to make the file smaller, you look "below" the new frame, and if a pixel is the same as in the previous frame, you mark that particular pixel as transparent.
That third step is the only temporal compression done in an animated GIF; without it the file would be much larger (since every pixel would be saved again and again).
However, if you are unsure where the last break was, you cannot determine whether a pixel is the same as in the previous frames. After all, this particular frame might be the very first one in the file.
If the 5 MiB limit is soft enough to allow going slightly over it, you can probably put something together that keeps adding frame after frame while computing the resulting file size as it goes. As soon as a frame goes over the limit, stop and use the next frame as the starting point for the next file.
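That greedy "keep adding frames until the limit" idea can be sketched like this (a minimal Python sketch; `encode` stands in for whatever GIF encoder you use, e.g. a function that writes the frames to a `BytesIO` with Pillow's `Image.save(..., save_all=True, append_images=...)` and returns the bytes):

```python
def split_by_size(frames, encode, limit_bytes=5 * 1024 * 1024):
    """Greedily pack frames into chunks, starting a new chunk whenever
    encoding one more frame would push the output past limit_bytes."""
    chunks, current = [], []
    for frame in frames:
        trial = current + [frame]
        # Re-encode the whole chunk each time: O(n^2) and slow, but it
        # measures the real post-compression size, which is the point.
        if len(encode(trial)) > limit_bytes and current:
            chunks.append(current)   # close the chunk before this frame
            current = [frame]
        else:
            current = trial
    if current:
        chunks.append(current)
    return chunks
```

A chunk can only exceed the limit if a single frame on its own does, so the "soft limit" caveat from the answer still applies to pathological inputs.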

How to select the specific frame with object

I am detecting objects from a live camera using feature detection with an SVM. It reads every frame from the camera while predicting, which hurts its speed. I want it to select only the frames that contain an object and ignore the frames with none, such as an empty street or parked cars; it should only detect moving objects.
For example, if an object enters the camera's view in the 6th frame, it stays in view for many frames until it leaves the camera's range, so the program should not recount the same object, and should ignore those frames.
Explanation:
I am detecting vehicles in a video and want to ignore the empty frames, but how do I ignore them? I only want to check the frames that contain an object such as a vehicle. But if a vehicle passing through the video takes, let's assume, 5 seconds, the same object spans 10 frames, and the program counts it as 10 vehicles, one per frame. I want to count it as 1, because it is the same vehicle appearing across those 10 frames.
My video is already in background-subtracted form.
I have explored two techniques:
1- Entropy ( Frame subtraction )
2- Keyframe extraction
This question is confusingly worded. What output do you want from this analysis? Here are the stages I see:
1) I assume each frame gives you an (x,y) or null for the position of each object in the frame. Can you do this?
2) If you might get multiple objects in a frame, you have to match them with objects in the previous frame. If this is not a concern, skip to (3). Otherwise, assign an index to each object in the first frame they appear. In subsequent frames, match each object to the index in the previous frame based on (x,y) distance. Clumsy, but it might be good enough.
3) Calculating velocity. Look at the difference in (x,y) between this frame and the last one. Of course, you can't do this on the first frame. Maybe apply a low-pass filter to position to smooth out any jittery motion.
4) Missing objects. This is a hard one. If your question is how to treat empty frames with no object in them, then I feel like you just ignore them. But, if you want to track objects that go missing in the middle of a trajectory (like maybe a ball with motion blur) then that is harder. If this is what you're going for, you might want to do object matching by predicting the next position using position, velocity, and maybe even object characteristics (like a histogram of hues).
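Steps (2) and (3) above can be sketched with a simple nearest-neighbour tracker. It also answers the original "count each vehicle once" question, since the result is the number of distinct tracks (the function name and the `max_dist` threshold are illustrative assumptions):

```python
import math

def count_vehicles(frames_detections, max_dist=50.0):
    """Count distinct objects across frames by nearest-neighbour matching.
    frames_detections: per-frame lists of (x, y) detection centroids.
    An object seen in many consecutive frames is counted once."""
    next_id = 0
    tracks = {}                       # track id -> last known (x, y)
    for detections in frames_detections:
        unmatched = dict(tracks)      # tracks not yet claimed this frame
        new_tracks = {}
        for (x, y) in detections:
            # Match to the closest surviving track within max_dist.
            best, best_d = None, max_dist
            for tid, (px, py) in unmatched.items():
                d = math.hypot(x - px, y - py)
                if d < best_d:
                    best, best_d = tid, d
            if best is None:
                best = next_id        # no match: a new object entered
                next_id += 1
            else:
                del unmatched[best]   # each track claims one detection
            new_tracks[best] = (x, y)
        tracks = new_tracks           # tracks absent this frame are dropped
    return next_id
```

A vehicle detected in 10 consecutive frames stays matched to the same track, so it contributes 1 to the count; empty frames simply contribute no detections.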
I hope this was helpful.
You need an object tracker (many examples of tracking code can be found on the web). Then what you are looking for is the number of tracks. That's your answer.

Animated GIF - avoid storing repeated frames twice

I have an animated gif much like this one, where the cyclic nature of the animation causes some frames to be repeated twice within one full loop of the animation.
(From here)
I am currently storing each frame separately in the gif. Is it possible to only store each repeated frame once, to effectively halve the storage space required?
I am creating my gif in MATLAB using the movie2gif converter, but would be happy with an alternative method for gif creation or a post-processing tool.
EDIT
What I mean by the frame repetition is best explained in the context of this example image. There is a frame shown just as the left-hand ball leaves the row of balls. That exact frame is repeated a few frames later, when the left-hand ball is on its way back to hit the row of balls again. Because of the ordering of frames, it is currently necessary to store this frame twice.
To clarify what I am looking for: I want a way of saving the gif (or post-processing the gif) such that I can keep the full animation sequence (e.g. of 30 frames), but frames which repeat are soft-linked back to the first showing of them, thus removing the need to store them twice.
Judging from the description of movie2gif and its input arguments, it does not appear to be possible. Furthermore, having read how GIFs (and LZW compression) work, I suspect it is not even possible to reduce the size of a GIF this way.
If you want to save only the images that are minimally required and don't mind building the image before you can see it, then you can just store each image and an indexing vector.
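The image-plus-indexing-vector idea might look like this in Python (a minimal sketch; frame data is assumed to be hashable, e.g. raw bytes):

```python
def dedupe_frames(frames):
    """Store each distinct frame once, plus an index vector that
    replays the original sequence."""
    unique, index_of, order = [], {}, []
    for f in frames:
        if f not in index_of:           # first time we see this frame
            index_of[f] = len(unique)
            unique.append(f)
        order.append(index_of[f])       # reference, not a second copy
    return unique, order

def replay(unique, order):
    # Rebuild the full 30-frame (or whatever length) sequence on demand.
    return [unique[i] for i in order]
```

This halves the storage for a perfectly palindromic animation, but only on disk in your own format; as the answer notes, the GIF file itself still has to carry every displayed frame.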
In your case it may be possible to save only half of the frames and then play them in a cycle: forward-backward-forward... but I don't know whether the GIF format allows this.

Detect frames that have a given image/logo with FFmpeg

I'm trying to split a video by detecting the presence of a marker (an image) in the frames. I've gone over the documentation and I see removelogo but not detectlogo.
Does anyone know how this could be achieved? I know what the logo is and the region it will be on.
I'm thinking I can extract all frames to png's and then analyse them one by one (or n by n) but it might be a lengthy process...
Any pointers?
ffmpeg doesn't have any such ability natively. The delogo filter simply works by taking a rectangular region in its parameters and interpolating that region based on its surroundings. It doesn't care what the region contained previously; it'll fill in the region regardless of what it previously contained.
If you need to detect the presence of a logo, that's a totally different task. You'll need to create it yourself; if you're serious about this, I'd recommend that you start familiarizing yourself with the ffmpeg filter API and get ready to get your hands dirty. If the logo has a distinctive color, that might be a good way to detect it.
Since what you're after is probably going to just be outputting information on which frames contain (or don't contain) the logo, one filter to look at as a model will be the blackframe filter (which searches for all-black frames).
You can write a detect-logo module: decode the video (to YUV 420p), feed each raw frame to the module, and do a SAD (sum of absolute differences) over the region where you expect the logo. If the SAD is negligible, it's a match; record the frame number. You can then split the video at these frames.
SAD is done only on the Y (luma) plane. To save processing, you can scale the frames down to a lower resolution before comparing.
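A minimal sketch of that SAD check, assuming NumPy and frames already decoded to a Y (luma) plane; the function name and the threshold value are my own:

```python
import numpy as np

def logo_present(luma, template, x, y, threshold=8.0):
    """Mean absolute difference between the expected logo region of a
    decoded Y plane and a logo template; below threshold is a match.
    luma: 2-D uint8 array (full frame), template: 2-D uint8 array."""
    h, w = template.shape
    # Widen to int16 so the subtraction cannot wrap around in uint8.
    region = luma[y:y + h, x:x + w].astype(np.int16)
    sad = np.abs(region - template.astype(np.int16)).mean()
    return sad < threshold
```

Normalizing by the region size (using `.mean()` rather than the raw sum) keeps the threshold meaningful even after you downscale the frames.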
I have successfully detected logos using an RPi and a Coral AI accelerator in conjunction with ffmpeg to extract the JPEGs. Crop the image to just the logo, then feed it to your trained model. Even then, you will need to sample a minute or so of video to determine the actual logo's identity.
