Still images to video for storage - But back to still images for viewing - ffmpeg

Using ffmpeg I can take a number of still images and turn them into a video. I would like to do this to decrease the total size of all my timelapse photos. But I would also like to extract the still images for use at a later date.
In order to use this method:
- I will need to correlate the original still image against a frame number in the video.
- And I will need to extract a thumbnail of a given frame number in a video.
But before I go down this rabbit hole, I want to know if the requirements are possible using ffmpeg, and if so any hints on how to accomplish the task.
Note: the still images are a timelapse from a single camera over a day, so temporal compression should give measurable savings compared to a stack of JPEGs.

When you use ffmpeg to create a video from a sequence of images, the images aren't affected in any way. You should still be able to use them for what you're trying to do, unless I'm misunderstanding your question.
Edit: You can use ffmpeg to create images from an existing video. I'm not sure how well it will work for your purposes, but the extracted images are fairly high quality, though not identical to the originals. You'd have to play around with it to make sure the extracted images match the input images in sequential order and naming, but if you take the fps into account, it should work.
The command to do this (from the ffmpeg documentation) is as follows:
ffmpeg -i movie.mpg movie%d.jpg
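As a rough sketch of the round trip (assuming the stills are numbered sequentially, e.g. img00001.jpg, img00002.jpg, ...), you could encode them at a fixed frame rate and later pull out any single frame by number with the select filter:
ffmpeg -framerate 24 -i img%05d.jpg -c:v libx264 -crf 18 timelapse.mp4
ffmpeg -i timelapse.mp4 -vf "select=eq(n\,41)" -frames:v 1 thumb.jpg
With a constant frame rate, frame n in the video (counting from 0) corresponds to the (n+1)-th still, so keeping the original filenames in order is enough to correlate them. Bear in mind that lossy encoding means the extracted frame will not be bit-identical to the original JPEG.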

Related

Converting image sequence to video with inconsistent frame rate

I recently collected video data where the video was generated as image sequences. However, between different videos of the same length, different numbers of frames were acquired, which makes me think the image sequences have varying frame rates between videos. So my question is: how do I convert these image sequences back to video with accurate durations between frames? Is there a way to get that information from the date and time each image was created, using code? I know ffmpeg seems to be the tool many people use.
I am not sure where to start. I am not very familiar with coding, so I already have trouble running the right commands.
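One possible approach, assuming the capture time of each image is available (for example from file modification times or EXIF data; the filenames below are made up), is ffmpeg's concat demuxer, which lets you assign each frame an explicit duration in a small text file:
file 'img0001.jpg'
duration 0.042
file 'img0002.jpg'
duration 0.038
and then encode with:
ffmpeg -f concat -safe 0 -i list.txt -vsync vfr -pix_fmt yuv420p out.mp4
Generating list.txt from the timestamps still takes a small script or a tool such as exiftool, but once the durations are in the file, ffmpeg spaces the frames accordingly rather than assuming a constant frame rate.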

How would I create a radially offset mosaic of rtsp streams that transitions to a logo

I'm new to stack overflow, but I've been researching how to do this for a couple weeks to no avail. I'm hoping perhaps one of you has some knowledge I haven't seen online yet.
Here is a crude illustration of what I hope to accomplish. I have a video wall of eight monitors - four each of two different sizes. The way it's set up now, all eight monitors are treated together as one big monitor displaying an oddly shaped cutout of a desktop.
Eventually I need each individual monitor to display a separate RTSP stream for about thirty seconds, then have the entire display - all eight monitors in conjunction - to fade out into a large logo.
My problem right now is that I don't know of a way to mask an rtsp stream so it looks like this rather than this, let alone how to arrange them into a weirdly spaced, oddly angled, multiple aspect-ratio mosaic like in the original illustration.
Thank you all for your time. I'm just an intern here without insane technical knowhow, but I'll try to clarify as much as I can.
-J
I believe -filter_complex is one of the ffmpeg CLI flags that you need. You can find many examples online, but here are a few links of interest:
Here's an ffmpeg wiki on creating a mosaic https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
FFMpeg - Combine multiple filter_complex and overlay functions
That should get you started, but you will probably need to add customization depending on frame size and formats.
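As a minimal sketch (with hypothetical camera URLs and sizes), a two-stream side-by-side mosaic could look like this; a real eight-monitor layout would need one scale/overlay chain per stream inside the same graph:
ffmpeg -rtsp_transport tcp -i rtsp://camera1/stream -rtsp_transport tcp -i rtsp://camera2/stream -filter_complex "[0:v]scale=960:540[left];[1:v]scale=960:540[right];[left][right]hstack=inputs=2[out]" -map "[out]" -c:v libx264 -f matroska mosaic.mkv
The fade to a logo at the end can be layered on with the overlay and fade filters, but the odd spacing, angles and mixed aspect ratios in your illustration will take careful per-stream cropping, padding and rotation in the same filter graph.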

Split a movie so that each GIF is under a certain file size

Problem
I want to convert a long movie into a series of animated GIFs.
Each GIF needs to be <5MB.
Is there any way to determine how large a GIF will be while it is being encoded?
Progress So Far
I can split the movie into individual frames:
ffmpeg -i movie.ogv -r 25 frameTemp.%05d.gif
I can then use convert from ImageMagick to create GIFs. However, I can't find a way to determine the likely file size before running the command.
Alternatively, I can split the movie into chunks:
ffmpeg -i movie.ogv -vcodec copy -ss 00:00:00 -t 00:20:00 output1.ogv
But I have no way of knowing whether, when I convert the file to a GIF, it will be under 5MB.
A 10 second scene with a lot of action may be over 5MB (bad!) and a static scene could be under 5MB (not a problem, but not very efficient).
Ideas
I think that what I want to do is convert the entire movie into a GIF, then find a way to split it by file size.
Looking at ImageMagick, I can split a GIF into frames, but I don't see a way to split it into animated GIFs of a certain size / length.
So, is this possible?
There currently is no "stop at this file size" option in avconv that I'm aware of. It could, of course, be hacked together quite quickly, but the libav project doesn't do quick hacks at the moment, so it'll likely appear in ffmpeg first.
In addition to this, you are facing the problem that animated GIF is a very old format and does some rather strange things. Let me explain how it normally works:
You create a series of frames from first to last and put them on top of one another.
You make all the "future" frames invisible and set them to appear at specific times.
To make the file smaller, you look "below" each new frame, and wherever a pixel is the same as in the previous frame, you make that pixel transparent so the earlier pixel shows through.
That third step is the only temporal compression done in an animated GIF; without it, the file size would be much larger (since every pixel would have to be stored again and again).
However, if you are unsure where the last break will be, you cannot determine whether a pixel is the same as in the previous frames. After all, that particular frame might end up being the very first one in the next file.
If the 5MiB limit is soft enough to allow going a little over it, you can probably put something together that keeps adding frame after frame while tracking the running file size. As soon as it goes over the limit, stop and use the next frame as the starting point for the next file.
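As a very rough sketch of that idea (filenames and timings are made up), you could encode fixed-length chunks at a reduced frame rate and size, check each file's size, and then re-split or merge chunks that come out too large or too small:
for start in 0 10 20 30; do
  ffmpeg -ss "$start" -t 10 -i movie.ogv -vf "fps=10,scale=480:-1:flags=lanczos" "chunk_$start.gif"
done
ls -l chunk_*.gif
This approximates the frame-by-frame approach without writing a custom encoder, at the cost of a little trial and error around the 5MB boundary.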

Why are image sequences larger (in size) than the source videos?

When I'm using a command like this in ffmpeg (or any other program):
ffmpeg -i input.mp4 image%d.jpg
The combined file size of all the images always tends to be larger than the video itself. I've tried reducing the frames per second, lowering the compression settings, adding blur, and everything else I can find, but the JPEGs always end up being larger (combined) afterwards.
I'm trying to understand why this happens, and what can be done to match the size of the video. Are there other compression formats I can use besides JPEG or any settings or tricks I'm overlooking?
Is it even possible?
To simplify: when the video is encoded, only certain images (keyframes) are encoded as full images, like your JPEGs.
The rest are encoded as a difference from nearby frames, which for most scenes is much smaller than storing the whole image.
This is because in a video, compression is applied not only image by image but in the time direction as well. So separate images will always be larger than the video. You can't do anything about that.
Lennart is correct, and if you want more detail you should take a look at http://en.wikipedia.org/wiki/Video_compression_picture_types#Summary
Basically, sequences of images are I-frames only, whereas videos can use I-frames, P-frames and B-frames depending on the codec and encoding settings, which greatly improves compression efficiency.
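If the goal is simply to shrink the extracted images rather than match the video's size, two things worth trying (as a sketch, not a guarantee of parity) are lowering the JPEG quality and keeping only the keyframes, since those are the only full pictures in the stream anyway:
ffmpeg -i input.mp4 -q:v 10 image%d.jpg
ffmpeg -i input.mp4 -vf "select=eq(pict_type\,I)" -vsync vfr keyframe%d.jpg
For the MJPEG encoder, -q:v runs from about 2 (best) to 31 (worst), and the select expression above drops every frame that is not an I-frame.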

Detect frames that have a given image/logo with FFmpeg

I'm trying to split a video by detecting the presence of a marker (an image) in the frames. I've gone over the documentation and I see removelogo but not detectlogo.
Does anyone know how this could be achieved? I know what the logo is and the region it will be on.
I'm thinking I can extract all frames to png's and then analyse them one by one (or n by n) but it might be a lengthy process...
Any pointers?
ffmpeg doesn't have any such ability natively. The delogo filter simply takes a rectangular region as parameters and interpolates that region from its surroundings; it doesn't care what the region previously contained and will fill it in regardless.
If you need to detect the presence of a logo, that's a totally different task. You'll need to create it yourself; if you're serious about this, I'd recommend that you start familiarizing yourself with the ffmpeg filter API and get ready to get your hands dirty. If the logo has a distinctive color, that might be a good way to detect it.
Since what you're after is probably going to just be outputting information on which frames contain (or don't contain) the logo, one filter to look at as a model will be the blackframe filter (which searches for all-black frames).
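If a rough scripted check is enough, one hedged option (the crop coordinates here are made up, and logo.png is assumed to have the same dimensions as the cropped region) is to crop the area where the logo should appear, subtract a reference image of the logo, and let blackframe report the frames where the difference is nearly zero:
ffmpeg -i video.mp4 -loop 1 -i logo.png -filter_complex "[0:v]crop=120:60:1160:20[region];[region][1:v]blend=difference:shortest=1,blackframe=98:32" -f null -
Frames listed in the blackframe log are those where the cropped region is almost identical to logo.png, i.e. where the logo is present; anything more robust than that still means writing a custom filter as described above.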
You can write a detect-logo module: decode the video (YUV 4:2:0 format), feed the raw frames to this module, and compute a SAD (sum of absolute differences) over the region where you expect the logo; if the SAD is negligible, it's a match, so record the frame number. You can then split the video at these frames.
The SAD is computed only on the Y (luma) plane. To save processing, you can scale the video down to a lower resolution before analysing it.
I have successfully detected logos using a Raspberry Pi and a Coral AI accelerator in conjunction with ffmpeg to extract the JPEGs. Crop the image down to just the logo, then apply your trained model. Even then, you will need to sample a minute or so of video to determine the actual logo's identity.
