I am converting MPGs into GIF files using ffmpeg. I need to limit the output to 2 MB at most, as that is the maximum GIF size the server supports. I have tried the -fs switch; however, it simply chops off the video once the size limit is reached. I would rather have the GIF scaled down and the quality adjusted. The source videos are limited to 5 seconds, so I'm not trying to fit a particularly long video into a small GIF. Some solutions I have seen involve checking the file size after conversion, deleting the output if it is too large, and trying again with modified properties until the desired file size is reached. Is there a more automated approach I could take with ffmpeg?
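The retry approach I've seen would look roughly like the sketch below: re-encode at progressively smaller widths until the output fits the limit. The input name, frame rate and the list of widths are just placeholders.

#!/bin/bash
# Sketch of the retry idea: re-encode at smaller widths until the GIF fits in 2 MB.
max_bytes=$((2 * 1024 * 1024))
for width in 480 360 240 160; do
    ffmpeg -y -i input.mpg -vf "fps=10,scale=${width}:-1" out.gif
    if [ "$(stat -c%s out.gif)" -le "$max_bytes" ]; then   # use stat -f%z on macOS
        break
    fi
done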
Related
This question and answer cover how to get the frame count and keyframe count from an AVI file, which is very useful. I've got a raw AVI file and want to count the number of keyframes (equivalent to non-dropped frames for a raw AVI), but processing through a raw AVI file takes a long time.
There must be some way to get this information without fully processing the file, as VirtualDub shows both the frame count and keyframe count, as well as the total keyframe size, almost instantly for a 25-second raw 1920x1080 AVI. But ffprobe requires count_frames to populate nb_read_frames, which takes a fair amount of processing time.
I can do some math with the file size and the frame width/height/format to get a fairly good estimate of the number of frames, but I'm worried that container overhead could be enough to throw the math off for very short clips. (For my 25-second clip, I get 1286.12 frames when there are really 1286.)
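For illustration, the estimate I'm doing is essentially the following, assuming a 2-bytes-per-pixel format such as YUY2; the file name is a placeholder.

bytes_per_frame=$((1920 * 1080 * 2))     # raw YUY2: 2 bytes per pixel (assumption)
file_bytes=$(stat -c%s capture.avi)      # container overhead makes the estimate slightly high
echo "approx frames: $((file_bytes / bytes_per_frame))"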
Any thoughts on whether there is a way to get this information programmatically with ffprobe or ffmpeg without processing the whole file? Or with another API on Windows?
I'm using ImageMagick version 7.0.5-4 to perform image processing operations such as crop and resize with the go-graphics library. I also manage a pool of MagickWand objects.
Features: Cipher DPC HDRI Modules
Delegates (built-in): bzlib freetype jng jpeg ltdl lzma png tiff xml zlib
The time to read an image into a MagickWand object with magickWand.ReadImage(<url>) is much higher for PNG images than for JPEG images. For images of around 22 KB, reading a JPEG takes around 300 ms, while reading a PNG takes around 1-2 minutes.
Edited:
When a single request is sent to the server, the read operation takes around 20 ms, but under a load of 100 rps it climbs to 2-4 minutes. This trend occurs only with PNG images, not with JPEG.
Any ideas on what can be done differently when reading PNG files, and how the reads can be made more performant? It's fine to reduce the quality of the images to around 60%. I tried options like SetImageDepth, but it made no difference.
The compression quality parameter has a different effect and meaning for PNG files than for JPEG files.
PNG compression is always lossless, so the appearance is never affected by the quality setting. As I cannot see your images, I would suggest you either don't bother compressing, since it will happen anyway, or use a quality of 75. If you tell me you are saving cartoons or line drawings, I might advise differently.
Please have a read here and do some experiments yourself with the tradeoff between time and filesize.
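A quick shell loop along these lines will show you the trade-off on your own images; the file names are placeholders and /usr/bin/time here is GNU time.

# Encode the same image at several quality settings and compare time and size.
for q in 10 25 50 75 90; do
    /usr/bin/time -f "JPEG q=$q: %e s" convert input.png -quality $q out-$q.jpg
    /usr/bin/time -f "PNG  q=$q: %e s" convert input.png -quality $q out-$q.png
done
ls -l out-*.jpg out-*.png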
I have made you some plots showing how the quality setting affects compression time and output size for two different kinds of images - a cartoon and a photo.
Here is a cartoon:
Look at how the quality setting (0-100) affects time and size with JPEG output:
Now look what happens if you use those same quality settings (0-100) when generating PNG output:
Now let's look at compressing an iPhone photo to JPEG:
And when compressing an iPhone photo to a PNG:
Hopefully you can see that using a single quality setting from your config file for both PNG and JPEG output, and for both photos and cartoons/line drawings, is not ideal.
Problem
I want to convert a long movie into a series of animated GIFs.
Each GIF needs to be <5MB.
Is there any way to determine how large a GIF will be while it is being encoded?
Progress So Far
I can split the movie into individual frames:
ffmpeg -i movie.ogv -r 25 frameTemp.%05d.gif
I can then use convert from ImageMagick to create GIFs. However, I can't find a way to determine the likely file size before running the command.
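For reference, the assembly step is something like the following; -delay is in hundredths of a second per frame, so 4 roughly matches 25 fps.

convert -delay 4 -loop 0 frameTemp.*.gif animation.gif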
Alternatively, I can split the movie into chunks:
ffmpeg -i movie.ogv -vcodec copy -ss 00:00:00 -t 00:20:00 output1.ogv
But I have no way of knowing whether, when I convert a chunk to a GIF, it will be under 5 MB.
A 10-second scene with a lot of action may be over 5 MB (bad!), while a static scene could be under 5 MB (not a problem, but not very efficient).
Ideas
I think that what I want to do is convert the entire movie into a GIF, then find a way to split it by file size.
Looking at ImageMagick, I can split a GIF into frames, but I don't see a way to split it into animated GIFs of a certain size / length.
So, is this possible?
There is currently no "stop at this file size" option in avconv that I'm aware of. It could, of course, be hacked together quite quickly, but the libav project doesn't do quick hacks, so it'll likely appear in ffmpeg first.
In addition to this, you are facing the problem that animated GIF is a very old format, and so it does some rather strange things. Let me explain how it normally works:
You create a series of frames from first to last and put them on top of one another.
You make all the "future" frames invisible, and set them to appear at their specific times.
In order to make the file smaller, you look "below" each new frame, and if a pixel is the same as in the previous frame, you make that pixel transparent in the new frame so the old one shows through.
That third step is the only temporal compression done in an animated GIF; without it, the file size would be much larger (since every pixel would have to be saved again and again).
However, if you are unsure where the last break falls, you cannot determine whether a pixel is the same as in the previous frames - after all, this particular frame may turn out to be the very first one in its file.
If the 5 MiB limit is soft enough to allow going a little over it, you can probably put something together that just keeps adding frame after frame and calculates the resulting file size as it goes. As soon as one frame pushes it over the limit, stop and use the next frame as the starting point for the next file.
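A minimal sketch of that idea, assuming the individual frames were already extracted (e.g. with the ffmpeg command from the question) and simply rebuilding the GIF from scratch after each added frame - inefficient, but it gives you the real file size at every step:

# Keep appending frames; when a part exceeds the limit, back off one frame and start the next part.
limit=$((5 * 1024 * 1024))
part=1
frames=()
for f in frameTemp.*.gif; do
    candidate=("${frames[@]}" "$f")
    convert -delay 4 -loop 0 "${candidate[@]}" "part${part}.gif"
    if [ "$(stat -c%s "part${part}.gif")" -gt "$limit" ] && [ "${#frames[@]}" -gt 0 ]; then
        convert -delay 4 -loop 0 "${frames[@]}" "part${part}.gif"   # rebuild the part that still fits
        part=$((part + 1))
        frames=("$f")                                               # current frame opens the next part
    else
        frames=("${candidate[@]}")
    fi
done
convert -delay 4 -loop 0 "${frames[@]}" "part${part}.gif"           # write the final part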
When I'm using a command like this in ffmpeg (or any other program):
ffmpeg -i input.mp4 image%d.jpg
The combined file size of all the images always tends to be larger than the video itself. I've tried reducing the frames per second, lowering the compression settings, adding blur, and everything else I can find, but the combined JPEGs always end up larger.
I'm trying to understand why this happens, and what can be done to match the size of the video. Are there other compression formats I can use besides JPEG or any settings or tricks I'm overlooking?
Is it even possible?
To simplify: when the video is encoded, only certain images (keyframes) are encoded as full images, like your JPEGs.
The rest are encoded as the difference between the current image and a reference frame, which for most scenes is much smaller than a whole image.
This is because video compression is applied not only image by image, but in the time dimension as well. So separate images will always be larger than the video. You can't do anything about that.
Lennart is correct, and if you want more detail you should take a look at http://en.wikipedia.org/wiki/Video_compression_picture_types#Summary
Basically, sequences of images are I-frames only, whereas videos can use I-frames, P-frames and B-frames depending on the codec and encode settings, which greatly improves compression efficiency.
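If you want to see this on one of your own files, ffprobe can print the picture type of every frame (note that it reads the whole stream, so it is slow on long videos); the file name is a placeholder:

ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv=p=0 input.mp4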
I'd like to save an existing image as a PNG or JPG at a given file size, e.g., 100 KB.
PNG uses lossless compression, so you cannot compress it below a certain level.
In .NET you can save a JPG with a chosen compression value and guess how big the file will be when completed.
http://msdn.microsoft.com/en-us/library/system.drawing.image.save(VS.80).aspx
- See the "Save JPEG image with compression value" section.
Also, you could resize the image dimensions to make it smaller.
Only with JPEG 2000 can you set the file size to a specific value. With JPEG you'll have to try different quality values, and with PNG you get one size for a given image and compression level - you can only resize the image, which will give you a smaller file.
You could also resize the image so that the uncompressed image would have the size you want, but PNG and especially JPG will then usually come out much smaller than that.
For PNG there isn't really a quality setting, so you can't really control the file size.
JPG has a quality setting that determines how good the image will look; lower quality settings result in smaller files. However, there is normally no option for "give me the quality needed for a file of size x".
You can achieve the same result with a rather inefficient approach: convert to JPG in memory, see how big the output is, adjust the quality up or down, and repeat until you get close enough. It might sound terrible, but if your images aren't too big, you may find no one notices the short delay while you do this.
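A minimal sketch of that loop, using ImageMagick from the shell rather than .NET just to show the idea; the target size and file names are placeholders:

# Step the JPEG quality down until the output fits the target size.
target=$((100 * 1024))
for q in 95 85 75 65 55 45 35 25; do
    convert input.png -quality $q output.jpg
    [ "$(stat -c%s output.jpg)" -le "$target" ] && break
done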