I have written Windows software that calls the FFmpeg DLLs to encode a sequence of images in a few different formats (animated GIF, animated PNG, MPEG-4, WMV, WebM). I need to ship the DLLs with my software, but they significantly increase the download size: even after zipping everything, they take it from around 5 MB to around 20 MB. This isn't a huge problem, but I'd like to get the download size down as much as possible.
How easy is it to build slimmed-down DLLs, and by roughly how much would I be able to reduce them? Note that I don't need any decoders and am only encoding those five formats. I'm not using any special filters, and the encoded videos don't have sound. I'd like to know whether it's worth it before I start trying to compile the FFmpeg source and play with configuration flags.
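For scale, the usual recipe is to start from --disable-everything and whitelist only the components you actually call. A minimal configure sketch (the component names below are my assumptions; verify them against ./configure --list-encoders and --list-muxers for your FFmpeg version):

./configure --disable-everything --disable-programs --disable-doc \
  --disable-network --enable-shared --enable-small \
  --enable-libvpx --enable-encoder=gif,apng,mpeg4,wmv2,libvpx_vp8 \
  --enable-muxer=gif,apng,mp4,asf,webm --enable-protocol=file

--enable-small additionally optimizes for binary size rather than speed. The final size still depends on the external libraries you link (libvpx for WebM, here), so the only reliable estimate is to build it and measure.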
I'm playing with image-optimized diffs for storing edits to artwork. Version control seems to treat images as binary blobs, which means changes to common compressed formats like PNG/JPEG rewrite ≈90% of the file, so updates eat roughly the same space as storing separate files anyway.
Instead of writing some bit-twiddling code myself, I had an idea. We already have highly optimized algorithms for storing differences between images: video codecs.
What video codecs out there allow for lossless reconstruction through their inter-predicted (P- and B-) frames? The most common ones all understandably err on the lossy side.
For example, HEVC has a lossless mode: the encoder finds the optimal inter or intra predictions and then losslessly codes the residual.
(Moved from a comment.)
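As a concrete sketch (not from the original post; filenames are placeholders), FFmpeg exposes lossless settings in its x264 and x265 wrappers, so a stack of PNG revisions can round-trip exactly:

ffmpeg -framerate 1 -i rev%03d.png -c:v libx264rgb -qp 0 history.mkv   # lossless H.264 in RGB
ffmpeg -i history.mkv out%03d.png                                      # decode the revisions back

Note the rgb variant of the encoder: a plain libx264 encode would first convert RGB to YUV, which is itself lossy. libx265 with -x265-params lossless=1 is the HEVC equivalent. Either way, verify a round trip with a pixel-wise compare before trusting it.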
I'd like to batch convert my entire photo library from NEF/RAW format to a format more suitable for storage. By that I mean I would like to keep the high-bit-depth data, potentially for future 'developing', but with a smaller file footprint. I realize I could just zip them into an archive, but I would prefer they stay in a browsable format.
I'm currently considering JPEG XR (i.e. HD Photo), since it supports HDR bit depths (giving me some good room to change exposures in the future) and decent enough lossy and lossless compression (though I'm not sure HDR will work with lossy). I'm also aware of the WebP format, but while its compression and quality are phenomenal, it will not keep my HDR data intact. I realize the demosaicing data is gone if I don't use NEF/RAW, but that's a compromise I'm willing to make as long as I can keep the higher bit depth. I've considered TIFF as well but ruled it out because it only supports lossless compression, which keeps the files large.
Can anyone recommend an alternative to these formats or perhaps comment on their own experience with the JXR format, specifically using the MS JXRlib?
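For what it's worth, a typical batch pipeline (a sketch only; I'm assuming dcraw for demosaicing and the jxrlib sample encoder, and the flags are worth checking against your versions) would develop each NEF to a 16-bit TIFF and then compress that to JXR:

dcraw -w -6 -T photo.nef                      # camera white balance, 16-bit, TIFF output
JxrEncApp -i photo.tiff -o photo.jxr -q 1.0   # jxrlib sample encoder; 1.0 should be lossless

The intermediate TIFF preserves the bit depth, so whatever JXR quality you pick is the only lossy step.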
Using ffmpeg I can take a number of still images and turn them into a video. I would like to do this to decrease the total size of all my timelapse photos. But I would also like to extract the still images for use at a later date.
In order to use this method:
- I will need to correlate the original still image against a frame number in the video.
- And I will need to extract a thumbnail of a given frame number in a video.
But before I go down this rabbit hole, I want to know if the requirements are possible using ffmpeg, and if so any hints on how to accomplish the task.
Note: the still images are a timelapse from a single camera over a day, so the temporal compression should yield measurable savings compared to a stack of JPEGs.
When you use ffmpeg to create a video from a sequence of images, the images aren't affected in any way. You should still be able to use them for what you're trying to do, unless I'm misunderstanding your question.
Edit: You can use ffmpeg to extract images from an existing video. I'm not sure how well it will work for your purposes, but the images are pretty high quality, though not identical to the originals. You'd have to play around with it to make sure the extracted images match the input images in sequential order and naming, but if you take the fps into account, it should work.
The command to do this (from the ffmpeg documentation) is as follows:
ffmpeg -i movie.mpg movie%d.jpg
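To pull one specific frame by number instead of dumping the whole sequence (a sketch; 1234 is a placeholder frame index), the select filter works:

ffmpeg -i movie.mpg -vf "select=eq(n\,1234)" -vframes 1 frame1234.jpg

Bear in mind that unless the video was encoded losslessly, the extracted JPEGs will be close to, but not bit-identical with, the original stills.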
I have a video sequence that I'd like to skip to specific frames at playback-time (my player is implemented using AVPlayer in iOS, but that's incidental). Since these frames will fall at unpredictable intervals, I can't use the standard "keyframe every N frames/seconds" functionality present in most video encoders. I do, however, know the target frames in advance.
In order to do this skipping as efficiently as possible, I need to force the target frames to be I-frames at encode time, ideally in some kind of GUI that would let me scrub to a frame, mark it as a keyframe, and then (re)encode my video.
If such a tool isn't available, I have the feeling this could probably be done by rolling a custom encoder with libavcodec, but I'd rather use a higher-level (and preferably scriptable) tool to do the job if a GUI isn't possible. Is this the kind of task ffmpeg or mencoder can be bent to?
Does anybody have a technique for doing this? It's also entirely possible that this is an impossible task because of some fundamental ignorance I have of the H.264 codec. If so, please do put me right.
ffmpeg has a -force_key_frames option that accepts a series of arbitrary timestamps as well as other ways to specify the frames. From the documentation:
-force_key_frames 0:05:00,...
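A full invocation might look like this (timestamps and filenames are placeholders; the video has to be re-encoded, since key frames can't be forced during a stream copy):

ffmpeg -i in.mp4 -force_key_frames 0:00:05,0:01:12,0:03:30 -c:v libx264 out.mp4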
Answered my own question: it's possible to set custom compression keyframes in Apple Compressor.
Compression markers are also known as manual compression markers. These are markers you can add to a Final Cut Pro sequence (or in the Compressor Preview window) to indicate when Compressor should generate an MPEG I-frame during compression.
Could you not use chapter markers to jump between sections? It's not an ideal solution, but it's a lot easier to achieve.
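If you go that route, ffmpeg can inject chapters from an FFMETADATA text file without re-encoding. A sketch (times and titles are placeholders, in the units set by TIMEBASE), saved as chapters.txt:

;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=65000
title=Part one
[CHAPTER]
TIMEBASE=1/1000
START=65000
END=180000
title=Part two

ffmpeg -i in.mp4 -i chapters.txt -map_metadata 1 -codec copy out.mp4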
You can use this software:
http://www.applesolutions.com/bantha/MH.html
So I have N (for example, 1000) JPEG frames and N/10 (for example, 100) seconds of MP3 sound. I need some container for joining them into one video file at 10 frames/second (popular containers like FLV, AVI, or MOV are preferred). What I need is an algorithm or code example for combining my data into some popular format. The code example should be in a language like C#, Java, ActionScript, or PHP, and the algorithm should be theoretically implementable in ActionScript or PHP.
Can anyone help me with that?
If you're more concerned about simplicity than anything else, Motion JPEG is probably what you want, combined with the MP3 in an AVI container.
Your best option really is to use an existing library to do the encoding, at least for the container. If you do it yourself, you're going to have to write a lot of code to handle things like interleaving video and audio, A/V sync, and so on.
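If a command-line tool is an option, ffmpeg can do the whole job in one line. JPEG frames are already valid MJPEG, so both streams can be copied into the AVI without re-encoding (a sketch; the frame-name pattern is an assumption):

ffmpeg -framerate 10 -i frame%04d.jpg -i sound.mp3 -c:v copy -c:a copy output.avi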