What container is easiest for combining JPEGs and MP3s as video?

So I have N (for example, 1000) JPEG frames and N/10 (for example, 100) seconds of MP3 sound. I need some container for joining them into one video file at 10 frames/second, preferably a popular container like FLV, AVI, or MOV. So what I need is an algorithm or code example for combining my data into some popular format. The code example should be in a language like C#, Java, ActionScript, or PHP, and the algorithm should be theoretically implementable in ActionScript or PHP.
Can anyone please help me with that?

If you're more concerned about simplicity than anything else, Motion JPEG is probably what you want, combined with the MP3 in an AVI container.
Your best option, though, is really to use an existing library to do the encoding, at least for the container. If you do it yourself, you're going to have to write a lot of code to handle things like interleaving video and audio, A/V sync, and so on.
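If ffmpeg is available, a single command can do the muxing, and you could invoke it as an external process from C#, Java, or PHP. A minimal sketch, assuming your frames are named frame0001.jpg, frame0002.jpg, and so on (since JPEG data is already valid MJPEG, -c:v copy avoids re-encoding; switch to -c:v mjpeg if your player is picky):
ffmpeg -framerate 10 -i frame%04d.jpg -i audio.mp3 -c:v copy -c:a copy -shortest output.avi
The -shortest flag trims the output to the shorter of the two streams, which keeps the audio and video durations matched.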

Related

Reducing FFmpeg dlls to only what is used?

I have written Windows software that calls the FFmpeg dlls to encode a sequence of images in a few different formats (animated gif, animated png, mpeg4, wmv, webm). I need to provide the dlls with my software but they significantly increase the download size. Even after zipping everything they increase the size from around 5MB to around 20MB. This isn't a huge problem but I'd like to get the download size down as much as possible.
How easy is it to do this, and by roughly how much would I be able to reduce them? Note that I don't need any decoders and am only encoding those five formats. I'm not using any special filters, and the encoded videos don't have sound. I'd like to know whether it's worth it before I start compiling the FFmpeg source and playing with configuration flags.
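For what it's worth, FFmpeg's configure script can start from nothing and whitelist only the components you need, which is exactly this use case. A hedged sketch; the encoder and muxer names below are assumptions (check ffmpeg -encoders and ffmpeg -muxers for what your five formats actually require), and you may still need swscale for pixel-format conversion:
./configure --disable-everything --disable-network --disable-doc \
    --enable-shared --enable-small --enable-libvpx \
    --enable-encoder=gif --enable-encoder=apng --enable-encoder=mpeg4 \
    --enable-encoder=wmv2 --enable-encoder=libvpx_vp9 \
    --enable-muxer=gif --enable-muxer=apng --enable-muxer=mp4 \
    --enable-muxer=asf --enable-muxer=webm --enable-protocol=file
Note that --enable-small also trades some encoding speed for binary size.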

How would I create a radially offset mosaic of rtsp streams that transitions to a logo

I'm new to Stack Overflow, but I've been researching how to do this for a couple of weeks to no avail. I'm hoping perhaps one of you has some knowledge I haven't seen online yet.
Here is a crude illustration of what I hope to accomplish. I have a video wall of eight monitors - four each of two different sizes. The way it's set up now, all eight monitors are treated together as one big monitor displaying an oddly shaped cutout of a desktop.
Eventually I need each individual monitor to display a separate RTSP stream for about thirty seconds, then have the entire display, all eight monitors in conjunction, fade out into a large logo.
My problem right now is that I don't know of a way to mask an RTSP stream so it looks like this rather than this, let alone how to arrange the streams into a weirdly spaced, oddly angled, multiple-aspect-ratio mosaic like in the original illustration.
Thank you all for your time. I'm just an intern here without insane technical knowhow, but I'll try to clarify as much as I can.
-J
I believe -filter_complex is one of the ffmpeg CLI flags that you need. You can find many examples online, but here are a few links of interest:
Here's an ffmpeg wiki page on creating a mosaic: https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
FFMpeg - Combine multiple filter_complex and overlay functions
That should get you started, but you will probably need to add customization depending on frame size and formats.
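As a concrete starting point, here's a hedged two-input sketch; the RTSP URLs, canvas size, and positions are placeholders, and you'd extend the same pattern to eight inputs (a fade filter on the final output can handle the logo transition):
ffmpeg -rtsp_transport tcp -i rtsp://camera1/stream -i rtsp://camera2/stream \
    -filter_complex "nullsrc=size=1920x1080[base]; [0:v]scale=960:540[left]; [1:v]scale=960:540[right]; [base][left]overlay=0:0[tmp]; [tmp][right]overlay=960:0" \
    -c:v libx264 output.mp4
Non-rectangular masks are harder; overlaying a PNG with transparency on top of the mosaic is one common workaround.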

Video formats with lossless interpolation frames?

I'm playing with image-optimized diffs for storing edits to artwork. Version control seems to treat images as binary blobs, which means changes to common compressed formats like PNG/JPEG rewrite ≈90% of the file, so updates eat roughly the same space as storing separate files anyway.
Instead of writing some bit-twiddling code myself, I had an idea. We already have highly optimized algorithms for storing differences between images: video codecs.
What video codecs out there allow for lossless reconstruction through their interpolation (“b”) frames? The most common ones all understandably err on the lossy side.
For example, HEVC has a lossless mode: the encoder will find optimal inter or intra predictions and then losslessly code the residual.
(Moved from a comment.)
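With ffmpeg and x265 this is easy to experiment with; a minimal sketch, assuming the edit history is stored as numbered PNGs (the filenames and frame rate are placeholders, and -pix_fmt gbrp avoids a lossy RGB-to-YUV conversion):
ffmpeg -framerate 1 -i version%03d.png -c:v libx265 -pix_fmt gbrp -x265-params lossless=1 history.mkv
Decoding any frame should then reproduce the original pixels exactly, at the cost of decoding forward from the nearest keyframe.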

Still images to video for storage - But back to still images for viewing

Using ffmpeg I can take a number of still images and turn them into a video. I would like to do this to decrease the total size of all my timelapse photos. But I would also like to extract the still images for use at a later date.
In order to use this method:
- I will need to correlate the original still image against a frame number in the video.
- And I will need to extract a thumbnail of a given frame number in a video.
But before I go down this rabbit hole, I want to know if the requirements are possible using ffmpeg, and if so any hints on how to accomplish the task.
Note: the still images are a timelapse from a single camera over a day, so temporal compression should yield measurable savings compared to a stack of JPEGs.
When you use ffmpeg to create a video from a sequence of images, the images aren't affected in any way. You should still be able to use them for what you're trying to do, unless I'm misunderstanding your question.
Edit: You can use ffmpeg to create images from an existing video. I'm not sure how well it will work for your purposes, but the images are pretty high quality, though not bit-identical to the originals unless you encode with a lossless codec. You'd have to play around with it to make sure the extracted images match the input images in sequential order and naming, but if you take fps into account, it should work.
The command to do this (from the ffmpeg documentation) is as follows:
ffmpeg -i movie.mpg movie%d.jpg
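To pull a single frame by number rather than dumping the whole sequence, a select-filter sketch (the frame index 1234 is a placeholder, and ffmpeg counts frames from 0):
ffmpeg -i timelapse.mp4 -vf "select=eq(n\,1234)" -frames:v 1 frame1234.jpg
For bit-exact round trips, the video would need to be encoded losslessly in the first place, for example with -c:v libx264 -qp 0, which can still compress a timelapse well thanks to inter-frame prediction.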

Forcing custom H.264 intra-frames (keyframes) at encode-time?

I have a video sequence that I'd like to skip to specific frames at playback-time (my player is implemented using AVPlayer in iOS, but that's incidental). Since these frames will fall at unpredictable intervals, I can't use the standard "keyframe every N frames/seconds" functionality present in most video encoders. I do, however, know the target frames in advance.
In order to do this skipping as efficiently as possible, I need to force the target frames to be i-frames at encode time. Ideally in some kind of GUI which would let me scrub to a frame, mark it as a keyframe, and then (re)encode my video.
If such a tool isn't available, I have the feeling this could probably be done by rolling a custom encoder with libavcodec, but I'd rather use a higher-level (and preferably scriptable) tool to do the job if a GUI isn't possible. Is this the kind of task ffmpeg or mencoder can be bent to?
Does anybody have a technique for doing this? Also, it's entirely possible that this is an impossible task because of some fundamental ignorance I have of the H.264 codec. If so, please do put me right.
ffmpeg has a -force_key_frames option that accepts a series of arbitrary timestamps as well as other ways to specify the frames. From the documentation:
-force_key_frames 0:05:00,...
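A hedged full-command sketch (the timestamps and filenames are placeholders):
ffmpeg -i input.mov -c:v libx264 -force_key_frames 00:00:12.500,00:01:03.200 -c:a copy output.mp4
The listed times should be encoded as IDR frames during the re-encode, so a player can seek to them directly.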
Answered my own question: it's possible to set custom compression keyframes in Apple Compressor.
Compression markers are also known as manual compression markers. These are markers you can add to a Final Cut Pro sequence (or in the Compressor Preview window) to indicate when Compressor should generate an MPEG I-frame during compression.
Source.
Could you not use chapter markers to jump between sections? Not an ideal solution but a lot easier to achieve.
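If chapters work for you, ffmpeg can write them from a plain-text metadata file without re-encoding; a hedged sketch (the times and title are made up):
;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=30000
title=Scene 1
Then mux it in with: ffmpeg -i input.mp4 -i chapters.txt -map_metadata 1 -c copy output.mp4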
You can use this software:
http://www.applesolutions.com/bantha/MH.html
