How to import .AVI files? - wolfram-mathematica

I am trying to import an .avi file for frame processing.
Import["c:\\windows\\clock.avi","Elements"]
Import["c:\\windows\\clock.avi","VideoEncoding"]
Import["c:\\windows\\clock.avi"]
Import["c:\\windows\\clock.avi",{"Frames",{5,6}}]
Out[115]= {Animation,BitDepth,ColorSpace,Data,Duration,FrameCount,FrameRate,
Frames,GraphicsList,ImageList,ImageSize,VideoEncoding}
Out[116]= rle8
Out[117]= {1,2,3,4,5,6,7,8,9,10,11,12}
During evaluation of In[115]:= Import::fmterr: Cannot import data as video format.
During evaluation of In[115]:= Import::fmterr: Cannot import data as video format.
Out[118]= {$Failed,$Failed}
It reports the same error with all avi files I tested.
Any hints?

AVI is a container format. You can encode movies with totally bizarre and rare codecs and still call the file .avi.
You could use a video format converter like Freemake to convert your movie into a format Mathematica can use. Check Internal`$VideoEncodings to see which encodings are recognized internally.
Quite often, QuickTime (.mov) is the easiest to work with. AVIs sometimes load just fine but don't display at all, even when I have the correct codec on board and all my players can play them.
If all else fails, you can try VirtualDub. It can read AVIs and split them into separate images, which can easily be imported into mma.
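Once the frames have been written out as individual image files, importing them is a one-liner. A minimal sketch (the directory and file pattern are made up; adjust them to wherever VirtualDub wrote the frames):
(* list the encodings Mathematica knows about, then import the extracted frames *)
Internal`$VideoEncodings
frames = Import /@ FileNames["*.png", "C:\\temp\\clockFrames"];
ListAnimate[frames]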
EDIT
I recall from my most recent video project a total failure to read the AVIs I got from having the Firefox plugin DownloadHelper download a certain YouTube movie (even though it played in all the players I have: VLC, Media Player Classic, Windows Media Player, etc.). A conversion by DownloadHelper to .mov worked, but it inserts its logo into the video. So finally I resorted to downloading with Freemake and converting to individual frames with VirtualDub.

Related

Concatenating Smooth Streaming output to a single MP4 file - problems with A/V sync. What is CodecPrivateData?

I have a video in fragmented form which is an output of an Azure Media Services Live Event (Smooth Streaming).
I'm trying to concatenate the segments to get a single MP4 file, however I've run into an A/V sync problem - no matter what I do (time-shifting/speeding up/slowing down/using FFmpeg filters), the audio delay keeps drifting. To get the output MP4 file, I tried concatenating the segments of the video and audio streams (both at the OS file level and with FFmpeg) and then muxing with FFmpeg.
I've tried everything I found on the web and I always end up with exactly the same result. What's important, when I play the source from the manifest file, it's all good. That made me skim through the manifest once again, and I realized there's a CodecPrivateData value which I'm not using anywhere in the process. What is it? Could it somehow help solve my problem?
Mystery solved: the manifest file contains the list of stream discontinuities, which need to be taken into account when concatenating the streams.

Lua script for mpv - different duration for each file in a directory

I have searched for and tried possible solutions for a Lua script that auto-loops some images from one directory. The result should be that these images are played by mpv (media player), each with a different duration.
I know there is an autoload script that picks up every image, but it shows each one for just 1 second.
https://github.com/mpv-player/mpv/blob/master/TOOLS/lua/autoload.lua
(working on windows 10 with the script directory for windows: C:\Users\Username\AppData\Roaming\mpv\scripts)
The following is not an exact answer, but it is closely related. I have often needed an image slideshow where the images are shown for variable durations, most often accompanied by audio. These solutions worked for me.
The Matroska format is very helpful for this. In mpv, I accomplished it with a Lua script, with the images as attachments and the duration list given in a tag. I don't use that approach actively because I cannot distribute it to others, but I found the following approach more portable.
This is the concept: you create an MJPEG video from all the JPEGs you want to show, then have a video player play it with a variable frame rate, specifying how long each frame should be shown. Only some container formats allow a variable frame rate; Matroska does. So wrap your MJPEG-encoded video, along with the timing information, in a Matroska container.
You can extract the JPEG images from MJPEG again without any loss.
I used these tools on Linux; I am not sure whether they exist for Windows. They are open-source tools.
This uses the variable-frame-rate ability of the Matroska container format.
Make an MJPEG video of all the JPEGs in the sequence you want.
You can use the ffmpeg tool to do that (a rough command sketch follows after these steps). Be careful with file naming: any gap in the number sequence is unforgivable for ffmpeg. You may need to specify a container format for the MJPEG-encoded video. I used the .mkv (Matroska) format; I think other formats can also be used.
Create the time-sequence file. Refer to the Matroska timestamp file format; I used the version-2 format. In that format you specify the time for each frame in milliseconds, one line per image frame, and the first line is a header specifying the version.
Create a Matroska container using mkvtoolnix-gui:
Add the MJPEG-encoded video file.
Specify the timestamp file.
Create an .mkv file.
The tool will extract the MJPEG-encoded video from the input container and, using the timestamps, create a new .mkv container.
Playing this .mkv container will show the images for the required durations. In the future, if required, you can extract the images from the MJPEG-encoded video without any loss.
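A rough sketch of those steps (the file names and durations are assumptions, and option spellings may differ between ffmpeg/mkvmerge versions):
# 1. mux the JPEGs (img001.jpg, img002.jpg, ...) into an MJPEG video without re-encoding
ffmpeg -framerate 1 -i img%03d.jpg -c:v copy frames.mkv
A version-2 timestamp file (say timestamps.txt) then lists one presentation time in milliseconds per frame, after a header line:
# timestamp format v2
0
3000
7500
9000
Finally, mkvmerge (the command-line counterpart of mkvtoolnix-gui; older versions call the option --timecodes) applies it to track 0 while remuxing:
mkvmerge -o slideshow.mkv --timestamps 0:timestamps.txt frames.mkv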

ffmpeg read the current segmentation file

I'm developing a system that uses ffmpeg to store video from some IP cameras.
I'm using the segmentation command to store a 5-minute video file per camera.
I have a WPF view where I can search historical videos by date. In this case I use ffmpeg's concat feature to generate a video of the desired duration.
All of this works excellently. My question is: is it possible to concatenate the current, in-progress segment file? For example, I need to search from date X up to the current time, but the last file has not been finished by ffmpeg yet, so when I concatenate the files the last one is missing because its segment is not complete.
I hope someone can give me some guidance on what I can do.
Some video formats remain playable while they are still being written. That is, you can make a copy of the unfinished segment directly and use it in the merge.
I suggest you use the FLV or TS format for this; MP4 is not suitable, because its index is normally written only when the file is finalized. Also note that there is a delay between encoding and the data actually being written to disk.
I'm not sure whether a direct copy will leave some broken data at the end of the unfinished segment file, but ffmpeg will ignore that part during the merge, so the merged video should be fine.
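A rough sketch of the idea (the stream URL, file names and segment length are made up; options may vary with the ffmpeg version):
# record the camera into 5-minute MPEG-TS segments
ffmpeg -i rtsp://camera/stream -c copy -f segment -segment_time 300 -reset_timestamps 1 cam1_%05d.ts
# copy the still-growing segment to a temporary name, list all wanted segments in list.txt
# (one line per file, e.g. file 'cam1_00000.ts'), then concatenate with the concat demuxer:
ffmpeg -f concat -safe 0 -i list.txt -c copy result.mp4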

Accessing & Manipulating video frames from .mp4 file in Windows Phone 7 app

As you may know, when you record a video on a Windows Phone, it is saved as an .mp4 file. I want to be able to access the video file (even if it's only stored in the app's isolated storage) and manipulate the pixel values for each frame.
I can't find anything that allows me to load a .mp4 into an app, then access the frames. I want to be able to save the manipulated video as .mp4 file as well, or be able to share it.
Has anyone figured out a good set of steps to do this?
My guess was to first load the .mp4 file into a Stream object. From here I don't know what exactly I can do, but I want to get it into a form where I can iterate through the frames, manipulate the pixels, then create a .mp4 with the audio again once the manipulation is completed.
I tried doing the exact same thing once. Unfortunately, there are no publicly available libraries that will help you with this. You will have to write your own code to do this.
The way to go about this would be to first read up on the storage format of mp4 and figure out how the frames are stored there. You can then read the mp4, extract the frames, modify them and stitch them back in the original format.
My biggest concern is that the hardware might not be powerful enough to accomplish this in a sufficiently small amount of time.

How to play multiple mp3/wma files at once?

I need to play multiple sound effects at once in my WP7 app.
I currently have it working with WAV files, which take up around 5 MB, instead of about 500 KB when encoded as WMA/MP3.
Current part of the code:
Stream stream = TitleContainer.OpenStream(String.Format("/location/{0}.wav", value));
SoundEffect effect = SoundEffect.FromStream(stream);
effect.Play();
This works great in a loop, preparing all effects, and then playing them.
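A minimal sketch of that loop (the asset names are hypothetical):
// needs: using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio;
var effects = new List<SoundEffect>();
foreach (var value in new[] { "shot", "explosion" })
{
    // FromStream copies the PCM data, so the stream can be closed right away
    using (var stream = TitleContainer.OpenStream(String.Format("/location/{0}.wav", value)))
        effects.Add(SoundEffect.FromStream(stream));
}
foreach (var effect in effects)
    effect.Play();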
However, I would really like to use mp3/wma/whatever-codec to slim my xap file down.
I tried to use MediaElement, but it appears you can't use that to play multiple files either. The XNA MediaPlayer can't be instantiated and, as far as I have experienced, can't be made to play multiple files at once.
The only solution I see left is that I somehow decode the mp3 to wav and feed that Stream to SoundEffect.
Any ideas on how to accomplish the multiple playback? Or suggestions on how to decode mp3 to wav?
On the conversion... sorry - but I don't think there's any API currently available for WMA or MP3 decoding.
Also, I don't think there are any implementations of MP3, WMA or Ogg decoders available in pure C# code - all of the ones I've seen use DirectShow or P/Invoke - e.g. see C# Audio Library.
I personally do expect audio/video compression/decompression to be available at some point in the near future in the WP7 APIs - but I can't guess when!
For some simple compression you can try things like shipping mono instead of stereo files, or shipping 8 bit rather than 16 bit audio files - these are easy to convert back to 16 bit (with obvious loss of resolution) on the phone.
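A sketch of that last idea (the method and variable names are mine; it assumes unsigned 8-bit PCM, which is what WAV files use for 8-bit audio):
// expand 8-bit unsigned PCM to 16-bit signed PCM so it can be handed to SoundEffect
static byte[] Expand8BitTo16Bit(byte[] pcm8)
{
    var pcm16 = new byte[pcm8.Length * 2];
    for (int i = 0; i < pcm8.Length; i++)
    {
        // 8-bit WAV samples are unsigned with silence at 128; 16-bit samples are signed, little-endian
        short sample = (short)((pcm8[i] - 128) << 8);
        pcm16[2 * i] = (byte)(sample & 0xFF);
        pcm16[2 * i + 1] = (byte)((sample >> 8) & 0xFF);
    }
    return pcm16;
}
The resulting buffer can then be passed to the SoundEffect(byte[], int, AudioChannels) constructor together with the original sample rate.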
Using compression like zip might also help for some sound effects... but I wouldn't expect it to be hugely successful.
