MP3 frames in an AVI file - data-structures

I have read the following document to understand how an AVI file is structured:
http://www.alexander-noe.com/video/documentation/avi.pdf
An AVI file is a container of streams.
An AVI file can contain an MP3 audio stream.
Here is how I have understood the data structures of an MP3 audio stream in an AVI file:
I have also read the following web page to understand how an MP3 file is structured:
http://en.wikipedia.org/wiki/MP3#File_structure
So an MP3 file is a sequence of MP3 frames.
Each MP3 frame is made up of a header and data.
To create an MP3 file from the MP3 stream in an AVI file, I guess the MP3 frame headers can be built from the data contained in the MPEGLAYER3FORMAT structure.
But I am wondering whether one audio chunk contains the data of exactly one MP3 frame.
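For reference, whether a chunk holds exactly one frame depends on how the muxer packed the stream; the MPEGLAYER3FORMAT fields nBlockSize and nFramesPerBlock describe how frames were grouped into blocks, so they are worth checking. A minimal C sketch (assuming MPEG-1 Layer III; the function name is just illustrative) that computes a frame's length from its 4-byte header, so it can be compared against the audio chunk sizes:

/* Sketch: length in bytes of one MPEG-1 Layer III frame, computed from its
   4-byte header. Compare this against the AVI audio chunk sizes to see
   whether one chunk really carries exactly one frame. */
#include <stdint.h>

static const int kBitrateKbps[16] =  /* MPEG-1 Layer III bitrate index table */
    {0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 0};
static const int kSampleRate[4] = {44100, 48000, 32000, 0};

/* Returns the frame length in bytes, or -1 if this is not a valid
   MPEG-1 Layer III frame header. */
int mp3_frame_length(const uint8_t h[4])
{
    if (h[0] != 0xFF || (h[1] & 0xE0) != 0xE0)           /* 11-bit sync word */
        return -1;
    if (((h[1] >> 3) & 0x03) != 0x03 || ((h[1] >> 1) & 0x03) != 0x01)
        return -1;                                       /* MPEG-1, Layer III only */
    int bitrate    = kBitrateKbps[(h[2] >> 4) & 0x0F] * 1000;
    int samplerate = kSampleRate[(h[2] >> 2) & 0x03];
    int padding    = (h[2] >> 1) & 0x01;
    if (bitrate == 0 || samplerate == 0)
        return -1;
    return 144 * bitrate / samplerate + padding;         /* Layer III frame-length formula */
}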

Old post, but for other coders: a really good way to learn how a specific multimedia format is laid out without reading the AVI or MP3 bible is to play with ffmpeg.
Choose an MP3 file of about 10 seconds.
Choose a video encoded with exactly the codec you want to use.
Mix them using this command:
ffmpeg -i mysound.mp3 -i myvideo.avi -acodec copy -vcodec copy myResult.avi
...then observe how the MP3 has been placed in the AVI file using a hexadecimal editor.
You'll notice that the sound has been placed in several chunks after each video frame, and that an index covering both the video frames and the audio chunks has been built at the end of the AVI. Once you're comfortable with the notion of an index in this specific context (this particular codec mixed with an MP3), you can build your own files by copying the pattern.
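If you prefer to inspect the result programmatically rather than in a hex editor, a rough chunk walker makes the interleaving and the trailing index easy to see. A C sketch (minimal error handling; RIFF sizes are little-endian and chunks are padded to even length):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint32_t rd32(FILE *f)                 /* read a little-endian uint32 */
{
    uint8_t b[4] = {0};
    fread(b, 1, 4, f);
    return b[0] | (b[1] << 8) | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

static void walk(FILE *f, long end, int depth)
{
    while (ftell(f) + 8 <= end) {
        char id[5] = {0};
        fread(id, 1, 4, f);
        uint32_t size = rd32(f);
        if (!memcmp(id, "RIFF", 4) || !memcmp(id, "LIST", 4)) {
            char type[5] = {0};
            fread(type, 1, 4, f);             /* list type, e.g. 'AVI ' or 'movi' */
            printf("%*s%s %s  %u bytes\n", depth * 2, "", id, type, size);
            walk(f, ftell(f) + size - 4, depth + 1);
        } else {
            printf("%*s%s  %u bytes\n", depth * 2, "", id, size);
            fseek(f, size + (size & 1), SEEK_CUR);  /* skip chunk data, padded to even */
        }
    }
    fseek(f, end, SEEK_SET);
}

int main(int argc, char **argv)
{
    FILE *f = fopen(argc > 1 ? argv[1] : "myResult.avi", "rb");
    if (!f)
        return 1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    walk(f, size, 0);
    fclose(f);
    return 0;
}

Inside the 'movi' list you should typically see the video chunks ('00dc' or '00db') interleaved with the audio chunks ('01wb'), and after the 'movi' list an 'idx1' chunk holding the index.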

Related

How to download a live stream that consists of separate .ts and .aac segments?

There is an m3u8 file containing only the links to the video segments, and a different one with only the audio segments. Because it is a live stream, I have to start downloading both the video and the audio streams at the same time.
When I just write "ffmpeg input output" for the video and the same command for the audio on the following line, the program tries to download the video "until the end" before starting on the audio stream, which naturally does not work since the live stream is indefinite.
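One approach worth trying (a sketch, not tested against these particular playlists; the file names and output container are placeholders) is to let a single ffmpeg process open both m3u8 playlists at once and mux them together as it downloads, instead of running two sequential commands:
ffmpeg -i video.m3u8 -i audio.m3u8 -map 0:v -map 1:a -c copy output.mkv
Since both inputs are read concurrently, neither stream has to finish before the other starts.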

Speed up video encoding

My task is to create HTML5-compatible video from input video (.avi, .mov, .mp4, etc.). My understanding is that my output should be .webm or .mp4 (H.264 video, AAC audio).
I use ffmpeg for the conversion and it takes a lot of time. I wonder if I could use ffprobe to test whether the input video is already "h264" and "aac", and if so just copy the video/audio into the output without modification.
That is, my idea is the following:
Get input video info using ffprobe:
ffprobe {input} -v quiet -show_entries stream=codec_name,codec_type -print_format json
The result would be JSON like this:
"streams": [
{codec_name="mjpeg",codec_type="video"},
{codec_name="aac",codec_type="audio"}
]
If the JSON says the video codec is h264, then I think I can just copy the video stream. If the JSON says the audio codec is aac, then I think I can just copy the audio stream.
The JSON above says that my audio is "aac", so I think I can copy the audio stream into the output, but the video stream still needs conversion. For the above example my ffmpeg command would be:
ffmpeg -i {input} -c:v libx264 -c:a copy output.mp4
The question is whether I can always use this idea to produce HTML5-compatible video, and whether this method will actually speed up the conversion.
The question is whether I can always use this idea to produce HTML5-compatible video
Probably, but some caveats:
Your output may use H.264 High profile, but your target device may not support that (though that is unlikely nowadays).
Ensure that the pixel format is yuv420p. If it is not, the video may not play and you will have to re-encode with -vf format=yuv420p. You can check it by adding pix_fmt to your -show_entries stream list.
If the file is directly from a video camera, or some other device with inefficient encoding, then the file size may be excessively large for your viewer.
Add -movflags +faststart to your command so the video can begin playback before the file is completely downloaded.
and whether this method will actually speed up the conversion.
Yes, because you're only stream copying (re-muxing), which is fast, rather than re-encoding some or all of the streams, which is slow.
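Putting those caveats together, a possible probe-then-copy sketch (the {input} placeholder follows the question's own notation) is to add pix_fmt to the probe and, when both codecs and the pixel format already match, copy everything:
ffprobe {input} -v quiet -show_entries stream=codec_name,codec_type,pix_fmt -print_format json
ffmpeg -i {input} -c:v copy -c:a copy -movflags +faststart output.mp4
If only the audio matches, fall back to the -c:v libx264 -c:a copy command from the question.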

What does Elementary Stream mean in terms of H.264?

I read what an Elementary Stream is on Wikipedia. A tool I am using, Live555, demands an "H.264 Video Elementary Stream File". So when exporting a video from a video application, do I have to choose specific settings to generate an "Elementary Stream"?
If you're using ffmpeg you could use something similar to the following:
ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -vcodec libx264 -f h264 test.264
You'll have to adapt the command line for the file type you're exporting the video from.
This generates a file containing H.264 access units where each access unit consists of one or more NAL units, with each NAL unit prefixed with a start code (00 00 00 01 or 00 00 01). You can open the file in a hex editor to take a look at it.
You can also create an H.264 elementary stream file (.264) from raw YUV input files by using the H.264 reference encoder.
If you copy the generated .264 file into the live555 testOnDemandRTSPServer directory, you can test streaming the file over RTSP/RTP.
Can you give some references to read more about NAL units / H.264 elementary streams? How can I quickly check whether a stream is an elementary stream?
Generally anything in a container (AVI or MP4) is not an elementary stream. The typical extension used for elementary streams is ".264". The quickest way to double-check that a file is an elementary stream is to open it in a hex editor and look for a start code at the beginning of the file (00000001). Note that there should be 3-byte (000001) and 4-byte (00000001) start codes throughout the file, before every NAL unit.
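If you want to script that check instead of opening a hex editor each time, here is a small C sketch; it only inspects the first four bytes, so it is a heuristic rather than a full parser:

#include <stdio.h>

/* Returns 1 if the file begins with an Annex B start code (000001 or 00000001),
   which is a strong hint that it is an H.264 elementary stream. */
int looks_like_annexb(const char *path)
{
    unsigned char b[4] = {0};
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    size_t n = fread(b, 1, 4, f);
    fclose(f);
    if (n >= 3 && b[0] == 0 && b[1] == 0 && b[2] == 1)
        return 1;                                   /* 3-byte start code */
    if (n == 4 && b[0] == 0 && b[1] == 0 && b[2] == 0 && b[3] == 1)
        return 1;                                   /* 4-byte start code */
    return 0;
}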
Why does live555 not play h264 streams which are not elementary?
This is purely because live555 has not implemented the required demultiplexers (e.g. for AVI or MP4). AFAIK live555 does support demuxing H.264 from the Matroska container.
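If your footage is already inside a container, you can usually extract the elementary stream for live555 without re-encoding; with ffmpeg something like the following should work (newer ffmpeg versions insert the bitstream filter automatically, so -bsf:v may be redundant):
ffmpeg -i input.mp4 -an -c:v copy -bsf:v h264_mp4toannexb output.264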

What parameters or software is best to use to convert .MP4 to .FLV

I'm on Windows 7 and I have many .MP4 videos that I want to convert to .flv. I have tried ffmpeg and Free FLV Converter, but each time the results are not what I'm looking for.
I want a video of the same quality (or close to it, still looking good) with a smaller file size, but right now, every time I try, the resulting video looks pretty bad and the file size actually increases.
How can I get a good-looking video, smaller in size, in .FLV?
Thanks a lot!
First, see slhck's blog post on superuser for a good FFmpeg tutorial. FLV is a container format and can support several different video formats such as H.264 and audio formats such as AAC and MP3. The MP4 container can also support H.264 and AAC, so if your input uses these formats then you can simply "copy and paste" the video and audio from the mp4 to the flv. This will preserve the quality because there is no re-encoding. These two examples do the same thing, which is copying video and audio from the mp4 to the flv, but the ffmpeg syntax varies depending on your ffmpeg version. If one doesn't work then try the other:
ffmpeg -i input.mp4 -c copy output.flv
ffmpeg -i input.mp4 -vcodec copy -acodec copy output.flv
However, you did not supply any information about your input, so these examples may not work for you. To reduce the file size you will need to re-encode. The link I provided shows how to do that. Pay special attention to the Constant Rate Factor section.
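As a rough starting point for the re-encode (tune -crf upward for a smaller file, downward for better quality; the audio bitrate is just an example), the command might look like:
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k output.flv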

Save Live Video Stream To Local Storage

Problem:
I have to save live video stream data which comes as RTP packets from an RTSP server.
The data comes in two formats: MPEG-4 and H.264.
I do not want to encode/decode the input stream.
I just want to write it to a file that is playable with the proper codecs.
Any advice?
Best Wishes
History:
My solutions and their problems:
First attempt: FFmpeg
I used the FFmpeg library to get the audio and video RTP packets.
But in order to write the packets I have to use av_write_frame,
which seems to mean that decoding/encoding takes place.
Also, when I give the output format as mp4 (av_guess_format("mp4", NULL, NULL)),
the output file is unplayable.
[Anyway, FFmpeg has poor documentation; it is hard to find what is wrong.]
Second attempt: DirectShow
Then I decided to use DirectShow. I found an RTSP source filter,
then a mux and a file writer,
and created a single graph:
RTSP Source --> MPEG MUX ---> File Writer
It worked, but the problem is that the output file is not playable unless the graph is stopped cleanly. If something happens, for example the graph crashes, the output file is not playable.
I am also able to write H.264 data, but the video is completely unplayable.
The MP4 file format has an index that is required for correct playback, and the index can only be created once you've finished recording. So any solution using MP4 container files (and other indexed files) is going to suffer from the same problem. You need to stop the recording to finalise the file, or it will not be playable.
One solution that might help is to break the graph up into two parts, so that you can keep recording to a new file while stopping the current one. There's an example of this at www.gdcl.co.uk/gmfbridge.
If you try the GDCL MP4 multiplexor, and you are having problems with H264 streams, see the related question GDCL Mpeg-4 Multiplexor Problem
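As noted above, the finalisation problem is specific to indexed containers such as MP4 and AVI. If the goal is simply a file that remains playable even when the recording stops unexpectedly, writing the copied streams into a container without a global index (MPEG-TS, for example) avoids the issue; with the ffmpeg command-line tool, a sketch (the URL is a placeholder) would be:
ffmpeg -rtsp_transport tcp -i rtsp://server/stream -c copy -f mpegts output.ts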
