I read what an Elementary Stream is on Wikipedia. A tool I am using, Live555, demands an "H.264 Video Elementary Stream File". So when exporting a video from a video application, do I have to choose specific preferences to generate an "Elementary Stream"?
If you're using ffmpeg you could use something similar to the following:
ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -vcodec libx264 -f h264 test.264
You'll have to adapt the command line for the file type you're exporting the video from.
This generates a file containing H.264 access units, where each access unit consists of one or more NAL units, each prefixed with a start code (0x000001 or 0x00000001). You can open the file in a hex editor to take a look at it.
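The start-code layout can be inspected programmatically as well as in a hex editor. A minimal Python sketch (assuming the whole file fits in memory) that splits an Annex B byte stream into NAL units; note that distinguishing a 4-byte start code from a payload byte that happens to be zero is done with a simple look-behind here:

```python
# Sketch: split an H.264 Annex B byte stream into NAL units by scanning
# for 3- and 4-byte start codes (0x000001 / 0x00000001).
def split_nal_units(data: bytes) -> list[bytes]:
    starts = []
    i = 0
    while i < len(data) - 2:
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            # Treat a preceding zero byte as part of a 4-byte start code.
            begin = i - 1 if i > 0 and data[i - 1] == 0 else i
            starts.append((begin, i + 3))  # (start-code begin, payload begin)
            i += 3
        else:
            i += 1
    nals = []
    for n, (_, payload) in enumerate(starts):
        end = starts[n + 1][0] if n + 1 < len(starts) else len(data)
        nals.append(data[payload:end])
    return nals

# Two NAL units: one behind a 4-byte start code, one behind a 3-byte one.
stream = b"\x00\x00\x00\x01\x67\x42\x00\x0a" + b"\x00\x00\x01\x68\xce\x38\x80"
print([nal.hex() for nal in split_nal_units(stream)])  # -> ['6742000a', '68ce3880']
```

Running this on a .264 file produced by the ffmpeg command above should yield an SPS (NAL type 7) and PPS (NAL type 8) near the start, followed by slice NAL units.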
You can also create an H.264 elementary stream file (.264) by using the H.264 reference encoder on raw YUV input files.
If you copy the generated .264 file into the live555 testOnDemandRTSPServer directory, you can test streaming the file over RTSP/RTP.
Can you give some references to read more about NAL / H.264 elementary Stream. How can I quickly check if the stream is an elementary stream?
Generally anything in a container (AVI or MP4) is not an elementary stream. The typical extension used for elementary streams is ".264". The quickest way to double-check that a file is an elementary stream is to open it in a hex editor and look for a start code at the beginning of the file (00000001). Note that there should be 3-byte (000001) and 4-byte (00000001) start codes throughout the file, one before every NAL unit.
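The hex-editor check can also be scripted. A tiny Python sketch that only inspects the first bytes of the file (a container file starts with its own magic instead, e.g. "RIFF" for AVI or "....ftyp" for MP4):

```python
# Sketch: does the head of the file look like an Annex B start code?
# Pass in the first few bytes of the file (e.g. open(path, "rb").read(4)).
def looks_like_elementary_stream(head: bytes) -> bool:
    return head.startswith(b"\x00\x00\x00\x01") or head.startswith(b"\x00\x00\x01")

print(looks_like_elementary_stream(b"\x00\x00\x00\x01\x67"))  # .264 file -> True
print(looks_like_elementary_stream(b"RIFF\x24\x00\x00\x00"))  # AVI file  -> False
```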
Why does live555 not play h264 streams which are not elementary?
This is simply because live555 has not implemented the required demultiplexer (e.g. for AVI or MP4). AFAIK live555 does support demuxing H.264 from the Matroska container.
My task is to create HTML5-compatible video from input video (.avi, .mov, .mp4, etc.). My understanding is that my output should be .webm or .mp4 (H.264 video, AAC audio).
I use ffmpeg for conversion and it takes a lot of time. I wonder if I could use ffprobe to test whether the input video is already H.264 and AAC, and if so, just copy the video/audio into the output without modification.
That is, I have the following idea:
Get input video info using ffprobe:
ffprobe {input} -v quiet -show_entries stream=codec_name,codec_type -print_format json
The result would be JSON like this:
"streams": [
    {"codec_name": "mjpeg", "codec_type": "video"},
    {"codec_name": "aac", "codec_type": "audio"}
]
If the JSON says the video codec is h264, I think I could just copy the video stream. If it says the audio codec is aac, I could just copy the audio stream.
The JSON above says my audio is "aac", so I think I could copy the audio stream into the output video, but I would still need to convert the video stream. For the above example my ffmpeg command would be:
ffmpeg -i {input} -c:v libx264 -c:a copy output.mp4
The question is whether I can always use this idea to produce HTML5-compatible video, and whether this method will actually speed up video conversion.
The question is whether I can always use this idea to produce HTML5-compatible video
Probably, but some caveats:
Your output may use H.264 High profile, but your target device may not support that (but that is not too likely now).
Ensure that the pixel format is yuv420p. If it is not, the video may not play, and you will have to re-encode with -vf format=yuv420p. You can check by adding pix_fmt to your -show_entries stream= entries.
If the file is directly from a video camera, or some other device with inefficient encoding, then the file size may be excessively large for your viewer.
Add -movflags +faststart to your command so the video can begin playback before the file is completely downloaded.
and whether this method will actually speed up video conversion.
Yes, because you're only stream copying (re-muxing) which is fast, and not re-encoding some/all streams which is slow.
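The probe-then-decide logic can be sketched in a few lines of Python. The helper name build_codec_args is hypothetical, and the ffprobe JSON is inlined as a string for the example (in practice you would capture ffprobe's stdout, e.g. via subprocess); the copy/re-encode rules mirror the caveats above:

```python
import json

# Sketch: decide per-stream whether ffmpeg can stream-copy, based on the
# JSON that ffprobe prints with -show_entries/-print_format json.
probe_output = """
{"streams": [
    {"codec_type": "video", "codec_name": "h264", "pix_fmt": "yuv420p"},
    {"codec_type": "audio", "codec_name": "aac"}
]}
"""

def build_codec_args(probe_json: str) -> list[str]:
    args = []
    for stream in json.loads(probe_json)["streams"]:
        if stream["codec_type"] == "video":
            # Copy only if it is already H.264 in a web-safe pixel format.
            if stream["codec_name"] == "h264" and stream.get("pix_fmt") == "yuv420p":
                args += ["-c:v", "copy"]
            else:
                args += ["-c:v", "libx264", "-vf", "format=yuv420p"]
        elif stream["codec_type"] == "audio":
            args += ["-c:a", "copy" if stream["codec_name"] == "aac" else "aac"]
    return args + ["-movflags", "+faststart"]

print(build_codec_args(probe_output))
# -> ['-c:v', 'copy', '-c:a', 'copy', '-movflags', '+faststart']
```

The returned list would be spliced into the ffmpeg command between the input and output file arguments.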
I have the code of a simple h264 encoder, which outputs a raw 264 file. I want to extend it to directly output the video in a playable container; it doesn't matter which one as long as it is playable by VLC. So, what is the easiest way to include a wrapper around this raw H264 file?
Everywhere I looked on the web, people used ffmpeg and libavformat, but I would prefer to have standalone code. I do not want fancy stuff like audio, subtitles, chapters, etc., just the video stream.
Thanks!
You can output a .264 directly by writing the elementary stream to a file in Annex B format. That is, write each NALU to the file prefixed by a start code (0x00000001). But make sure the stream writes the SPS and PPS before the first IDR.
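A minimal Python sketch of that writing pattern; the function name write_annexb is hypothetical, the encoder integration is left out, and sps, pps, and frames are assumed to be complete NAL units without start codes:

```python
# Sketch: write NAL units to a .264 file in Annex B format, each prefixed
# with a 4-byte start code, with SPS and PPS emitted first.
START_CODE = b"\x00\x00\x00\x01"

def write_annexb(path: str, sps: bytes, pps: bytes, frames: list[bytes]) -> None:
    with open(path, "wb") as f:
        # Parameter sets first, so a decoder can configure itself
        # before it sees the first IDR slice.
        f.write(START_CODE + sps)
        f.write(START_CODE + pps)
        for nalu in frames:
            f.write(START_CODE + nalu)
```

Usage would look like write_annexb("test.264", sps, pps, [idr, p_slice, ...]), and the result can be dropped into the live555 testOnDemandRTSPServer directory as described earlier in the thread.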
Container options: MKV, MPEG-TS, or MP4 (you can use libMP4v2).
I want to use ffmpeg to convert a raw YUV video file into a TS stream video file. So I do this in my code:
avcodec_find_encoder(AV_CODEC_ID_MPEG2TS);
But when I run it, I get:
[NULL @ 0x8832020] No codec provided to avcodec_open2()
If I change "AV_CODEC_ID_MPEG2TS" to "AV_CODEC_ID_MPEG2VIDEO", it works well and generates an .mpg file that plays fine too. So I want to ask: why can't I use "AV_CODEC_ID_MPEG2TS"?
I'm also looking into streaming a file with ffmpeg, so I'm not sure about this, but here is what I understand...
MPEG-TS (Transport Stream) is not a codec; it is an encapsulation method. So you have to encode the stream with some codec (I'm not sure if you can choose any codec) and then you can encapsulate it with MPEG-TS before transmitting over the network.
If you don't need to transmit the stream over the network maybe you don't need mpeg ts.
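To make the container-vs-codec distinction concrete: an MPEG-TS stream is just a sequence of fixed-size 188-byte packets, each beginning with sync byte 0x47, whatever codec is carried in the payload. A Python sketch that parses a few header fields of a synthetic packet:

```python
# Sketch: parse the 4-byte MPEG-TS packet header. The codec lives in the
# payload; the header only carries transport-level fields like the PID.
def parse_ts_header(packet: bytes) -> dict:
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a TS packet")
    return {
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],
        "payload_unit_start": bool(packet[1] & 0x40),
        "continuity_counter": packet[3] & 0x0F,
    }

# Synthetic packet: sync byte, payload_unit_start set, PID 0x0100.
pkt = bytes([0x47, 0x41, 0x00, 0x10]) + bytes(184)
print(parse_ts_header(pkt))
# -> {'pid': 256, 'payload_unit_start': True, 'continuity_counter': 0}
```

This is why AV_CODEC_ID_MPEG2TS is not a valid encoder ID: in libav the TS layer is a muxer (format), selected with avformat_alloc_output_context2 / the "mpegts" format name, not with avcodec_open2.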
I hope this is helpful!
Look here: ffmpeg doxygen
I have read the following document to understand how an AVI file is structured:
http://www.alexander-noe.com/video/documentation/avi.pdf
An AVI file is a container of streams.
An AVI file can contain a MP3 audio stream.
Here is how I have understood the data structures of an MP3 audio stream in an AVI file:
I have also read the following web page to understand how an MP3 file is structured:
http://en.wikipedia.org/wiki/MP3#File_structure
So a MP3 file is a sequence of MP3 frames.
Each MP3 frame is made up of a header and data.
To create an MP3 file from an MP3 stream in an AVI file, I guess that the MP3 headers can be built from data contained in the MPEGLAYER3FORMAT structure.
But I am wondering whether one audio chunk structure matches the data of one MP3 frame.
Old post, but for other coders, a really good way to obtain a specific multimedia format without reading the AVI or the MP3 bible is to play with ffmpeg.
Choose an MP3 file about 10 seconds long.
Choose a video exactly in the codec you want to use.
Mix them using this command:
ffmpeg -i mysound.mp3 -i myvideo.avi -acodec copy -vcodec copy myResult.avi
...then observe how the MP3 has been placed in the AVI file using a hexadecimal editor.
You'll notice that the sound has been placed in several chunks after each video frame, and that an index has been built at the end of the AVI for both the image frames and the audio samples. Once you're comfortable with the notion of index in this specific context (with this specific codec mixed with an MP3), then you can build your files by copying this pattern.
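The chunk pattern you will see in the hex editor can also be walked with a few lines of code. A Python sketch over synthetic bytes rather than a real AVI (real files nest these chunks inside RIFF/LIST "movi" structures, so you would first seek to that list):

```python
import struct

# Sketch: walk AVI-style chunks. Each chunk is a 4-byte FOURCC, a
# little-endian 32-bit size, then the data. In an AVI's movi list,
# '00dc' chunks carry compressed video frames and '01wb' chunks carry
# audio (here, MP3) data.
def walk_chunks(data: bytes, offset: int = 0):
    chunks = []
    while offset + 8 <= len(data):
        fourcc = data[offset:offset + 4].decode("ascii")
        size, = struct.unpack("<I", data[offset + 4:offset + 8])
        chunks.append((fourcc, size))
        offset += 8 + size + (size & 1)  # chunks are word-aligned
    return chunks

# Synthetic movi-style data: one video chunk, then one audio chunk
# (0xFFFB is a typical MP3 frame-sync pattern).
data = b"00dc" + struct.pack("<I", 4) + b"\xde\xad\xbe\xef" \
     + b"01wb" + struct.pack("<I", 2) + b"\xff\xfb"
print(walk_chunks(data))  # -> [('00dc', 4), ('01wb', 2)]
```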
I created a simple DirectShow source filter using FFmpeg. I read RTP packets from an RTSP source and give them to the decoder. It works for an H.264 stream.
MyRtspSourceFilter[H264 Stream] ---> h264 Decoder --> Video Renderer
The bad news is that it does not work for MPEG-4. I am able to connect my RTSP source filter to the MPEG-4 decoder. I get no exception, but the video renderer does not show anything. Actually it shows one frame and then nothing [it just stops]... The decoders and renderers are third party, so I cannot debug them.
MyRtspSourceFilter[MP4 Stream] ---> MPEG-4 Decoder --> Video Renderer
I can get RTP packets from the MPEG-4 RTSP source using FFmpeg successfully. There is no problem with that.
It seems that I have not set something in my RTSP source filter which is not necessary for the H.264 stream but may be important for the MPEG-4 stream.
What may cause this difference between the H.264 stream and the MPEG-4 stream in a DirectShow RTSP source filter? Any ideas?
More Info:
-- First, I tried some other RTSP source filters for the MPEG-4 stream... Although my RTSP source is the same, I see different subtypes in their pin connections.
-- Secondly, I got suspicious about whether the source is really MPEG-4, so I checked with FFmpeg... FFmpeg reports the source codec ID as "CODEC_ID_MPEG4".
Update:
[ Hack ]
I just set m_bmpInfo.biCompression = DWORD('xvid') and it worked fine... But it is static. How can I dynamically get/determine this value using FFmpeg or other means?
I am on the RTSP server side, a different use case with frame-by-frame conversions required:
MP4 file ---> MPEG-4 Decoder --> H264 Encoder --> RTSP Stream
I will deploy libav, which is the core of ffmpeg.
EDIT:
With an H.264-encoded video layer, the video just needs to be remuxed from the length-prefixed file format ("AVCC") to the byte-stream format described in Annex B of the H.264 (MPEG-4 Part 10) specification. libav provides the required bitstream filter, "h264_mp4toannexb".
MP4 file ---> h264_mp4toannexb_bsf --> RTSP Stream
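A Python sketch of the core transformation that h264_mp4toannexb performs, over synthetic NAL payloads (a real implementation must also emit the SPS/PPS stored in the MP4's avcC box, and read the actual length-prefix size from there):

```python
import struct

# Sketch: MP4 ("AVCC") stores each NAL unit behind a length prefix
# (commonly 4 bytes); Annex B instead prefixes each NAL unit with a
# 0x00000001 start code.
def avcc_to_annexb(sample: bytes, length_size: int = 4) -> bytes:
    out, offset = b"", 0
    while offset + length_size <= len(sample):
        # Pad shorter prefixes up to 4 bytes before decoding big-endian.
        nal_len, = struct.unpack(">I", sample[offset:offset + length_size].rjust(4, b"\x00"))
        offset += length_size
        out += b"\x00\x00\x00\x01" + sample[offset:offset + nal_len]
        offset += nal_len
    return out

# Two length-prefixed NAL units of 2 and 3 bytes.
avcc = struct.pack(">I", 2) + b"\x68\xce" + struct.pack(">I", 3) + b"\x65\x88\x84"
print(avcc_to_annexb(avcc).hex())
```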
Now, for decoding RTSP:
Video and audio come in separate channels. Parsing and decoding the H.264 stream is done here: my basic h264 decoder using libav
Audio is a different thing:
The RTP transport suggests that AAC frames are encapsulated in ADTS, whereas RTSP players like VLC expect plain AAC; accordingly, available RTSP server implementations (AACSource::HandleFrame()) pinch the ADTS header off.
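A Python sketch of that header-pinching step; the ADTS header bytes below are synthetic, and only the syncword and protection_absent bit are actually inspected:

```python
# Sketch: an ADTS frame is a raw AAC frame behind a 7-byte header
# (9 bytes when a CRC is present, i.e. when the protection_absent bit
# is 0). Stripping the header leaves the plain AAC frame.
def strip_adts(frame: bytes) -> bytes:
    assert frame[0] == 0xFF and (frame[1] & 0xF0) == 0xF0, "no ADTS syncword"
    header_len = 7 if (frame[1] & 0x01) else 9  # protection_absent bit
    return frame[header_len:]

# Synthetic ADTS frame: syncword 0xFFF, protection_absent=1, dummy
# header bytes, then a 3-byte raw AAC payload.
adts = bytes([0xFF, 0xF1, 0x50, 0x80, 0x01, 0x3F, 0xFC]) + b"\x21\x10\x05"
print(strip_adts(adts).hex())  # -> 211005
```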
Another issue is timestamps and RTP:
VLC does not support compensating time offsets between audio and video. Nearly every RTSP producer or consumer has constraints or undocumented assumptions about time offsets; you might consider an additional delay pipe to compensate the offset of an RTSP source.