Format h264 detected only with low score of 1, misdetection possible - ffmpeg

There is an H264 RTP stream passing through my server. I capture every RTP packet, extract each NALU by stripping the RTP header, and write every NALU to the file record.h264, prepending the H264 start code 0x00000001. However, record.h264 cannot be played by ffplay and cannot be parsed by ffprobe. Where am I going wrong?
here is my record file:https://github.com/sshsu/record_h264_file

You might be skipping the decoding of the NAL header within the RtpPacket into its full format from the aggregated form.
I have an implementation in C# here:
https://github.com/juliusfriedman/net7mma_core/blob/master/RtspServer/MediaTypes/RFC6184Media.cs if it helps you.
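To illustrate the point about aggregated/fragmented forms: under RFC 6184, a NALU larger than one packet is sent as FU-A fragments (NAL type 28), and the real one-byte NAL header is split between the FU indicator and the FU header. Writing FU-A payloads to the file as-is, without rebuilding that header on the start fragment, yields exactly the kind of unparseable stream described above. A minimal sketch of the reconstruction (the function name is mine, not from any library):

```c
#include <stdint.h>
#include <stddef.h>

/* RFC 6184 FU-A: payload[0] = FU indicator (F, NRI, type = 28),
 * payload[1] = FU header (S, E, R, original NAL type).
 * On the fragment with the start bit set, the real one-byte NAL
 * header must be rebuilt before the fragment is written out. */
int fu_a_start(const uint8_t *payload, size_t len, uint8_t *nal_header_out)
{
    if (len < 2)
        return 0;
    uint8_t indicator = payload[0];
    uint8_t fu_header = payload[1];
    if ((indicator & 0x1F) != 28)   /* not an FU-A packet */
        return 0;
    if (!(fu_header & 0x80))        /* start bit not set */
        return 0;
    /* F and NRI come from the indicator, the type from the FU header. */
    *nal_header_out = (uint8_t)((indicator & 0xE0) | (fu_header & 0x1F));
    return 1;
}
```

Only the start fragment gets a start code and the rebuilt header; the remaining fragments are appended as raw payload bytes (minus their two FU bytes).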

Related

How to remove PES header from video stream

I have captured a video stream in H264 format from a multimedia device.
I want to remove the PES header (i.e. the first 14 bytes of each frame). Is it possible to do this with ffmpeg or any other tool? I tried searching online but didn't find anything. I thought of scripting it out and removing the header, but I don't know the frame format completely. I know I have to learn all the format details and I am going through them, but I need this quickly. Any suggestions/pointers?
ffmpeg -i [file or url] -codec copy out.264
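If the headers ever have to be stripped by hand instead, note that a PES header is not always 14 bytes: per MPEG-2 systems, it is 9 fixed bytes plus the PES_header_data_length byte at offset 8 (the 14 bytes seen here correspond to a 5-byte optional area, e.g. a PTS). A hedged sketch of computing the size to skip (function name is mine):

```c
#include <stdint.h>
#include <stddef.h>

/* Returns the total PES header size in bytes, or -1 if the buffer
 * does not start with a PES packet.  Layout: 3-byte start code
 * (0x000001), 1-byte stream id, 2-byte packet length, 2 flag bytes,
 * 1-byte PES_header_data_length, then that many optional bytes. */
int pes_header_size(const uint8_t *buf, size_t len)
{
    if (len < 9)
        return -1;
    if (buf[0] != 0x00 || buf[1] != 0x00 || buf[2] != 0x01)
        return -1;
    return 9 + buf[8];  /* fixed part + optional fields */
}
```

The ffmpeg command above is still the safer route, since it parses the container properly instead of assuming a fixed header length.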

Changing time base on an ffmpeg encode (libffmpeg)

I'm using libffmpeg to decode an RTSP video stream and write it to a file.
The time_base reported by the codec when I open the stream is 1 / 180000. When I create my output AVStream, I'm copying this time_base over to the output. It works, but I get this message when I call avformat_write_header:
"WARNING codec timebase is very high. If duration is too long, file may not be playable by quicktime. Specify a shorter timebase"
I tried specifying a shorter timebase (say, 1/30) but when I do this, the video plays back at the wrong speed.
What's the right way to adjust the time_base on my output stream without modifying the playback time?
Thanks.
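The usual approach is to pick a saner output time base (say 1/90000) and rescale every packet's timestamps into it, rather than copying the raw values; in libav terms that is av_packet_rescale_ts before writing each packet. The arithmetic FFmpeg's av_rescale_q performs is, in simplified form (truncating division; FFmpeg itself rounds to nearest):

```c
#include <stdint.h>

/* Simplified equivalent of av_rescale_q: convert a timestamp counted
 * in units of in_num/in_den seconds into units of out_num/out_den
 * seconds.  Truncates instead of rounding, unlike FFmpeg proper. */
int64_t rescale_ts(int64_t ts, int64_t in_num, int64_t in_den,
                   int64_t out_num, int64_t out_den)
{
    return ts * in_num * out_den / (in_den * out_num);
}
```

Because the timestamps are rescaled along with the time base, playback speed is preserved; setting time_base to 1/30 while leaving the 1/180000-based timestamps untouched is what causes the wrong playback speed.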

Stream H264 using live555 and FFmpeg

I am trying to stream data encoded with FFmpeg using live555. I have a custom frame source that sends the data to the sink, but I am unable to figure out how to set the SPS and PPS in the framer. I understand that extradata contains this information, but I only saw the SPS in it. Does extradata change while FFmpeg is encoding? If so, how and when do we need to update this information in the live555 framer?
Does anyone have a working sample that uses FFmpeg and live555 to stream H264?
Live555 is simply a streaming tool; it does not do any encoding.
The SPS and PPS are NAL units within the encoded H264 stream (or the output from your FFMPEG implementation) (see some info here: http://www.cardinalpeak.com/blog/the-h-264-sequence-parameter-set/).
If you want to change the SPS or PPS information you need to do it in FFMPEG.
Examples of FFMPEG and Live555 working together to stream MPG2 and H264 streams are here:
https://github.com/alm865/FFMPEG-Live555-H264-H265-Streamer/
As for streaming an H264 stream, you need to break the output from FFmpeg into NAL units before you send it off to the discrete framer for it to work correctly. You must also strip the start codes delimiting each NALU (i.e. remove the 0x00 0x00 0x00 0x01 prefix) before handing it over.
Live555 will automatically read these in and update as necessary.
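Breaking the encoder's Annex B output into NAL units means scanning for the 0x000001 (or 0x00000001) start codes and handing live555 the bytes between them, without the start codes themselves. A minimal scanner sketch (names are mine):

```c
#include <stdint.h>
#include <stddef.h>

/* Find the next Annex B start code (0x000001; a 0x00000001 start
 * code matches on its last three bytes) at or after offset `from`.
 * Returns the offset of the first payload byte after the start
 * code, or -1 if no further start code exists. */
long next_nalu(const uint8_t *buf, size_t len, size_t from)
{
    for (size_t i = from; i + 3 <= len; i++) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01)
            return (long)(i + 3);
    }
    return -1;
}
```

Each NALU then runs from one returned offset up to (but excluding) the next start code, which is exactly the payload a discrete framer expects.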

Container for a single h264 video stream

I have the code of a simple h264 encoder, which outputs a raw 264 file. I want to extend it to directly output the video in a playable container; it doesn't matter which one as long as it is playable by VLC. So, what is the easiest way to include a wrapper around this raw H264 file?
Everywhere I looked on the web, people used ffmpeg and libavformat, but I would prefer standalone code. I do not want fancy stuff like audio, subtitles, chapters, etc., just the video stream.
Thanks!
You can output a .264 file directly by writing the elementary stream to a file in Annex B format. That is, write each NALU to the file separated by start codes (0x00000001), and make sure the stream writes the SPS and PPS before the first IDR.
Alternatively, wrap it in a container such as MKV, MPEG-TS, or MP4 (you can use libMP4v2 for the latter).
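The Annex B write step described above amounts to emitting a 4-byte start code before every NALU; a sketch of appending one NALU into a caller-supplied buffer (names are mine):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Append one NALU to an Annex B buffer: a 0x00000001 start code
 * followed by the NALU bytes.  Returns the new write offset, or -1
 * if the output buffer is too small. */
long annexb_append(uint8_t *out, size_t out_cap, size_t pos,
                   const uint8_t *nalu, size_t nalu_len)
{
    static const uint8_t start_code[4] = {0x00, 0x00, 0x00, 0x01};
    if (pos + 4 + nalu_len > out_cap)
        return -1;
    memcpy(out + pos, start_code, 4);
    memcpy(out + pos + 4, nalu, nalu_len);
    return (long)(pos + 4 + nalu_len);
}
```

Per the answer above, write the SPS and PPS NALUs first, then the IDR and the rest of the stream, and VLC will play the resulting .264 file.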

What is the minimum amount of metadata needed to stream only video, using libx264 to encode at the server and libffmpeg to decode at the client?

I want to stream video (no audio) from a server to a client. I will encode the video using libx264 and decode it with ffmpeg. I plan to use fixed settings (at the very least they will be known in advance by both the client and the server). I was wondering if I can avoid wrapping the compressed video in a container format (like mp4 or mkv).
Right now I am able to encode my frames using x264_encoder_encode. I get a compressed frame back, and I can do that for every frame. What extra information (if anything at all) do I need to send to the client so that ffmpeg can decode the compressed frames, and more importantly how can I obtain it with libx264. I assume I may need to generate NAL information (x264_nal_encode?). Having an idea of what is the minimum necessary to get the video across, and how to put the pieces together would be really helpful.
I found out that the minimum amount of information needed is the NAL units from each frame; this gives me a raw H264 stream. If I write this to a file, I can watch it using VLC by adding a .h264 extension.
I can also open such a file using ffmpeg, but if I want to stream it, then it makes more sense to use RTSP, and a good open source library for that is Live555: http://www.live555.com/liveMedia/
In their FAQ they mention how to send the output from your encoder to live555, and there is source code for both a client and a server. I have yet to finish coding this, but it seems like a reasonable solution.
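Concretely, the extra information a decoder needs up front is the SPS (NAL type 7) and PPS (NAL type 8); libx264 can emit them ahead of any frame via x264_encoder_headers(). A NALU's type is simply the low five bits of its first byte after the start code, so checking that you really are sending SPS and PPS first can be sketched as (function name is mine):

```c
#include <stdint.h>

/* H.264 NAL unit types (subset). */
enum { NAL_IDR = 5, NAL_SPS = 7, NAL_PPS = 8 };

/* The NAL unit type is the low 5 bits of the first byte
 * following the Annex B start code. */
int nal_type(uint8_t first_byte)
{
    return first_byte & 0x1F;
}
```

Typical SPS and PPS NALUs start with 0x67 and 0x68 respectively (type bits 7 and 8 with NRI set); if those are present before the first IDR (type 5), ffmpeg can decode the raw stream with no container.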
