I need to decode an H264 stream that comes from a live DVR camera.
To simplify the example, I stored the raw stream from the DVR camera in the following file (test.h264): http://f.zins.com.br/test.h264
To decode the live stream, I followed this ffmpeg example: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/decode_video.c
If I open test.h264 with VLC, the images look perfect.
If I decode test.h264 with ffmpeg using avformat_open_input and avformat_find_stream_info, the images also look perfect.
But if I decode using the https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/decode_video.c example, the images are all distorted. I think this happens because the stream may carry audio along with the H264 video.
With ffmpeg debugging enabled, it prints many errors like these:
[h264 @ 092a9b00] Invalid NAL unit 0, skipping.
[h264 @ 092a9b00] Invalid NAL unit 0, skipping.
[h264 @ 092a9b00] Invalid NAL unit 0, skipping.
[h264 @ 092a9b00] error while decoding MB 16 1, bytestream -28
[h264 @ 092a9b00] Invalid NAL unit 8, skipping.
[h264 @ 092a9b00] Invalid NAL unit 8, skipping.
[h264 @ 092a9b00] Invalid NAL unit 8, skipping.
Is there a way for me to keep only the video and ignore the audio from a live stream?
Otherwise, is there any solution to decode test.h264 using the decode_video.c example without distorting the frames?
The distorted frames sometimes look like the image below, and sometimes they turn almost entirely gray.
The stream has an MPEG wrapper around the raw H264 packets, and you need to demux them first. If you cannot provide a URL with a protocol supported by ffmpeg (e.g. udp://), you should build a custom AVIOContext for your live stream and pass it to
avformat_open_input(&fmt_ctx, NULL, NULL, NULL)
similar to this example.
Now you can start the usual demuxer loop with
av_read_frame(fmt_ctx, &pkt)
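A minimal sketch of the whole flow (untested; read_live() is a placeholder for whatever callback pulls bytes from your DVR connection, and error handling/cleanup are abbreviated) could look like this. Keeping only packets whose stream_index matches the video stream also takes care of ignoring the audio:

/* Demux a live feed through a custom AVIOContext and decode only the video. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

extern int read_live(void *opaque, uint8_t *buf, int buf_size); /* your I/O callback */

int decode_video_only(void *opaque)
{
    const int buf_size = 4096;
    uint8_t *avio_buf = av_malloc(buf_size);
    AVIOContext *avio = avio_alloc_context(avio_buf, buf_size, 0 /* read-only */,
                                           opaque, read_live, NULL, NULL);
    AVFormatContext *fmt_ctx = avformat_alloc_context();
    fmt_ctx->pb = avio;

    if (avformat_open_input(&fmt_ctx, NULL, NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
        return -1;

    /* Pick the video stream; everything else (audio, data) is skipped below. */
    int vstream = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vstream < 0)
        return -1;

    const AVCodec *dec =
        avcodec_find_decoder(fmt_ctx->streams[vstream]->codecpar->codec_id);
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[vstream]->codecpar);
    avcodec_open2(dec_ctx, dec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == vstream && avcodec_send_packet(dec_ctx, pkt) >= 0)
            while (avcodec_receive_frame(dec_ctx, frame) >= 0)
                ; /* consume/display the decoded frame here */
        av_packet_unref(pkt);
    }
    /* cleanup (avformat_close_input, avcodec_free_context, ...) omitted */
    return 0;
}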
Related
I'm new to ffmpeg and trying to use it to record a streaming video (e.g. recording YouTube streams). However, I guess that sometimes, due to network issues, the last NAL unit is corrupted, and when I want to concat multiple videos together, the errors below occur and the process exits.
[NULL @ 0x558551957ec0] Invalid NAL unit size (41974 > 39166).bitrate=12309.0kbits/s speed=21.5x
[NULL @ 0x558551957ec0] missing picture in access unit with size 39182
[concat @ 0x55855194c700] h264_mp4toannexb filter failed to receive output packet
../filelist.txt: Invalid data found when processing input
So I'm wondering if there's a way to tell ffmpeg to skip the last NAL unit (or skip any invalid NAL unit) without re-encoding the entire video? Thanks in advance!
I am using ffmpeg to stream an H264-encoded AVI file to a player, and the player supports only packetization mode 0 (single NAL unit mode). But ffmpeg always uses packetization mode 1 and sends FU-A NAL unit types, and the player does not play the video on receiving an FU-A payload; it just displays a blank screen. I understand that non-interleaved mode supports both single NAL unit types (1-23) and FU-A, but how can I force ffmpeg to use only single NAL unit mode? Can someone help me?
I'm assuming you mean H264 over RTP here. FFmpeg's RTP muxer can be forced to use mode 0 with the flag -rtpflags h264_mode0, though if you are seeing FU-A (type 28), chances are some NAL units can't fit in a single RTP packet and mode 0 won't work.
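For example (a sketch; the file name and destination are placeholders, and this assumes the input's H264 can be copied as-is):
ffmpeg -re -i test.avi -an -c:v copy -rtpflags h264_mode0 -f rtp rtp://<ip>:<port>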
I wrote an RTP server to receive the RTP packets sent by the command ffmpeg -i test.mp4 -f rtp rtp://ip:port (the client), and the server receives NAL type 24 (STAP-A).
I want to use the server to retrieve the SPS and PPS from that first NAL (type 24) instead of taking them from the ffmpeg command's output.
Is it possible that the SPS and PPS are aggregated in one NAL?
for example:
[RTP header][STAP-A NAL header (type 24)][NALU 1 size][NALU 1 header][NALU 1 payload][NALU 2 size][NALU 2 header][NALU 2 payload]...
thanks
It's highly likely that the STAP-A consists of the SPS and PPS: these NAL units usually sit at the beginning of the stream, are small, and can be aggregated into a STAP-A. If the IDR is small enough, it might also be part of the STAP, but usually it is too big and will be sent separately.
The best way to verify this is to split the STAP-A into the original NAL units (see RFC 6184) and check for types 7 (SPS) and 8 (PPS).
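A small sketch of that check (assuming payload/len point at the RTP payload, i.e. everything after the RTP header):

/* Walk a STAP-A and report each aggregated NAL unit's type (7 = SPS, 8 = PPS). */
#include <stdint.h>
#include <stdio.h>

void walk_stap_a(const uint8_t *payload, int len)
{
    if (len < 1 || (payload[0] & 0x1F) != 24) /* STAP-A NAL header is type 24 */
        return;
    const uint8_t *p = payload + 1;
    const uint8_t *end = payload + len;
    while (p + 2 <= end) {
        int nalu_size = (p[0] << 8) | p[1]; /* 16-bit size, network byte order */
        p += 2;
        if (nalu_size < 1 || nalu_size > end - p)
            break;
        int nal_type = p[0] & 0x1F; /* low 5 bits of the NAL unit header */
        printf("NAL unit: type %d, %d bytes%s\n", nal_type, nalu_size,
               nal_type == 7 ? " (SPS)" : nal_type == 8 ? " (PPS)" : "");
        p += nalu_size;
    }
}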
I am trying to stream data encoded with FFmpeg using Live555. I have a custom framesource that sends the data to the sink, but I am unable to figure out how to set the SPS and PPS in the framer. I understand that extradata contains this information, but I saw only the SPS in it. Does extradata change while FFmpeg is encoding? If yes, how and when do we need to update this information in the Live555 framer?
Does anyone have a working sample using FFmpeg and Live555 to stream H264?
Live555 is simply a streaming tool; it does not do any encoding.
The SPS and PPS are NAL units within the encoded H264 stream (or the output from your FFmpeg implementation); see some info here: http://www.cardinalpeak.com/blog/the-h-264-sequence-parameter-set/.
If you want to change the SPS or PPS information, you need to do it in FFmpeg.
Examples of FFmpeg and Live555 working together to stream MPEG2 and H264 streams are here:
https://github.com/alm865/FFMPEG-Live555-H264-H265-Streamer/
As for streaming an H264 stream, you need to break the output from FFmpeg into NAL units before you send it off to the discrete framer for it to work correctly. You must also strip the start code from each NAL unit (i.e. remove the 0x00 0x00 0x00 0x01 prefix).
Live555 will automatically read these in and update as necessary.
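A rough sketch of that splitting step (a hypothetical helper, not part of FFmpeg or Live555; deliver() stands in for handing a bare NAL unit to your discrete framer):

#include <stdint.h>
#include <stddef.h>

typedef void (*nal_cb)(const uint8_t *nal, size_t len, void *opaque);

/* Find the next 00 00 01 or 00 00 00 01 start code, or return `end`. */
static const uint8_t *next_start_code(const uint8_t *p, const uint8_t *end)
{
    for (; p + 3 <= end; p++)
        if (p[0] == 0 && p[1] == 0 &&
            (p[2] == 1 || (p[2] == 0 && p + 4 <= end && p[3] == 1)))
            return p;
    return end;
}

/* Split an Annex-B buffer into bare NAL units, start codes stripped. */
void split_annexb(const uint8_t *buf, size_t len, nal_cb deliver, void *opaque)
{
    const uint8_t *end = buf + len;
    const uint8_t *sc = next_start_code(buf, end);
    while (sc < end) {
        const uint8_t *nal = sc + (sc[2] == 1 ? 3 : 4); /* skip 3- or 4-byte code */
        const uint8_t *next = next_start_code(nal, end);
        deliver(nal, (size_t)(next - nal), opaque);
        sc = next;
    }
}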
I am getting the following errors when decoding H.264 frames received from the remote end of an H.264-based SIP video call. I'd appreciate any help in understanding these errors.
non-existing PPS 0 referenced
decode_slice_header error
non-existing PPS 0 referenced
decode_slice_header error
no frame!
non-existing PPS 0 referenced
decode_slice_header error
non-existing PPS 0 referenced
decode_slice_header error
no frame!
That just means that ffmpeg has not yet seen a keyframe, which carries the SPS and PPS information. SPS and PPS are crucial for decoding an incoming frame/slice. Keyframes are sent periodically (e.g. every 5-10 seconds or more), so if you joined the stream before a keyframe arrived, you will see this warning for every frame until a keyframe shows up.
As soon as the keyframe shows up from the wire, ffmpeg will have enough information to decode that frame (and any subsequent frames until the next keyframe), so those warnings will go away.
You need to add the SPS and PPS information to the frames; ffmpeg needs it to decode. You can find these values in the SDP file.
In the SDP file, look for the NAL unit parameters (the sprop-parameter-sets attribute); you will see something like this:
z0IAHukCwS1xIADbugAzf5GdyGQl, aM4xUg
These base64-encoded values should be converted to binary. I am using Wireshark, which converts them automatically for you. After that you have the SPS and PPS values.
Now you have to add these NAL blocks before the data frame:
00 00 00 01 sps 00 00 00 01 pps 00 00 00 01 data
For H264, this block layout is what I have been using to decode.
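As a sketch, building that prefix from the SDP values with FFmpeg's own base64 helper could look like this (build_annexb_prefix is a hypothetical name; the buffer sizes are arbitrary):

#include <libavutil/base64.h>
#include <stdint.h>
#include <string.h>

/* Decode the two sprop-parameter-sets strings and emit
 * 00 00 00 01 <SPS> 00 00 00 01 <PPS>; returns the prefix length, 0 on error. */
size_t build_annexb_prefix(const char *sps_b64, const char *pps_b64,
                           uint8_t *out, size_t out_size)
{
    static const uint8_t start_code[4] = { 0, 0, 0, 1 };
    uint8_t sps[128], pps[128];
    int sps_len = av_base64_decode(sps, sps_b64, sizeof(sps));
    int pps_len = av_base64_decode(pps, pps_b64, sizeof(pps));
    if (sps_len <= 0 || pps_len <= 0)
        return 0;
    size_t need = 4 + sps_len + 4 + pps_len;
    if (need > out_size)
        return 0;
    uint8_t *p = out;
    memcpy(p, start_code, 4); p += 4;
    memcpy(p, sps, sps_len); p += sps_len;
    memcpy(p, start_code, 4); p += 4;
    memcpy(p, pps, pps_len);
    return need;
}

With the example values above, you would call build_annexb_prefix("z0IAHukCwS1xIADbugAzf5GdyGQl", "aM4xUg", buf, sizeof(buf)) and then write the result, another 00 00 00 01 start code, and the frame data.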
To decode a frame or a slice, the slice header is decoded first; it references a PPS ("Picture Parameter Set"), which in turn references an SPS carrying the specifics of the frame, like width and height.
I guess your data is coming through a streaming input channel, in which case SPS and PPS would have been sent earlier in the stream.
You may have to prepend them to your stream.