I use the x264 library to compress video from a camera and transmit it to the client over TCP. On the client side (Win32), I use the ffmpeg library to decode the stream. But I find that decoding always lags by one frame. That is to say, if the client receives three frames A, B, C, decoding frame A returns no image; only when I decode frame B do I get the image for frame A.
For the H.264 encoder I have set zerolatency, ultrafast, and the baseline profile, so I believe there are no B-frames.
For the ffmpeg decoder, I have tried setting thread_type = 0 to disable frame-buffered (threaded) decoding, but it has no effect. Incidentally, passing NULL to the decoder after each frame to flush it does work around the problem, but I don't think that is a good solution.
So how should I configure the ffmpeg library to avoid the one-frame latency?
If you are using av_parser_parse2, there's a good chance that's where your one frame of latency is coming from. If you post your decode-side code, people will be better able to help.
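In the meantime, here is a minimal decode-side sketch under the assumption that you frame the TCP stream yourself and submit one complete access unit per packet, so no parser is involved; the low-delay flag and slice threading are general latency hints, not a confirmed fix for your exact setup.

    /* Sketch: low-delay H.264 decoding with the send/receive API.
     * Assumes each AVPacket already contains one whole frame (all NAL units
     * of a single access unit), so av_parser_parse2 is not needed. */
    #include <libavcodec/avcodec.h>

    static AVCodecContext *open_low_delay_h264(void)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        AVCodecContext *ctx  = avcodec_alloc_context3(codec);
        if (!ctx)
            return NULL;

        ctx->flags       |= AV_CODEC_FLAG_LOW_DELAY; /* ask the decoder not to buffer output */
        ctx->thread_type  = FF_THREAD_SLICE;         /* frame threading adds a frame of delay per thread */

        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }

    /* Feed one complete frame and immediately try to pull the decoded picture. */
    static int decode_one(AVCodecContext *ctx, AVPacket *pkt, AVFrame *frame)
    {
        int ret = avcodec_send_packet(ctx, pkt);
        if (ret < 0)
            return ret;
        return avcodec_receive_frame(ctx, frame);    /* AVERROR(EAGAIN): decoder wants more input */
    }

With zerolatency output from x264 and whole frames submitted like this, the picture should normally come back on the same call rather than one packet later.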
I'm facing an issue decoding a camera stream with the HEVC codec over RTSP transport (live555). I'm trying to decode frames with ffmpeg (avcodec_send_packet/avcodec_receive_frame), but only the first third of the picture is decoded and the other two thirds stay a green rectangle. Each frame arrives divided into three parts (slices):
I-Frame is I-B-I
P-Frame is P-P-P
I assume FFmpeg can handle this, since it conforms to the HEVC specification.
Do I have to "concatenate" the 3 slices before sending them to ffmpeg?
Could you help me please?
I have tried sending all the slices before receiving a frame, but that doesn't work.
FFmpeg's H264 decoder indeed needs full frames as input. You can't send it individual slices. You can concatenate them yourself, or use a bitstream filter / parser which will do it for you. In this case, manual concatenation will probably work fine.
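For the manual route, something along these lines might work; build_frame_packet, slices, and sizes are hypothetical names standing in for the three slice buffers received for one frame, assumed to still carry their Annex-B start codes.

    /* Sketch: concatenate the slice NAL units of one frame into a single
     * AVPacket before handing it to the decoder. `slices`/`sizes` are
     * hypothetical buffers, each slice with its Annex-B start code in front. */
    #include <string.h>
    #include <libavcodec/avcodec.h>

    static AVPacket *build_frame_packet(uint8_t *const *slices, const int *sizes, int nb_slices)
    {
        int total = 0;
        for (int i = 0; i < nb_slices; i++)
            total += sizes[i];

        AVPacket *pkt = av_packet_alloc();
        if (!pkt)
            return NULL;
        if (av_new_packet(pkt, total) < 0) {
            av_packet_free(&pkt);
            return NULL;
        }

        int offset = 0;
        for (int i = 0; i < nb_slices; i++) {   /* append the slices back to back */
            memcpy(pkt->data + offset, slices[i], sizes[i]);
            offset += sizes[i];
        }
        return pkt;                             /* pass to avcodec_send_packet(), then unref */
    }

If your RTSP client delivers raw NAL payloads without start codes, you would also need to prepend 00 00 00 01 to each slice before copying it in.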
I tried to follow this example: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/muxing.c
Problem: my raw H.264 stream cannot be demuxed properly, so the packets I end up sending to the muxer have blank fields, for example pkt.pts == AV_NOPTS_VALUE, and this causes an error when calling av_interleaved_write_frame (the mux function).
Given that the frame rate is not constant, how do I generate pkt.pts correctly for the video frames as I receive them from the raw live stream?
Is there any way for ffmpeg's libav to calculate the pkt.pts and pkt.dts timestamps automatically as I send frames to the muxer with av_interleaved_write_frame?
Quite an old question, but it's still worth answering, since FFmpeg doesn't make it easy.
Consecutive frames' PTS and DTS (in the generic case they will be the same) should equal previousPacketPTS + currentPacket.duration. currentPacket.duration is just what it sounds like: how long the given frame is displayed before switching to the next one. Remember that this duration is expressed in the stream's time base, which is a rational fraction of a second (for example, a 1/50 time base means the shortest frame of that stream lasts 1/50 s, or 20 ms). So you can translate the time difference between two video frames into a frame duration, i.e. when you receive a video frame, its duration is the time it takes for the next frame to arrive, again in the stream's time base. And that's all you need to calculate PTS and DTS for the frames.
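As a concrete illustration, here is a sketch of that rule using libav's rescaling helper; interval_us is an assumption standing for the measured time between the current frame and the previous one.

    /* Sketch: stamp a packet following pts = previous_pts + duration, where the
     * duration is the measured gap between frames converted from microseconds
     * into the stream's time base. interval_us is assumed to come from your own
     * capture timestamps (e.g. av_gettime() deltas). */
    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>

    static void stamp_packet(AVPacket *pkt, int64_t prev_pts,
                             int64_t interval_us, AVRational stream_tb)
    {
        AVRational us_tb = { 1, 1000000 };
        int64_t duration = av_rescale_q(interval_us, us_tb, stream_tb);

        pkt->duration = duration;
        pkt->pts      = prev_pts + duration;
        pkt->dts      = pkt->pts;            /* no B-frames, so dts == pts */
    }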
My video stream is encoded with H.264 and my audio stream with AAC. In fact, I get these streams by reading an FLV file. I decode only the video stream in order to get all the video frames, then do something to them with ffmpeg before re-encoding, such as changing some pixels. Finally I push the video and audio streams to Crtmpserver. When I pull the live stream from that server, I find the video is not fluent but the audio is normal. However, when I change gop_size from 12 to 3, everything is OK. What causes that problem? Can anyone explain it to me?
Either the CPU or the bandwidth is not sufficient for your usage. RTMP always processes audio before video, so if ffmpeg or the network cannot keep up with the live stream, video frames will be dropped. Because audio is much smaller and cheaper to encode, even a very slow CPU or a congested network will usually have no problem keeping up with it.
I've followed Dranger's tutorial for displaying video using libav and FFMPEG. http://dranger.com/ffmpeg/
avcodec_decode_video2 seems to be the slowest part of the video decoding process. I occasionally decode two videos simultaneously but display only half of each video, side by side; in other words, half of each video is off-screen. To speed up decoding, is there a way to decode only a portion of a frame?
No.
Codecs using interframe prediction need whole reference frames, so there's no way this could possibly work.
I have a bunch of .png frames and a .mp3 audio file which I would like to convert into a video. Unfortunately, the frames do not correspond to a constant frame rate. For instance, one frame may need to be displayed for 1 second, whereas another may need to be displayed for 3 seconds.
Is there any open-source software (something like ffmpeg) which would help me accomplish this? Any feedback would be greatly appreciated.
Many thanks!
This is not an elegant solution, but it will do the trick: duplicate frames as necessary so that you end up with some fairly high constant frame rate, say 30 or 60 fps (or higher if you need finer time resolution), and switch to the next source frame at the output frame closest to the exact timestamp you want. Frames that are exact duplicates will be encoded to a tiny size (a few bytes) by any decent codec, so this stays fairly compact. Then just encode with ffmpeg as usual.
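A small helper to compute how many copies of each source frame to emit could look like the sketch below; durations_sec and out_fps are assumptions standing for your per-frame display times and the chosen constant output rate.

    /* Sketch: for a constant output rate, decide how many times to repeat each
     * source frame, switching frames at the output tick nearest the desired
     * timestamp. durations_sec[i] is the hypothetical display time of frame i. */
    #include <math.h>

    static void repeat_counts(const double *durations_sec, int nb_frames,
                              double out_fps, int *repeats)
    {
        double t = 0.0;       /* cumulative desired timestamp in seconds */
        long emitted = 0;     /* output frames produced so far */

        for (int i = 0; i < nb_frames; i++) {
            t += durations_sec[i];
            long boundary = lround(t * out_fps);    /* output index where the next source frame starts */
            repeats[i] = (int)(boundary - emitted); /* copies of frame i to write (0 drops very short frames) */
            emitted = boundary;
        }
    }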
If you have a whole lot of these and need to do it the "right" way: you can indicate the timing either in the container (mp4, mkv, etc.) or in the codec. For example, in an H.264 stream you would have to insert SEI messages of type pic_timing to specify the timing of each frame. Alternatively, you can write your own muxer, relying on a container library such as Matroska (mkv) or GPAC (mp4), to indicate the timing in the container. Note that not all codecs/containers support an arbitrarily variable frame rate, and only a few codecs support timing in the codec. Also, if timing is specified in both the container and the codec, the container timing is used (but when muxing a stream into a container, the muxer should pick up the individual frame timestamps from the codec).
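If you go the container route with libav, the gist is to stamp each packet's pts and duration in the stream's time base before writing it; the sketch below assumes you already have encoded packets and a per-frame display time in milliseconds (dur_ms is a hypothetical value).

    /* Sketch: write variable-frame-rate video by giving each packet its own
     * pts/duration in the stream time base. dur_ms (the frame's display time
     * in milliseconds) and the already-encoded packet are assumptions. */
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    static int write_vfr_packet(AVFormatContext *oc, AVStream *st, AVPacket *pkt,
                                int64_t *next_pts, int64_t dur_ms)
    {
        AVRational ms_tb = { 1, 1000 };
        int64_t duration = av_rescale_q(dur_ms, ms_tb, st->time_base);

        pkt->stream_index = st->index;
        pkt->pts      = *next_pts;
        pkt->dts      = *next_pts;          /* no B-frames assumed */
        pkt->duration = duration;
        *next_pts    += duration;

        return av_interleaved_write_frame(oc, pkt);  /* the container records these timestamps */
    }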