MJPEG streaming over RTSP - ffmpeg

I am capturing JPEG images from an IP camera over RTSP, using live555 plus libavcodec to stream and decode the MJPEG frames. The stream works fine up to a resolution of 2048x1920, but as soon as I increase the image width beyond 2048, I get a narrow, bar-shaped image (e.g., 544x1920). The image is captured and saved correctly on the camera itself; the problem appears only when streaming over RTSP to the PC. Is there a payload restriction in RTP for high-resolution MJPEG?

See https://www.rfc-editor.org/rfc/rfc2435, bottom of page 4: the RTP/JPEG main header carries the frame width and height as 8-bit fields in units of 8 pixels, so the maximum width it can express is 255 x 8 = 2040. A workaround is made possible with the ONVIF standard, which extends the header for larger resolutions.
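The 2040 limit falls straight out of the header layout. Below is a minimal sketch (the struct and function names are illustrative, not from any library) of parsing the 8-byte RFC 2435 main JPEG header that follows the fixed RTP header:

    #include <stdint.h>

    /* RFC 2435, section 3.1: the main JPEG header carries width and
       height as 8-bit fields in units of 8 pixels. */
    struct jpeg_main_header {
        uint32_t fragment_offset;  /* 24-bit offset of this fragment */
        uint8_t  type;             /* sampling: 0 = 4:2:2, 1 = 4:2:0 */
        uint8_t  q;                /* quantization table selector    */
        uint16_t width;            /* in pixels, after decoding      */
        uint16_t height;
    };

    static void parse_jpeg_main_header(const uint8_t *p, struct jpeg_main_header *h)
    {
        h->fragment_offset = (p[1] << 16) | (p[2] << 8) | p[3]; /* p[0] is type-specific */
        h->type   = p[4];
        h->q      = p[5];
        h->width  = (uint16_t)p[6] * 8;  /* 8-bit field: max 255 * 8 = 2040 */
        h->height = (uint16_t)p[7] * 8;  /* the same ceiling applies here   */
    }

A frame wider than 2040 pixels simply cannot be described by this header, which is consistent with the corrupted width you observe.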

More likely, either the decoder is decoding the image incorrectly, or the RTP client is reconstructing it incorrectly. 2048 pixels is obviously not a limit for JPEG itself, and an RTP client does little bitstream parsing, so resolution should not matter much there. Note also that the pipeline does reach a decoded image rather than failing outright along the way.

Related

Decoding HEVC with FFmpeg: multiple-slice I-frame issue

I am having trouble decoding a camera's HEVC stream delivered over RTSP (live555). I decode frames with ffmpeg (avcodec_send_packet / avcodec_receive_frame), but only the top third of the picture decodes; the other two thirds remain a green rectangle. Each frame arrives divided into three parts (slices):
an I-frame arrives as I-B-I
a P-frame arrives as P-P-P
I assume FFmpeg can handle this, since it conforms to the HEVC specification.
Do I have to concatenate the three slices before sending them to ffmpeg? I have tried sending all the slices before calling avcodec_receive_frame, but that does not work.
FFmpeg's HEVC decoder, like its H.264 decoder, needs full frames as input; you cannot send it individual slices. You can concatenate them yourself, or use a bitstream filter / parser which will do it for you. In this case, manual concatenation will probably work fine.
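If you would rather not concatenate by hand, FFmpeg's parser can assemble complete access units for you. A minimal sketch, assuming each slice already carries an Annex B start code (live555 often strips these, so you may need to prepend 00 00 00 01 first); feed_slice is an illustrative name, not part of any API:

    #include <libavcodec/avcodec.h>

    /* Accumulates slice-sized chunks and emits full frames to the decoder. */
    static AVCodecParserContext *parser; /* from av_parser_init(AV_CODEC_ID_HEVC) */

    static void feed_slice(AVCodecContext *dec, AVPacket *pkt,
                           const uint8_t *data, int size)
    {
        while (size > 0) {
            uint8_t *out = NULL;
            int out_size = 0;
            int used = av_parser_parse2(parser, dec, &out, &out_size,
                                        data, size,
                                        AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
            data += used;
            size -= used;
            if (out_size > 0) {            /* a complete frame is ready */
                pkt->data = out;
                pkt->size = out_size;
                avcodec_send_packet(dec, pkt);
                /* drain with avcodec_receive_frame() here */
            }
        }
    }

Note that the parser only knows a frame is complete once it sees the start of the next one, so this approach adds a frame of latency; manual concatenation of the three slices avoids that.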

Use ffmpeg to stream rawvideo from a USB camera

I have an image sensor that streams 640x480 in RAW8 format. A USB controller receives this data, packs two 8-bit pixels together, and sends them to USB as 16-bit-per-pixel YUV422 (this is because UVC currently does not support a RAW8 format).
I am checking whether I can use ffmpeg to receive the UVC stream and decode it as RAW8 video.
Has anyone tried this before?
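If the controller packs the sensor bytes straight through, there is nothing to decode on the receiving side: a 640x480 RAW8 frame arrives as a 320x480 16-bpp YUV422 frame whose payload is already the sensor data, and "decoding" is just a reinterpretation of the buffer as 8-bit gray. A minimal sketch under that assumption (function names are illustrative):

    #include <stdint.h>
    #include <string.h>

    enum { RAW_W = 640, RAW_H = 480 };

    /* Case 1: both bytes of each 16-bit sample are sensor pixels;
       the buffer can be copied (or used) as-is as 8-bit gray. */
    static void passthrough_to_raw8(const uint8_t *uvc, uint8_t *raw8)
    {
        memcpy(raw8, uvc, RAW_W * RAW_H);
    }

    /* Case 2: the controller instead puts one sensor pixel per luma
       position of YUYV (Y0 U Y1 V) with dummy chroma; take every
       other byte. */
    static void luma_to_raw8(const uint8_t *yuyv, uint8_t *raw8, int n_pixels)
    {
        for (int i = 0; i < n_pixels; i++)
            raw8[i] = yuyv[2 * i];
    }

Which case applies depends entirely on how your USB controller packs the data, so compare a captured frame's bytes against known sensor output first.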

FFmpeg H.264 stream decoding always lags one frame

I use the x264 library to compress video (from a camera) and transmit it to the client side over TCP. On the client side (Win32) I use the ffmpeg library to decode the stream, but decoding always lags by one frame. That is, if the client receives frames A, B, C: decoding A yields no image, and only when decoding B do I get the image for A.
For the H.264 encode I set zerolatency / ultrafast / baseline, so there should be no B-frames.
On the ffmpeg decoder I tried setting thread_type = 0 to disable frame-buffered (threaded) decoding, but it had no effect. Passing NULL to the decoder after each frame does flush out the pending image, but I don't think that is a good solution.
How do I configure the ffmpeg library to avoid the one-frame latency?
If you are using av_parser_parse2, there is a good chance that is where your one frame of latency comes from: the parser cannot know a frame is complete until it sees the start of the next one, so it holds each frame back until the following one arrives. If you post your code from the decode side, people will be better able to help.
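Here is a sketch of the decoder-side settings that commonly remove the delay, assuming each TCP message carries exactly one complete encoded frame (the helper name is illustrative):

    #include <libavcodec/avcodec.h>

    static void setup_low_delay(AVCodecContext *dec, AVCodecParserContext *parser)
    {
        /* Ask the decoder for low-delay output (no reordering delay). */
        dec->flags |= AV_CODEC_FLAG_LOW_DELAY;

        /* Frame threading buffers whole frames; keep decoding
           single-threaded for minimum latency. */
        dec->thread_count = 1;

        /* If each input packet is already exactly one encoded frame,
           tell the parser so it stops buffering to find frame ends. */
        if (parser)
            parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;
    }

With PARSER_FLAG_COMPLETE_FRAMES the parser passes packets through unmodified, so this only helps if your TCP framing preserves frame boundaries; otherwise drop the parser and frame the packets yourself.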

Why is the live video stream choppy while the audio stream plays normally in a Flash RTMP player?

My video stream is encoded with H.264 and my audio stream with AAC. I obtain both streams by reading an FLV file. I decode only the video stream so I can modify the frames with ffmpeg before re-encoding them (for example, changing some pixels), and then I push the video and audio streams to Crtmpserver. When I pull the live stream from that server, the video is choppy but the audio is normal. However, when I change gop_size from 12 to 3, everything is fine. What causes this problem? Can anyone explain it to me?
Either the CPU or the bandwidth is insufficient for your usage. RTMP always processes audio before video, so if ffmpeg or the network cannot keep up with the live stream, video frames get dropped. Because audio is much smaller and cheaper to encode, even a very slow CPU or a congested network can usually keep up with it.
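For context, the gop_size change the question describes looks like this on a libavcodec encoder context; a shorter GOP means keyframes arrive more often, so a player can resynchronize sooner after dropped video frames (the helper name and the B-frame setting are assumptions, not the asker's code):

    #include <libavcodec/avcodec.h>

    static void set_short_gop(AVCodecContext *enc)
    {
        enc->gop_size     = 3;  /* keyframe interval; the question changed 12 -> 3 */
        enc->max_b_frames = 0;  /* assumption: avoid B-frame reordering delay      */
    }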

Can the frame size be cropped during decoding using libavcodec?

I've followed Dranger's tutorial (http://dranger.com/ffmpeg/) for displaying video using libav and FFmpeg.
avcodec_decode_video2 seems to be the slowest part of the decoding process. I occasionally decode two videos simultaneously but display only half of each, side by side; in other words, half of each video is off-screen. To speed up decoding, is there a way to decode only a portion of each frame?
No.
Codecs using interframe prediction need whole reference frames, so there's no way this could possibly work.
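What is cheap, however, is cropping after decoding: for planar formats it is pure pointer arithmetic on the decoded planes, with no pixel copying. A minimal sketch for yuv420p, assuming x and y are even so the chroma planes stay aligned (the dst frame only borrows src's buffers, so it must not outlive src):

    #include <libavutil/frame.h>

    static void crop_yuv420p(const AVFrame *src, AVFrame *dst,
                             int x, int y, int w, int h)
    {
        dst->format = src->format;
        dst->width  = w;
        dst->height = h;
        dst->data[0] = src->data[0] + y * src->linesize[0] + x;           /* Y */
        dst->data[1] = src->data[1] + (y / 2) * src->linesize[1] + x / 2; /* U */
        dst->data[2] = src->data[2] + (y / 2) * src->linesize[2] + x / 2; /* V */
        dst->linesize[0] = src->linesize[0];
        dst->linesize[1] = src->linesize[1];
        dst->linesize[2] = src->linesize[2];
    }

So if the goal is only to display half of each video, decode the full frame and crop (or let the renderer scissor it): the decode cost is unavoidable, but the crop costs essentially nothing.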
