Use ffmpeg to stream rawvideo from a USB camera

I have an image sensor that streams 640x480 in RAW8 format. A USB controller receives this data, packs two 8-bit pixels together, and sends them over USB as a 16-bit-per-pixel YUV422 format (this is because UVC currently does not support a RAW8 format).
I was checking whether I can use ffmpeg to receive the UVC stream and decode it as RAW8 video.
Has anyone tried this before?
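One approach worth trying (a sketch, not a tested recipe: the device path /dev/video0, the 30 fps rate, and the assumption that the controller advertises the packed 640x480 RAW8 data as 320x480 YUYV are all guesses) is to grab the UVC stream without any pixel-format conversion and then reinterpret the same bytes as 8-bit grayscale:
# Step 1: dump the packed stream byte-for-byte, with no pixel-format conversion
ffmpeg -f v4l2 -input_format yuyv422 -video_size 320x480 -i /dev/video0 -c:v copy -f rawvideo capture.raw
# Step 2: reinterpret the identical bytes as 640x480 8-bit grayscale (RAW8)
ffmpeg -f rawvideo -pixel_format gray -video_size 640x480 -framerate 30 -i capture.raw -c:v ffv1 raw8.mkv
Since two RAW8 pixels travel in each 16-bit YUYV sample, the byte count per frame is identical; only the interpretation of the bytes changes.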

Related

get the bit stream per frame from ffmpeg record

Is there any way to get the per-frame bitstream captured by ffmpeg? I want to capture from the Raspberry Pi camera in real time and extract each H.264-encoded frame (I-frames and P-frames) from ffmpeg's output bitstream, because I want to stream frame by frame from the camera node to the server. Thank you in advance.
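One common way to approach this (a sketch, not an answer from this thread: frame_splitter is a hypothetical downstream program, and the camera is assumed to be exposed through V4L2) is to have ffmpeg write a raw Annex B H.264 bitstream to stdout, where frames can be split on NAL start codes:
# Write raw Annex B H.264 to stdout; a downstream process can split frames
# on the 00 00 00 01 start codes and read the NAL type to tell I from P
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -tune zerolatency -f h264 - | ./frame_splitter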

Why is the live video stream not fluent while the audio stream is normal when played by a Flash RTMP player after encoding

My video stream is encoded with H.264 and my audio stream with AAC. I get these streams by reading an FLV file. I decode only the video stream in order to get all video frames, then I process them with ffmpeg before re-encoding, such as changing some pixels. Finally I push the video and audio streams to Crtmpserver. When I pull the live stream from this server, I find the video is not fluent but the audio is normal. However, when I change gop_size from 12 to 3, everything is OK. What causes that problem? Can anyone explain it to me?
Either the CPU or the bandwidth is not sufficient for your usage. RTMP always processes audio before video, so if ffmpeg or the network cannot keep up with the live stream, video frames will be dropped. Because audio is so much smaller and cheaper to encode, even a very slow CPU or a congested network will usually have no problem keeping up with it.
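For reference, the GOP length maps to ffmpeg's -g option; a minimal sketch of the change the asker describes (encoder choice, file name, and server URL are placeholders):
# Re-encode with a 3-frame GOP and push to an RTMP server
ffmpeg -re -i input.flv -c:v libx264 -g 3 -c:a copy -f flv rtmp://server/live/stream
A smaller GOP means more frequent keyframes, so a player can resynchronize sooner after dropped frames, at the cost of a somewhat higher bitrate for the same quality.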

How to compress output file using FFmpeg - Apple ProRes 422

I am new to video encoding and am trying to encode a music video for the Apple iTunes video store.
I am currently using FFmpeg for encoding.
My source file is an MP4 with a file size of 650 MB.
I encode the file with the Apple ProRes 422 (HQ) codec and output a MOV file:
ffmpeg -y -i busy1.mp4 -vcodec prores -profile:v 3 -r "29.97" -c:a mp2 busy2.mov
I am trying to encode the video according to the following specs:
● Apple ProRes 422 (HQ)
● VBR expected at ~220 Mbps
Encoded       PASP   Converted to ProRes From
1920 x 1080   1:1    HDCAM SR, D5, ATSC
1280 x 720    1:1    ATSC progressive
29.97 interlaced frames per second for video sourced
Music Video Audio Source Profile
● MPEG-2 layer II stereo
● 384 kbps
● 48 kHz
The file encodes perfectly fine; however, the output is 6 GB in size.
Why would the file be so large after encoding?
Am I doing something wrong here?
Apple ProRes is not intended for high compression. It is an intermediate codec used in post-production: it reduces storage compared with keeping the video uncompressed, while retaining high image quality.
You are supposed to use your uncompressed source file as input to retain the maximum quality, not an already lossy-compressed video.
You only mentioned the container format of your input file (MP4), but not the codecs, which is the actually important information.
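To see which codecs are actually inside the container, ffprobe (bundled with FFmpeg) can list them; a minimal sketch using the asker's file name:
# List each stream's type and codec in the source file
ffprobe -v error -show_entries stream=codec_type,codec_name busy1.mp4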
Since the HQ flavor of ProRes runs at about 220 Mbps, the file size can easily increase: at that rate, a typical four-minute music video alone comes to roughly (220/8) MB/s × 240 s ≈ 6.6 GB, which is in line with your 6 GB output. You gain nothing in quality, however, if the source is lossy.
See more here: Apple ProRes
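For completeness, here is a variant of the asker's command that also pins the audio to the stated spec (stereo MPEG layer II at 384 kbps and 48 kHz); a sketch reusing the asker's file names, and one that does not change the size behavior discussed above:
ffmpeg -y -i busy1.mp4 -c:v prores -profile:v 3 -r 29.97 -c:a mp2 -b:a 384k -ar 48000 -ac 2 busy2.mov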
Though you don't gain much by decompressing a source clip that's lossy, you do gain in some ways. Compressed video uses a compressed color palette, which can be detrimental when making color corrections or adjustments to detail level, especially when you're given interlaced footage to clean up. If you put in the time on detail, microcontrast, and color, you know the benefit of expanded color detail when compressing back down. It also encodes much faster on the back end of your edits: simply compressing the data down is faster than expanding and then compressing.
However, if you recompress all your video down to the same size and codec as what went in, most encoders and editor apps now test the datarate of each GOP, working on only those GOPs that need to be redone to fit the new settings.

Windows 8 mjpeg video decoding capabilities

The Windows 7 built-in MJPEG decoder seems to have a resolution limitation: it cannot decode MJPEG with a resolution larger than 2592x1944 pixels. So I want to know the capability of the Windows 8 decoder: can it decode MJPEG video with a resolution larger than 2592x1944 pixels?
After trying it on Windows 8, I found that the built-in decoder can handle resolutions above 2592x1944 pixels.

MJPEG streaming over RTSP

I am capturing JPEG images from an IP camera over RTSP. I use live555 + libavcodec for streaming and decoding the MJPEG images. The stream works fine up to an image resolution of 2048x1920, but when I increase the image width above 2048, I get a bar-shaped rectangular image of very small width (e.g., 544x1920). The image is correctly captured and saved on the camera; the problem occurs only when I stream it over RTSP to the PC. Is there any payload restriction in RTP for high-resolution MJPEG?
Please read RFC 2435 (https://www.rfc-editor.org/rfc/rfc2435), bottom of page 4: the RTP/JPEG header encodes the image width in an 8-bit field in units of 8 pixels, so the maximum width is 2040. That would also explain the 544-pixel bar: a width of, say, 2592 is signalled as 2592 / 8 = 324, which presumably wraps in the 8-bit field to 324 - 256 = 68, i.e. 68 × 8 = 544 pixels. A workaround is made possible with the ONVIF standard.
More likely, either the decoder decodes the image incorrectly or the RTP client reconstructs it incorrectly. 2048 pixels is obviously not a limit for JPEG itself, and the RTP client does little parsing of the bitstream, so the resolution should not matter much there (note also that the pipeline does produce a decoded image rather than failing completely along the way).
