I am getting a list of small rectangle images that contain the parts of the image that have changed since the previous image. These come from desktop capture with DirectX 11, which reports which parts of the desktop image have changed and the rectangles that cover them.
I am trying to figure out whether I can pass them to FFmpeg's libavcodec H.264 encoder. I looked into AVFrame and didn't see a way to specify which parts have changed from the previous image.
Is there a way, when passing an image to the FFmpeg codec context for encoding into the video, to pass only the parts that have changed since the previous frame? Doing this might reduce CPU usage, which matters because this is for a live stream.
I use the standard avcodec_send_frame() to send a frame to the codec for encoding; it only takes an AVFrame and a codec context as parameters.
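As far as I know, avcodec_send_frame() has no way to receive dirty rectangles: every AVFrame must describe the complete picture, and the encoder's own motion estimation is what turns unchanged regions into cheap skip blocks. A common workaround is to keep one persistent full-size frame buffer and only memcpy the changed rectangles from each capture into it before sending it. A minimal sketch of that copy step, assuming BGRA pixels and identical strides (the `DirtyRect` struct and `apply_dirty_rect` name are my own, not FFmpeg API):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical dirty rectangle, mirroring the RECTs DXGI reports. */
typedef struct {
    int left, top, right, bottom; /* right/bottom are exclusive */
} DirtyRect;

/* Copy only the changed rectangle from the new capture into the
 * persistent frame buffer that is sent to the encoder every frame.
 * Both buffers are assumed to be BGRA (4 bytes per pixel) with the
 * same stride in bytes. */
static void apply_dirty_rect(uint8_t *persistent, const uint8_t *capture,
                             int stride, DirtyRect r)
{
    int width_bytes = (r.right - r.left) * 4;
    for (int y = r.top; y < r.bottom; y++) {
        size_t off = (size_t)y * stride + (size_t)r.left * 4;
        memcpy(persistent + off, capture + off, width_bytes);
    }
}
```

After applying all rectangles for a capture, you would wrap or convert the persistent buffer into an AVFrame and call avcodec_send_frame() as usual; this at least avoids re-copying the whole desktop image each frame, even though the encoder still sees a full picture.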
Have adapted FFmpeg sample muxing.c under Windows 7 to write MP4 files from video camera RGB data.
Using muxing.c default bit_rate=400000.
Am not setting global_quality.
Resultant MP4 is poor quality, highly pixelated.
Original raw images in video camera are sharp.
What values should I use for bit_rate? Do I have to also set rc_max_rate?
What values should I use for global_quality? Do I have to set any flags to enable use of global_quality?
Is bit_rate versus global_quality an either/or situation? Or can both be useful in adjusting quality?
Thanks for your time.
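For what it's worth, the two rate-control knobs the question asks about are set on the AVCodecContext. A sketch of both options, assuming the `c` context variable from muxing.c; the numbers are illustrative starting points, not recommendations:

```c
/* Option 1: average bit rate. muxing.c's default of 400000 bps is far
 * too low for sharp full-resolution camera video; try something in the
 * megabit range and adjust. */
c->bit_rate = 4000000;              /* ~4 Mbit/s average */
/* rc_max_rate is optional: it caps peaks (VBV) and needs a buffer size. */
c->rc_max_rate    = 6000000;
c->rc_buffer_size = 8000000;

/* Option 2: fixed quality via global_quality. It is only honored when
 * the QSCALE flag is set, and is expressed in lambda units. */
c->flags |= AV_CODEC_FLAG_QSCALE;
c->global_quality = FF_QP2LAMBDA * 5;   /* lower value = higher quality */
```

These are alternative rate-control modes rather than complementary ones: with AV_CODEC_FLAG_QSCALE set the encoder targets a constant quality and bit_rate is effectively ignored, so pick one. If the encoder is libx264, setting its private "crf" option via av_opt_set(c->priv_data, ...) is another common quality-based approach.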
I am developing an H.265 DirectShow decoder that will handle a live stream. I am facing an issue while rendering the live stream: it only shows one frame in the rendered active window, even though the function I am using to fill the output buffer is continuously filling it.
For testing purposes I stored the output buffer in a file and then rendered it using a YUV player, and that file doesn't have the issue. This means the buffer is receiving the frames, so why does the renderer show only one? Where could the issue be?
Thanks.
I have some images that were taken from a video via screen capture. I would like to know when in the video these images appear (timestamps). Is there a way to programmatically match an image with a specific frame in a video using ffmpeg or some other tool?
I am very open to different technologies as I'm eager to automate this. It would be extremely time consuming to do this manually.
You can compute the PSNR between that image and each frame in the video; the match is the frame with the highest PSNR. FFmpeg has a tool to calculate PSNR in tests/tiny_psnr, which you can use to script this together, or there's also a psnr filter in the libavfilter module in FFmpeg if you prefer to code rather than script.
Scripting, you'd basically decode the video to a FIFO, decode the image to a file, and then repeatedly match the FIFO frames against the image file using tiny_psnr, selecting the frame number of the frame with the highest PSNR. The output will be a frame number, which (using the fps output on the command line) you can convert approximately to a timestamp.
Programming-wise, you'd decode the video and the image to AVFrames, use the psnr filter to compare the two, look at the output frame metadata to record the PSNR value in your program, and search for the frame with the highest PSNR metadata value; for that frame, AVFrame->pkt_pts would be the timestamp.
I am currently using the FFmpeg library to extract images from a 1080i YUV 4:2:2 raw file. Because the data is interlaced, some lines are dropped when I extract an image from one frame of the video. Is it possible to merge two or three frames to make a single high-definition image? Please guide me on how to move forward.
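If the two fields of an interlaced frame are available separately (each at half height), they can be woven back together into one full-height image; a sketch assuming top-field-first, single-plane 8-bit data (for moving content this produces combing, and a real deinterlacer such as FFmpeg's yadif filter is the better tool):

```c
#include <stdint.h>
#include <string.h>

/* Weave two half-height fields (top field first) into one full-height
 * frame. All buffers are single-plane with 'stride' bytes per line;
 * each field contains height/2 lines. */
static void weave_fields(uint8_t *frame, const uint8_t *top,
                         const uint8_t *bottom, int stride, int height)
{
    for (int y = 0; y < height / 2; y++) {
        memcpy(frame + (size_t)(2 * y) * stride,
               top + (size_t)y * stride, stride);
        memcpy(frame + (size_t)(2 * y + 1) * stride,
               bottom + (size_t)y * stride, stride);
    }
}
```

This recovers all 1080 lines from the two 540-line fields of a single interlaced frame, which is usually what "the dropped lines" are.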
Given a sample buffer of H.264, is there a way to extract the frame it represents as an image?
I'm using QTKit to capture video from a camera and using a QTCaptureMovieFileOutput as the output object.
I want something similar to the CVImageBufferRef that is passed as a parameter to the QTCaptureVideoPreviewOutput delegate method. For some reason, the file output doesn't contain the CVImageBufferRef.
What I do get is a QTSampleBuffer which, since I've set it in the compression options, contains an H.264 sample.
I have seen that on the iPhone, CoreMedia and AVFoundation can be used to create a CVImageBufferRef from a given CMSampleBufferRef (which, I imagine, is as close to the QTSampleBuffer as I'll be able to get) - but this is the Mac, not the iPhone.
Neither CoreMedia nor AVFoundation is available on the Mac, and I can't see any other way to accomplish the same task.
What I need is an image (whether it be a CVImageBufferRef, CIImage or NSImage doesn't matter) from the current frame of the H.264 sample that is given to me by the Output object's call back.
Extended info (from the comments below)
I have posted a related question that focuses on the original issue - attempting to simply play a stream of video samples using QTKit: Playing a stream of video data using QTKit on Mac OS X
It appears not to be possible, which is why I've moved on to trying to obtain frames as images and creating the appearance of video by scaling, compressing and converting the image data from CVImageBufferRef to NSImage and sending it to a peer over the network.
I can use the QTCapturePreviewVideoOutput (or the decompressed variant) to get uncompressed frame images in the form of a CVImageBufferRef.
However, these image references need compressing, scaling and converting into NSImages before they're of any use to me - hence the attempt to get an already scaled and compressed frame from the framework using QTCaptureMovieFileOutput (which allows a compression format and image size to be set before starting the capture), saving me from the expensive compression, scale and conversion operations, which kill the CPU.
Does the Creating a Single-Frame Grabbing Application section of the QTKit Application Programming Guide not work for you in this instance?