DirectShow YUV rendering shows only one frame

I am developing an H.265 DirectShow decoder that handles a live stream. I am facing an issue while rendering: the active render window only ever shows one frame. On the other hand, the function I use to fill the output buffer is filling it continuously.
For testing purposes I wrote the output buffers to a file and played that file back in a YUV player, and it does not have this issue. So the buffer is receiving the frames; why does the renderer show only one? Where could the issue be?
Thanks.
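One frequent culprit in this situation (an educated guess, since the filter code isn't shown) is the timestamping of the output samples: if every decoded sample carries the same time, a time in the past, or no monotonically increasing times, the video renderer presents the first frame and then waits or drops the rest. A minimal sketch, assuming a CTransformFilter-derived decoder (CH265Decoder and m_frameNumber are hypothetical names):

```cpp
#include <streams.h>  // DirectShow base classes (CTransformFilter, IMediaSample)

// Assumes: class CH265Decoder : public CTransformFilter, with a
// LONGLONG m_frameNumber member initialized to 0 and a 25 fps stream.
HRESULT CH265Decoder::Transform(IMediaSample *pIn, IMediaSample *pOut)
{
    // ... decode pIn into pOut's buffer here ...

    const REFERENCE_TIME frameDuration = 10000000 / 25; // 100-ns units
    REFERENCE_TIME tStart = m_frameNumber * frameDuration;
    REFERENCE_TIME tStop  = tStart + frameDuration;
    ++m_frameNumber;

    // Without valid, increasing timestamps the renderer may present one
    // frame and then stall or drop everything that follows.
    pOut->SetTime(&tStart, &tStop);

    // Alternatively, for a live source it can be simpler to clear the
    // timestamps so the renderer presents samples as they arrive:
    // pOut->SetTime(NULL, NULL);

    pOut->SetSyncPoint(TRUE);
    return S_OK;
}
```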

Related

Pass only the changed image rectangles to the ffmpeg libavcodec encoder

I am getting a list of small rectangle images that contain the parts of the image that have changed since the previous image. This comes from desktop capture with DirectX 11, which reports which parts of the desktop image have changed and the rectangles that cover them.
I am trying to figure out whether I can pass them to ffmpeg's libavcodec H.264 encoder. I looked into AVFrame and didn't see a way to specify which parts have actually changed since the previous image.
Is there a way to do this when passing an image to the ffmpeg codec context for encoding, i.e. to pass only the parts that changed since the previous frame? Doing so might reduce CPU usage, which matters because this is for a live stream.
I use the standard avcodec_send_frame to send a frame to the codec for encoding; it only takes an AVFrame and a codec context as parameters.
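As far as I know, libavcodec offers no dirty-rectangle input: avcodec_send_frame always consumes a complete frame. The usual workaround is to keep one persistent full frame, patch the changed rectangles into it, and send the whole thing; the encoder's inter-frame prediction then encodes unchanged regions very cheaply. A sketch under those assumptions (DirtyRect and the BGRA layout are made up for illustration):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical description of one changed region from the capture API.
struct DirtyRect { int x, y, w, h; const uint8_t *pixels; int stride; };

static void encode_update(AVCodecContext *ctx, AVFrame *frame,
                          const std::vector<DirtyRect> &rects, AVPacket *pkt)
{
    // The encoder may still hold a reference to the frame's buffers from a
    // previous send; this copies them if needed so patching is safe.
    av_frame_make_writable(frame);

    // Copy only the changed regions into the persistent frame
    // (assumes AV_PIX_FMT_BGRA, 4 bytes per pixel).
    for (const DirtyRect &r : rects)
        for (int row = 0; row < r.h; ++row)
            memcpy(frame->data[0] + (r.y + row) * frame->linesize[0] + r.x * 4,
                   r.pixels + row * r.stride, (size_t)r.w * 4);

    frame->pts += 1;                     // monotonically increasing pts
    avcodec_send_frame(ctx, frame);      // always a complete frame
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        // ... mux or transmit pkt here ...
        av_packet_unref(pkt);
    }
}
```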

Video was encoded with a new width + height along with the old one. Can I re-encode with just the old dimensions using ffmpeg?

I've got a video out of OBS that plays normally on my system if I open it with VLC, for example, but when I import it into my editor (Adobe Premiere) it gets weirdly cropped down. Inspecting the video's metadata shows that, for some reason, the video was encoded with a new width and height on top of the old one! Is there a way, using ffmpeg, to re-encode/transcode the video to a new file with only the original width and height?
Bonus question: would there be a way for me to extract the audio channels from my video as separate .mp3s? There are four audio channels on the video.
Every time you re-encode a video you will lose quality. Scaling the video up will not reintroduce details that were lost when it was scaled down.
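Assuming the problem is that the file carries two different sets of dimensions and the editor picks the wrong one, a straightforward fix is a re-encode with an explicit scale filter; the bonus question is a job for stream mapping. A sketch of both commands (the input name and the 1920x1080 are placeholders for your actual values):

```sh
# Re-encode the video at one fixed size; audio is copied untouched.
ffmpeg -i input.mkv -vf scale=1920:1080 -c:v libx264 -crf 18 -c:a copy fixed.mp4

# Extract the four audio streams as separate MP3 files.
ffmpeg -i input.mkv -map 0:a:0 track1.mp3 -map 0:a:1 track2.mp3 \
                    -map 0:a:2 track3.mp3 -map 0:a:3 track4.mp3
```

As the comment above notes, the video re-encode loses a little quality; a low CRF such as 18 keeps the loss small. Converting the audio to MP3 is likewise lossy.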

Media Foundation video decoding

I'm using Media Foundation and the IMFSampleGrabberSinkCallback to play back video files and render them to a texture. I am able to get video samples in the IMFSampleGrabberSinkCallback::OnProcessSample method, but those samples are compressed: I receive far less data per sample than my render target has pixels. According to the documentation, the media session should load any decoder that is needed (if available), but that does not seem to be the case. Even if I create the decoder and add it to the topology myself, the video samples are still compressed. Is there anything in particular I am missing here?
Thanks.
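For what it's worth, the sample grabber delivers whatever format its sink media type asks for: if that media type matches the source's compressed format, no decoder is inserted. Requesting an uncompressed subtype when creating the sink activate is what makes the topology loader pull in the decoder. A minimal sketch (error handling trimmed; RGB32 is just one possible choice):

```cpp
#include <mfapi.h>
#include <mfidl.h>

HRESULT CreateUncompressedGrabber(IMFSampleGrabberSinkCallback *pCallback,
                                  IMFActivate **ppActivate)
{
    IMFMediaType *pType = NULL;
    HRESULT hr = MFCreateMediaType(&pType);
    if (FAILED(hr)) return hr;

    // Ask for raw video; resolving the topology then inserts the decoder
    // (and any converters) needed to reach this format.
    pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);

    hr = MFCreateSampleGrabberSinkActivate(pType, pCallback, ppActivate);
    pType->Release();
    return hr;
}
```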

Extracting an image from H.264 sample data (Objective-C / Mac OS X)

Given a sample buffer of H.264, is there a way to extract the frame it represents as an image?
I'm using QTKit to capture video from a camera and using a QTCaptureMovieFileOutput as the output object.
I want something similar to the CVImageBufferRef that is passed as a parameter to the QTCaptureVideoPreviewOutput delegate method. For some reason, the file output doesn't contain the CVImageBufferRef.
What I do get is a QTSampleBuffer which, since I've set it in the compression options, contains an H.264 sample.
I have seen that on the iPhone, CoreMedia and AVFoundation can be used to create a CVImageBufferRef from the given CMSampleBufferRef (which, I imagine, is as close to the QTSampleBuffer as I'll be able to get), but this is the Mac, not the iPhone.
Neither CoreMedia nor AVFoundation is available on the Mac, and I can't see any way to accomplish the same task.
What I need is an image (whether it be a CVImageBufferRef, CIImage or NSImage doesn't matter) from the current frame of the H.264 sample that is given to me by the Output object's call back.
Extended info (from the comments below)
I have posted a related question that focuses on the original issue - attempting to simply play a stream of video samples using QTKit: Playing a stream of video data using QTKit on Mac OS X
That appears not to be possible, which is why I've moved on to trying to obtain frames as images and creating an appearance of video by scaling, compressing and converting the image data from CVImageBufferRef to NSImage and sending it to a peer over the network.
I can use the QTCaptureVideoPreviewOutput (or QTCaptureDecompressedVideoOutput) to get uncompressed frame images in the form of CVImageBufferRef.
However, these image references need compressing, scaling and converting into NSImages before they're any use to me. Hence the attempt to get an already scaled and compressed frame from the framework using the QTCaptureMovieFileOutput (which allows a compression format and image size to be set before starting the capture), saving me from having to do the expensive compression, scale and conversion operations, which kill CPU.
Does the Creating a Single-Frame Grabbing Application section of the QTKit Application Programming Guide not work for you in this instance?

Drawing video with text on top

I am working on an application and I have a problem I just can't seem to find a solution for. The application is written in VC++. What I need to do is display a YUV video feed with text on top of it.
Right now it works correctly by drawing the text in the OnPaint method using GDI and the video on a DirectDraw overlay. I need to get rid of the overlay because it causes too many problems: it won't work on some video cards, on Vista, on Windows 7, etc.
I can't figure out how to accomplish the same thing in a more compatible way. I can draw the video using DirectDraw with a back buffer and copy it to the primary buffer just fine. The issue is that text drawn with GDI flickers because of how often the video is refreshed. I would really like to keep the text-drawing code intact if possible, since it works well.
Is there a way to draw the text directly to a DirectDraw buffer or memory buffer or something and then blt it to the back buffer? Should I be looking at another method altogether? The two important OSes are XP and 7. If anyone has any ideas just let me know and I will test them out. Thanks.
Take a look at DirectShow and the Ticker sample on microsoft.com:
DirectShow Ticker sample
This sample uses the Video Mixing Renderer to blend video and text. It uses the IVMRMixerBitmap9 interface to blend text onto the bottom portion of the video window.
DirectShow is for building filter graphs that play back audio or video streams, adding different filters for different effects and for manipulating video and audio samples.
Instead of using the Video Mixing Renderer of DirectShow, you can also use the ISampleGrabber interface. The advantage is that it is a filter that can be used with other renderers as well, for example when the video is not shown on screen but streamed over the network or dumped to a file.
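To make the ticker approach concrete, here is a rough sketch of the GDI-plus-VMR-9 blend: draw your text into a bitmap selected into a memory DC, then hand that DC to the renderer through IVMRMixerBitmap9::SetAlphaBitmap (pVmr, hbm and the coordinates below are placeholders; the VMR copies the bitmap during the call, so the DC can be released afterwards):

```cpp
#include <dshow.h>
#include <d3d9.h>
#include <vmr9.h>

HRESULT OverlayText(IBaseFilter *pVmr, HBITMAP hbm, int width, int height)
{
    IVMRMixerBitmap9 *pBmp = NULL;
    HRESULT hr = pVmr->QueryInterface(IID_IVMRMixerBitmap9, (void**)&pBmp);
    if (FAILED(hr)) return hr;

    // Select the pre-drawn text bitmap into a memory DC.
    HDC hdcScreen = GetDC(NULL);
    HDC hdcMem = CreateCompatibleDC(hdcScreen);
    HGDIOBJ hOld = SelectObject(hdcMem, hbm);

    VMR9AlphaBitmap ab = {};
    ab.dwFlags = VMR9AlphaBitmap_hDC;  // bitmap comes from a GDI DC
    ab.hdc = hdcMem;
    SetRect(&ab.rSrc, 0, 0, width, height);
    ab.rDest.left = 0.0f;              // destination is in normalized
    ab.rDest.top = 0.75f;              // coordinates: bottom quarter of
    ab.rDest.right = 1.0f;             // the video window
    ab.rDest.bottom = 1.0f;
    ab.fAlpha = 0.8f;                  // 80% opaque text overlay

    hr = pBmp->SetAlphaBitmap(&ab);

    SelectObject(hdcMem, hOld);
    DeleteDC(hdcMem);
    ReleaseDC(NULL, hdcScreen);
    pBmp->Release();
    return hr;
}
```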
