how to set video preview size of AVCaptureDevice? - macos

I'm working on a camera application with AVFoundation on Mac OS X 10.7.
I used [session setSessionPreset:] to set the video resolution, but it only affects the output movie file. My application needs to switch the preview between different resolutions, from low to high; some of the cameras I'm using output a high resolution by default, such as 2048x1536, which is not suitable for preview.
How can I achieve this?
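Not part of the original question, but one approach worth sketching: instead of relying on the session preset, pick a lower-resolution format on the device itself. This is only a sketch; it assumes the cameras expose multiple formats through AVCaptureDevice, the 1280-pixel cut-off in the example is arbitrary, and error handling is abbreviated.

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Sketch: choose the largest device format no wider than maxWidth and make it
// active, so the preview does not have to run at the camera's 2048x1536 default.
static void SelectPreviewFormat(AVCaptureSession *session, AVCaptureDevice *device, int32_t maxWidth)
{
    AVCaptureDeviceFormat *chosen = nil;
    for (AVCaptureDeviceFormat *format in device.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        if (dims.width <= maxWidth) {
            chosen = format;   // formats are typically listed smallest to largest
        }
    }
    NSError *error = nil;
    if (chosen && [device lockForConfiguration:&error]) {
        session.sessionPreset = AVCaptureSessionPresetInputPriority; // let the device format win over the preset
        device.activeFormat = chosen;
        [device unlockForConfiguration];
    }
}

Whether the session preset or the device's activeFormat takes precedence can vary by OS version, so treat this as a starting point rather than a guarantee, e.g. SelectPreviewFormat(session, device, 1280) for preview and a larger limit when recording.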

Related

Why does a video larger than 8176 x 4088 created using AVFoundation come out with a uniform dark green color on my Mac?

When I use AVFoundation to create an 8K (7680 x 4320) MP4 with frames directly drawn onto pixel buffers obtained from the pixel buffer pool, it works with kCVPixelFormatType_32ARGB.
However, if I use kCVPixelFormatType_32BGRA, the entire video has a uniform dark green color instead of the actual contents. This problem occurs for resolutions above 8176 x 4088.
What could be causing this problem?
AVAssetWriter.h in SDK 10.15 and in SDK 11.3 says:
The H.264 encoder natively supports ... If you need to work in the RGB domain then kCVPixelFormatType_32BGRA is recommended on iOS and kCVPixelFormatType_32ARGB is recommended on OSX.
AVAssetWriter.h in SDK 12.3 however says:
The H.264 and HEVC encoders natively support ... If you need to work in the RGB domain then kCVPixelFormatType_32BGRA is recommended on iOS and macOS.
AVAssetWriter.h on all three SDKs however also says:
If you are working with high bit depth sources the following yuv pixel formats are recommended when encoding to ProRes: kCVPixelFormatType_4444AYpCbCr16, kCVPixelFormatType_422YpCbCr16, and kCVPixelFormatType_422YpCbCr10. When working in the RGB domain kCVPixelFormatType_64ARGB is recommended.
Whatever the recommendations, the prelude below states that all of them are just for optimal performance, not for error-free encoding!
For optimal performance the format of the pixel buffer should match one of the native formats supported by the selected video encoder. Below are some recommendations:
Now, Keynote movie export with H.264 compression also results in the same problem with the same size limits on my Mid-2012 15-inch Retina MacBook Pro running Catalina (which supports up to Keynote 11.1). This problem doesn't occur on a later Mac running Monterey, where the latest version 12.2 of Keynote is supported.
I have not included code because Keynote movie export is a simple means to reproduce and understand the problem.
My motivation for asking this question is to obtain clarity on:
What is the right pixel format to use for MP4 creation?
What are safe size limits under which MP4 creation will be problem free?
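Not from the question itself, but for readers who want to see where that choice is made, here is a minimal setup sketch, assuming an already-configured AVAssetWriterInput for H.264; the pixel format constant in the attributes dictionary is exactly the value the question is asking about.

#import <AVFoundation/AVFoundation.h>

// Sketch: the pixel buffer pool the question mentions is created from these
// attributes, so this is where 32ARGB vs. 32BGRA enters the pipeline.
static AVAssetWriterInputPixelBufferAdaptor *
MakePixelBufferAdaptor(AVAssetWriterInput *writerInput, int width, int height)
{
    NSDictionary *attributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA), // or kCVPixelFormatType_32ARGB
        (id)kCVPixelBufferWidthKey           : @(width),
        (id)kCVPixelBufferHeightKey          : @(height),
    };
    return [[AVAssetWriterInputPixelBufferAdaptor alloc]
                initWithAssetWriterInput:writerInput
             sourcePixelBufferAttributes:attributes];
}

Frames are then drawn into buffers taken from the adaptor's pixelBufferPool and appended with appendPixelBuffer:withPresentationTime:, as the question describes.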

Get/set video resolution when capturing image

I'm capturing images from my webcam with some code that is mainly based on this: Using the Sample Grabber.
Here I only get the default resolution of 640x480, while the connected camera is capable of more (other capture applications show a higher resolution).
So, how can I:
retrieve the list of available resolutions
set one of these resolutions so that the captured image comes with it?
The IAMStreamConfig interface lists capabilities and lets you select the resolution of interest. Enumerating media types on a not-yet-connected pin will list the specific media types (and resolutions) the camera advertises as supported.
More on this (and links from there):
Video recording resolution using DirectShow
Video Capture output always in 320x240 despite changing resolution

How to set the alpha mode for a QTMedia / QTTrack using QTKit

I'm trying to add subtitles to an existing movie, and everything seems to work as expected except for the background of the subtitles track, which should be transparent.
MediaHandler media = GetMediaHandler([[subtitlesTrack media] quickTimeMedia]);
MediaSetGraphicsMode(media, graphicsModeStraightAlpha, NULL);
I have already tried the above code found here but I was not able to use the GetMediaHandler and MediaSetGraphicsMode functions. Maybe I'm missing some includes.
I would prefer doing it using only the QTKit framework if possible.
If you're using this code as an example in a 32-bit Mac application, to get alpha transparency to work normally, the second parameter to MediaSetGraphicsMode() must be graphicsModePreBlackAlpha.
If you use graphicsModeStraightAlpha, the video frames after compression into the QuickTime media won't have an alpha channel, at least under Mac OS 10.10.5.
You also need to be sure to use a video codec that supports alpha channels - not all of them do.
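Putting the answer together, the corrected call from the question would look roughly like this. This is a sketch: the QuickTime umbrella header shown is an assumption about the missing includes, and the API is 32-bit only.

#import <QuickTime/QuickTime.h>   // declares GetMediaHandler() and MediaSetGraphicsMode()

MediaHandler media = GetMediaHandler([[subtitlesTrack media] quickTimeMedia]);
// graphicsModePreBlackAlpha instead of graphicsModeStraightAlpha, so the alpha
// channel survives compression into the QuickTime media (per the answer above)
MediaSetGraphicsMode(media, graphicsModePreBlackAlpha, NULL);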

Does AVFoundation (OS X) support transparent playback of videos with alpha channel?

Is there a way of playing back video files that include an alpha channel using AVFoundation? I would like to add an AVPlayerLayer as a sublayer to my layer tree, preserving transparency so that it works as an overlay.
Or is it alternatively possible to use a second greyscale video as an alpha mask for the main movie?
As far as I know, AVFoundation on OS X is only able to play H.264. If you want to get it working with older QuickTime movies that contain an alpha channel, it might be possible. A short-term fix would be to use a series of PNG images instead of a movie, but that would not involve AVFoundation. You could also read about my library, which could be used for this kind of thing, in this answer. The answer is talking about iOS, but the library can be used with either iOS or OS X.
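For reference, the overlay setup the question describes looks roughly like this; a minimal sketch assuming a layer-backed NSView named hostView and a local movieURL (both hypothetical names). Whether any transparency actually shows through depends on the codec carrying an alpha channel, as the answer above notes.

#import <AVFoundation/AVFoundation.h>

AVPlayer *player = [AVPlayer playerWithURL:movieURL];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = hostView.layer.bounds;                     // cover the host layer
playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
[hostView.layer addSublayer:playerLayer];                      // plays as an overlay on the layer tree
[player play];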

Extracting an image from H.264 sample data (Objective-C / Mac OS X)

Given a sample buffer of H.264, is there a way to extract the frame it represents as an image?
I'm using QTKit to capture video from a camera and using a QTCaptureMovieFileOutput as the output object.
I want something similar to the CVImageBufferRef that is passed as a parameter to the QTCaptureVideoPreviewOutput delegate method. For some reason, the file output doesn't contain the CVImageBufferRef.
What I do get is a QTSampleBuffer which, since I've set it in the compression options, contains an H.264 sample.
I have seen that on the iPhone, CoreMedia and AVFoundation can be used to create a CVImageBufferRef from a given CMSampleBufferRef (which, I imagine, is as close to the QTSampleBuffer as I'll be able to get) - but this is the Mac, not the iPhone.
Neither CoreMedia nor AVFoundation is available on the Mac, and I can't see any way to accomplish the same task.
What I need is an image (whether it be a CVImageBufferRef, CIImage or NSImage doesn't matter) from the current frame of the H.264 sample that is given to me by the Output object's call back.
Extended info (from the comments below)
I have posted a related question that focuses on the original issue - attempting to simply play a stream of video samples using QTKit: Playing a stream of video data using QTKit on Mac OS X
It appears not to be possible, which is why I've moved on to trying to obtain frames as images and create the appearance of video by scaling, compressing, and converting the image data from CVImageBufferRef to NSImage and sending it to a peer over the network.
I can use QTCaptureVideoPreviewOutput (or the decompressed equivalent) to get uncompressed frame images in the form of CVImageBufferRef.
However, these image references need compressing, scaling, and converting into NSImages before they're of any use to me, hence the attempt to get an already scaled and compressed frame from the framework using QTCaptureMovieFileOutput (which allows a compression format and image size to be set before starting the capture), saving me from having to do the expensive compression, scaling, and conversion operations, which kill the CPU.
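For context, the uncompressed path described above looks roughly like this: a sketch of the QTCaptureDecompressedVideoOutput delegate converting each CVImageBufferRef into an NSImage via Core Image. This per-frame conversion is exactly the work the question is trying to push into QTCaptureMovieFileOutput instead.

#import <Cocoa/Cocoa.h>
#import <QTKit/QTKit.h>
#import <QuartzCore/QuartzCore.h>

// Delegate callback: one uncompressed frame per call.
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    CIImage *ciImage = [CIImage imageWithCVImageBuffer:videoFrame];
    NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
    NSImage *image = [[NSImage alloc] initWithSize:rep.size];
    [image addRepresentation:rep];
    // ... scale / compress `image` and send it over the network ...
}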
Does the Creating a Single-Frame Grabbing Application section of the QTKit Application Programming Guide not work for you in this instance?
