I'm capturing images from my webcam with some code that is mainly based on this: Using the Sample Grabber.
Here I only get the default resolution of 640x480, while the connected camera supports more (other capture applications show a bigger resolution).
So, how can I:
retrieve the list of available resolutions
set one of these resolutions so that the captured image uses it?
The IAMStreamConfig interface lists the capabilities and lets you select the resolution of interest. Enumerating media types on a pin that is not yet connected will list the specific media types (and resolutions) the camera advertises as supported.
More on this (and links from there):
Video recording resolution using DirectShow
Video Capture output always in 320x240 despite changing resolution
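For illustration, a minimal C++ sketch of how IAMStreamConfig could be used to enumerate the advertised capture formats and select one. It assumes a graph built with ICaptureGraphBuilder2 and a capture filter already added (the names pBuilder, pCapture and SelectResolution are placeholders), and error handling is trimmed:

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Sketch: enumerate the capture pin's advertised formats and pick one.
// `pBuilder` is an ICaptureGraphBuilder2 and `pCapture` the camera's capture
// filter, both assumed to be set up already.
HRESULT SelectResolution(ICaptureGraphBuilder2* pBuilder, IBaseFilter* pCapture,
                         long wantedWidth, long wantedHeight)
{
    IAMStreamConfig* pConfig = nullptr;
    HRESULT hr = pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                                         pCapture, IID_IAMStreamConfig,
                                         reinterpret_cast<void**>(&pConfig));
    if (FAILED(hr))
        return hr;

    int count = 0, size = 0;
    pConfig->GetNumberOfCapabilities(&count, &size);

    for (int i = 0; i < count; ++i) {
        VIDEO_STREAM_CONFIG_CAPS caps;
        AM_MEDIA_TYPE* pmt = nullptr;
        if (size == sizeof(caps) &&
            SUCCEEDED(pConfig->GetStreamCaps(i, &pmt, reinterpret_cast<BYTE*>(&caps))))
        {
            if (pmt->formattype == FORMAT_VideoInfo) {
                auto* vih = reinterpret_cast<VIDEOINFOHEADER*>(pmt->pbFormat);
                if (vih->bmiHeader.biWidth == wantedWidth &&
                    vih->bmiHeader.biHeight == wantedHeight)
                {
                    hr = pConfig->SetFormat(pmt); // must happen before the pin connects
                }
            }
            // Free the media type returned by GetStreamCaps.
            if (pmt->cbFormat) CoTaskMemFree(pmt->pbFormat);
            if (pmt->pUnk) pmt->pUnk->Release();
            CoTaskMemFree(pmt);
        }
    }
    pConfig->Release();
    return hr;
}
```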
I am studying source identification of video files, especially those from smartphones.
I have learned that the avcC box in .mp4 video files holds the encoding options (H.264) which the decoder must know when processing the encoded stream.
I guess most smartphones use a customized FFmpeg to encode the raw stream. I want to know whether the values in the avcC box are affected only by the version of FFmpeg (if a non-customized version is used).
I didn't delve into this, but I think libavcodec.so in FFmpeg fills in the values of the avcC box when encoding (is this right?).
So what I want to ask is: if two different smartphones use the same libavcodec.so (even if the other .so files, the .apk used for recording, etc. are different) and a video of the same resolution is filmed on each smartphone, are the values in the avcC box the same?
I think this question may be equivalent to "are the values in the avcC box affected by other FFmpeg libraries or by other layers of the overall Android framework?"
There is one more question: is there any case where two videos of the same resolution from the same smartphone have different values in the avcC box? (I am thinking of differences in encoding options originating from low-battery mode, the execution conditions of other apps, etc., and whether any core developer has customized FFmpeg for that.)
It would be a great help if anyone could let me know the answer!
The avcC box contains the out-of-band extradata for the AVC stream. It stores much more than just the resolution: profile, level, entropy-coding mode, color space information, etc. This is a standard; FFmpeg just implements that standard. iPhones, for example, produce perfectly valid .mp4 files and do not use libav*/FFmpeg. See exactly what is in the avcC box here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
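For reference, a minimal sketch of walking the AVCDecoderConfigurationRecord (the payload of the avcC box), following the layout in ISO/IEC 14496-15. The function name is hypothetical and bounds checking is kept to a minimum:

```cpp
#include <cstdint>
#include <cstdio>

// Minimal walk of an AVCDecoderConfigurationRecord, assuming `data` points
// just past the avcC box header. Layout per ISO/IEC 14496-15.
void DumpAvcC(const uint8_t* data, size_t size)
{
    if (size < 7) return;                        // record is at least 7 bytes
    size_t pos = 0;
    uint8_t version = data[pos++];               // configurationVersion, always 1
    uint8_t profile = data[pos++];               // AVCProfileIndication
    uint8_t compat  = data[pos++];               // profile_compatibility
    uint8_t level   = data[pos++];               // AVCLevelIndication
    uint8_t nalLen  = (data[pos++] & 0x03) + 1;  // lengthSizeMinusOne + 1
    uint8_t numSps  = data[pos++] & 0x1F;        // numOfSequenceParameterSets

    std::printf("version=%u profile=%u compat=%u level=%u nalLengthSize=%u\n",
                unsigned(version), unsigned(profile), unsigned(compat),
                unsigned(level), unsigned(nalLen));

    for (uint8_t i = 0; i < numSps && pos + 2 <= size; ++i) {
        uint16_t spsLen = uint16_t((data[pos] << 8) | data[pos + 1]);
        pos += 2;
        std::printf("SPS #%u: %u bytes\n", unsigned(i), unsigned(spsLen));
        pos += spsLen;                           // skip the SPS NAL unit itself
    }
    if (pos >= size) return;
    uint8_t numPps = data[pos++];                // numOfPictureParameterSets
    for (uint8_t i = 0; i < numPps && pos + 2 <= size; ++i) {
        uint16_t ppsLen = uint16_t((data[pos] << 8) | data[pos + 1]);
        pos += 2;
        std::printf("PPS #%u: %u bytes\n", unsigned(i), unsigned(ppsLen));
        pos += ppsLen;                           // skip the PPS NAL unit itself
    }
}
```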
I'm using Media Foundation and the IMFSampleGrabberSinkCallback to play back video files and render them to a texture. I am able to get video samples in the IMFSampleGrabberSinkCallback::OnProcessSample method, but those samples are compressed; the sample buffers contain far less data than my render target has pixels. According to this, the media session should load any decoder that is needed (if available), but that does not seem to be the case. Even if I create the decoder and add it to the topology myself, the video samples are still compressed. Is there anything in particular I am missing here?
Thanks.
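One thing worth double-checking (a sketch based on how the sample grabber sink generally behaves, not on the actual code in question): the media type passed to MFCreateSampleGrabberSinkActivate is what the sink advertises to the topology loader, and if it names a compressed subtype the loader has no reason to insert a decoder. Requesting an uncompressed subtype, e.g. RGB32, usually makes the session add the decoder/converter for you:

```cpp
#include <mfapi.h>
#include <mfidl.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfuuid.lib")

// Sketch: create the sample grabber sink activate with an *uncompressed*
// media type so the topology loader inserts whatever decoder is needed.
// `pCallback` is the caller's IMFSampleGrabberSinkCallback implementation.
HRESULT CreateRgbGrabberActivate(IMFSampleGrabberSinkCallback* pCallback,
                                 IMFActivate** ppActivate)
{
    IMFMediaType* pType = nullptr;
    HRESULT hr = MFCreateMediaType(&pType);
    if (SUCCEEDED(hr))
        hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    if (SUCCEEDED(hr))
        hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32); // uncompressed
    if (SUCCEEDED(hr))
        hr = MFCreateSampleGrabberSinkActivate(pType, pCallback, ppActivate);
    if (pType) pType->Release();
    return hr;
}
```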
I have various 1080p QuickTime files using H.264 and MPEG-4 codecs, created via QuickTime and Handbrake. They don't seem to have the NCLC atom. I want to know which transfer function to use to generate RGB video.
Under the Finder inspector, some of the files show HD (1-1-1); others don't show any info. Table 1 of TN2227 shows that HD video should use ITU-R 709 and SD video ITU-R 601. How can I find out whether QuickTime decides it's 1-1-1 or considers the file HD?
Is there a function to find out, or do I have to use something like if ((number_of_rows > 576) and (aspect > 3:2)) then HD = true?
According to TN2227,
Important: Media without a ‘nclc’ tag will be color managed by QuickTime X as if it were created in the SMPTE-C color space.
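If no such function turns out to exist, here is a tiny sketch of the fallback heuristic from the question; the thresholds are assumptions taken from the question itself, not a rule TN2227 spells out:

```cpp
// Heuristic sketch: treat anything larger than SD (576 active lines) with an
// aspect ratio wider than 3:2 as HD and use Rec. 709, otherwise fall back to
// Rec. 601 (SMPTE-C), matching the TN2227 default for media without 'nclc'.
enum class TransferFunction { Rec601, Rec709 };

TransferFunction GuessTransferFunction(int width, int height)
{
    const bool hd = height > 576 && (width * 2) > (height * 3); // aspect > 3:2
    return hd ? TransferFunction::Rec709 : TransferFunction::Rec601;
}
```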
We are using a DirectShow interface to capture images from a video stream. These images are presented in a fixed-size window.
Once we have captured an image, we store it as a bitmap. Downstream we have the ability to add annotation to the image, for example letters in a fixed-size font.
In one of our desktop environments, the annotation has started appearing at half the size that it normally appears at. This implies that the image we are merging the text onto has dimensions that are maybe twice as large.
The system this happens on is a shared resource: some unknown individual has installed software on it that differs from our baseline.
We have two approaches. The first is to re-image the system to get our default text-size behaviour back. The second is to figure out how DirectShow manages image dimensions so that we can set the scaling on the image correctly.
A survey of the DirectShow literature indicates that the above is not a trivial task. The original work was done by another team that did not document what they did. Can anybody point us in the direction of which DirectShow object we need to deal with to properly size the sampled image?
DirectShow - as a framework - does not deal with resolutions directly. Your video source (such as capture hardware) is capable of providing a video feed in certain resolutions, which you can possibly change. You normally use IAMStreamConfig, as described in Configure the Video Output Format, in order to choose the capture resolution.
Sometimes you cannot affect the capture resolution and you need to resample the image from whatever dimensions you captured it in. There is no stock filter for this; however, Media Foundation provides a suitable Video Resizer DSP which does most of the task. Unfortunately it does not fit the DirectShow pipeline smoothly, so you need a fitting and/or custom filter for resizing.
When filters connect in DirectShow, they agree on an AM_MEDIA_TYPE. Here you will find a VIDEOINFOHEADER with a BITMAPINFOHEADER, and this header has a biWidth and biHeight.
Try to build the FilterGraph manually (with GraphEdit or GraphStudioNext) and inspect these fields.
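A sketch of that inspection done in code rather than in GraphEdit, assuming you hold an IPin* for an already connected output pin (the function name is a placeholder):

```cpp
#include <dshow.h>
#include <cstdio>
#pragma comment(lib, "strmiids.lib")

// Sketch: read the negotiated frame dimensions off an already connected pin.
HRESULT PrintNegotiatedSize(IPin* pPin)
{
    AM_MEDIA_TYPE mt = {};
    HRESULT hr = pPin->ConnectionMediaType(&mt); // fails if the pin is not connected
    if (FAILED(hr))
        return hr;

    if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER)) {
        const VIDEOINFOHEADER* vih = reinterpret_cast<VIDEOINFOHEADER*>(mt.pbFormat);
        std::printf("Negotiated: %ld x %ld\n",
                    vih->bmiHeader.biWidth, vih->bmiHeader.biHeight);
    }

    // Free the media type contents (what FreeMediaType from the base classes does).
    if (mt.cbFormat) CoTaskMemFree(mt.pbFormat);
    if (mt.pUnk) mt.pUnk->Release();
    return S_OK;
}
```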
Given a sample buffer of H.264, is there a way to extract the frame it represents as an image?
I'm using QTKit to capture video from a camera and using a QTCaptureMovieFileOutput as the output object.
I want something similar to the CVImageBufferRef that is passed as a parameter to the QTCaptureVideoPreviewOutput delegate method. For some reason, the file output doesn't contain the CVImageBufferRef.
What I do get is a QTSampleBuffer which, since I've set it in the compression options, contains an H.264 sample.
I have seen that on the iPhone, CoreMedia and AVFoundation can be used to create a CVImageBufferRef from a given CMSampleBufferRef (which, I imagine, is as close to the QTSampleBuffer as I'll be able to get) - but this is the Mac, not the iPhone.
Neither CoreMedia nor AVFoundation is available on the Mac, and I can't see any way to accomplish the same task.
What I need is an image (whether it be a CVImageBufferRef, CIImage or NSImage doesn't matter) from the current frame of the H.264 sample that is given to me by the output object's callback.
Extended info (from the comments below)
I have posted a related question that focuses on the original issue - attempting to simply play a stream of video samples using QTKit: Playing a stream of video data using QTKit on Mac OS X
It appears not to be possible, which is why I've moved on to trying to obtain frames as images and create the appearance of video by scaling, compressing and converting the image data from a CVImageBufferRef to an NSImage and sending it to a peer over the network.
I can use the QTCaptureVideoPreviewOutput (or the decompressed video output) to get uncompressed frame images in the form of a CVImageBufferRef.
However, these image references need compressing, scaling and converting into NSImages before they're of any use to me, hence the attempt to get an already scaled and compressed frame from the framework using the QTCaptureMovieFileOutput (which allows a compression and image size to be set before starting the capture), saving me from having to do the expensive compression, scaling and conversion operations, which kill the CPU.
Does the Creating a Single-Frame Grabbing Application section of the QTKit Application Programming Guide not work for you in this instance?