How do I create a stereoscopic OpenGL view on the Mac?

Although Apple's documentation clearly states that stereoscopic views are supported on the Mac via the kCGLPFAStereo and NSOpenGLPFAStereo pixel format attributes, I've been unable to create anything resembling a stereoscopic pixel format object.
I've been able to create stereoscopic output that 3DTV displays recognize by manually specifying the format as side-by-side, but it should be recognized automatically by the hardware over HDMI.
How does one create a stereoscopic display in Cocoa or Core OpenGL with NSOpenGLPFAStereo or kCGLPFAStereo?
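For reference, here is a minimal sketch of what the question is attempting, written against the C-level CGL API (the NSOpenGLPixelFormat route is analogous, with NSOpenGLPFAStereo in the attribute array). This is an illustration rather than a known-working configuration: on GPU/display/driver combinations that don't expose quad-buffered stereo, CGLChoosePixelFormat simply returns zero matching formats, which is the failure described above.

// Attempt to obtain a quad-buffered stereo pixel format with CGL.
// Compile with: clang++ stereo.cpp -framework OpenGL
#include <OpenGL/OpenGL.h>
#include <cstdio>

int main() {
    const CGLPixelFormatAttribute attrs[] = {
        kCGLPFAStereo,                              // request left/right back buffers
        kCGLPFADoubleBuffer,
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
        kCGLPFAAccelerated,
        (CGLPixelFormatAttribute)0                  // terminator
    };

    CGLPixelFormatObj pixelFormat = nullptr;
    GLint numFormats = 0;
    CGLError err = CGLChoosePixelFormat(attrs, &pixelFormat, &numFormats);
    if (err != kCGLNoError || pixelFormat == nullptr || numFormats == 0) {
        // No stereo-capable format offered by the driver.
        std::printf("No stereo pixel format (error %d, %d formats)\n", (int)err, (int)numFormats);
        return 1;
    }

    CGLContextObj context = nullptr;
    CGLCreateContext(pixelFormat, nullptr, &context);
    CGLDestroyPixelFormat(pixelFormat);

    // With a stereo context, each eye is selected before drawing:
    //   glDrawBuffer(GL_BACK_LEFT);  /* render left eye  */
    //   glDrawBuffer(GL_BACK_RIGHT); /* render right eye */

    CGLDestroyContext(context);
    return 0;
}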

Related

How to convert YUV(UYVY) to RGB using OpenGL ES

We have a custom camera driver for an ARM-based SoC running embedded Linux.
I am using the Wayland APIs to show the camera frames on the display. For that I currently use OpenCV-based color conversion, which takes too much CPU. (I cannot display UYVY directly because of some limitations, so color conversion is required.)
cv::cvtColor(myuv, mrgb, cv::COLOR_YUV2BGRA_UYVY);
Now I want to use the OpenGL ES APIs for the color conversion and display the converted RGB frame.
The second task is to get the RGB buffer back to the CPU for further processing of the camera frame.
I am pretty new to OpenGL ES/EGL/Wayland, so please suggest any good example code (C/C++) based on the OpenGL ES, EGL, and Wayland combination.
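Since no example is linked here, a minimal sketch of the usual GPU approach: upload each UYVY frame as a GL_RGBA/GL_UNSIGNED_BYTE texture that is width/2 texels wide (so one texel holds U, Y0, V, Y1, sampled with GL_NEAREST), draw a full-screen quad with a fragment shader that does the YUV-to-RGB math, and read the result back with glReadPixels for the CPU-side step. The uniform/varying names are illustrative, not from any particular Wayland/EGL sample, and the shader assumes BT.601 video-range data; the EGL/Wayland surface setup is not shown.

// OpenGL ES 2.0 fragment shader for UYVY -> RGB, kept in a C++ source string.
static const char *kUyvyFragmentShader = R"GLSL(
precision mediump float;
varying vec2 vTexCoord;          // 0..1 across the full-width image (from the vertex shader)
uniform sampler2D uTexture;      // UYVY packed into RGBA texels, width/2 texels wide
uniform float uWidth;            // original image width in pixels

void main() {
    vec4 uyvy = texture2D(uTexture, vTexCoord);              // r=U, g=Y0, b=V, a=Y1
    float x = vTexCoord.x * uWidth;
    float y = (mod(floor(x), 2.0) < 0.5) ? uyvy.g : uyvy.a;  // even pixel -> Y0, odd -> Y1
    float u = uyvy.r - 0.5;
    float v = uyvy.b - 0.5;

    // BT.601 video-range YUV -> RGB
    float yc = 1.164 * (y - 0.0625);
    gl_FragColor = vec4(yc + 1.596 * v,
                        yc - 0.813 * v - 0.391 * u,
                        yc + 2.018 * u,
                        1.0);
}
)GLSL";

Render with this shader into a framebuffer object at the camera resolution and call glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer) to get the converted frame back to the CPU; the same textured quad can be drawn to the Wayland EGL surface for display.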

Better thumbnail creation of raw images

I'm building a web application (RoR) that manages images in raw image format. I need to create thumbnail/web versions of these images to be displayed on the site. Currently I'm using ImageMagick, which delegates to dcraw to produce the JPEG thumbnail. The problem I'm running into is that the thumbnail deviates from the look of the original: the image gets darker and the white balance is sometimes heavily shifted.
I'm assuming that the raw format's default settings can't be read by dcraw, and thus it's left guessing how to parameterize the raw conversion. I can play around with customizing these settings, but it seems getting one image right pushes others further off the mark.
Is there a better way to do this so the result more closely mimics what I would see in a raw viewer such as Photoshop, or even Mac OS X Preview? Given that Mac OS X supports a variety of digital camera raw formats, is there any way to utilize the OS's ability to render preview images (especially considering that result is what is expected)?
The raw images that I'm using are 3FRs and fffs (both from Hasselblad).
I can post samples if people are interested.
Thanks
Look at "sips" and "Resizing images using the command line" to get you started.
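If the conversion can run on a Mac, another way to tap the OS's own RAW rendering (the same engine Preview uses) is the ImageIO C API rather than sips. A rough sketch, callable from a small helper tool invoked by the Rails app; the paths, pixel size, and the helper tool itself are assumptions, not part of the original setup:

// Render a JPEG thumbnail from a RAW file using the OS's own decoding.
// Compile with: clang++ thumb.cpp -framework ImageIO -framework CoreGraphics -framework CoreFoundation
#include <ImageIO/ImageIO.h>
#include <CoreFoundation/CoreFoundation.h>

bool WriteRawThumbnail(CFURLRef rawURL, CFURLRef jpegURL, int maxPixelSize) {
    CGImageSourceRef source = CGImageSourceCreateWithURL(rawURL, NULL);
    if (!source) return false;

    // Ask ImageIO to render a thumbnail from the full image, honoring the
    // camera's embedded orientation, capped at maxPixelSize on the long edge.
    CFNumberRef size = CFNumberCreate(NULL, kCFNumberIntType, &maxPixelSize);
    const void *keys[]   = { kCGImageSourceCreateThumbnailFromImageAlways,
                             kCGImageSourceCreateThumbnailWithTransform,
                             kCGImageSourceThumbnailMaxPixelSize };
    const void *values[] = { kCFBooleanTrue, kCFBooleanTrue, size };
    CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, 3,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, options);
    CFRelease(options);
    CFRelease(size);
    CFRelease(source);
    if (!thumb) return false;

    // Write the thumbnail out as a JPEG.
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL(jpegURL, CFSTR("public.jpeg"), 1, NULL);
    bool ok = false;
    if (dest) {
        CGImageDestinationAddImage(dest, thumb, NULL);
        ok = CGImageDestinationFinalize(dest);
        CFRelease(dest);
    }
    CGImageRelease(thumb);
    return ok;
}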

Where does directshow get image dimensions from?

We are using a DirectShow interface to capture images from a video stream. These images are presented in a fixed-size window.
Once we have captured an image we store it as a bitmap. Downstream we have the ability to add annotation to the image, for example letters in a fixed size font.
In one of our desktop environments, the annotation has started appearing at half the size that it normally appears at. This implies that the image we are merging the text onto has dimensions that are maybe twice as large.
The system this happens on is a shared resource; some unknown individual has installed software on it that differs from our baseline.
We have two approaches: the first is to reimage the system to get our default text-size behaviour back; the second is to figure out how DirectShow manages image dimensions so that we can set the scaling on the image correctly.
A survey of the DirectShow literature indicates that this is not a trivial task. The original work was done by another team that did not document what they did. Can anybody point us in the direction of which DirectShow object we should deal with to properly size the sampled image?
DirectShow, as a framework, does not deal with resolutions directly. Your video source (such as capture hardware) provides its video feed at a certain resolution, which you can sometimes change. You normally use IAMStreamConfig, as described in Configure the Video Output Format, to choose the capture resolution.
Sometimes you cannot affect the capture resolution and you need to resample the image from whatever dimensions you captured it at. There is no stock filter for this; however, Media Foundation provides a suitable Video Resizer DSP which does most of the task. Unfortunately it does not fit into a DirectShow pipeline smoothly, so you need fitting and/or a custom filter for resizing.
When filters connect in DirectShow, they agree on an AM_MEDIA_TYPE. There you will find a VIDEOINFOHEADER containing a BITMAPINFOHEADER, and this header has the biWidth and biHeight of the image.
Try to build the FilterGraph manually (with GraphEdit or GraphStudioNext) and inspect these fields.
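To make that inspection concrete in code, here is a rough sketch that reads the negotiated dimensions off an already connected output pin; the pin pointer, error handling, and how you obtain it from your graph-building code are assumptions about your existing setup:

// Read the negotiated image dimensions from a connected pin.
// Link with strmiids.lib for the format GUIDs.
#include <dshow.h>

HRESULT GetNegotiatedSize(IPin *pin, LONG *width, LONG *height) {
    AM_MEDIA_TYPE mt = {};
    HRESULT hr = pin->ConnectionMediaType(&mt);   // fails if the pin is not connected
    if (FAILED(hr)) return hr;

    if (mt.formattype == FORMAT_VideoInfo &&
        mt.cbFormat >= sizeof(VIDEOINFOHEADER) && mt.pbFormat) {
        const VIDEOINFOHEADER *vih = reinterpret_cast<const VIDEOINFOHEADER *>(mt.pbFormat);
        *width  = vih->bmiHeader.biWidth;
        *height = vih->bmiHeader.biHeight;        // negative for top-down DIBs
    } else if (mt.formattype == FORMAT_VideoInfo2 &&
               mt.cbFormat >= sizeof(VIDEOINFOHEADER2) && mt.pbFormat) {
        const VIDEOINFOHEADER2 *vih2 = reinterpret_cast<const VIDEOINFOHEADER2 *>(mt.pbFormat);
        *width  = vih2->bmiHeader.biWidth;
        *height = vih2->bmiHeader.biHeight;
    } else {
        hr = E_FAIL;                              // some other format block
    }

    // Equivalent of FreeMediaType() from the DirectShow base classes.
    if (mt.pbFormat) CoTaskMemFree(mt.pbFormat);
    if (mt.pUnk) mt.pUnk->Release();
    return hr;
}

If the annotation suddenly appears at half size, comparing biWidth/biHeight reported here against what the rest of your pipeline assumes should show where the mismatch is introduced.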

Resize JPEG and save new file to JPEG on Mac OS X using Cocoa

I am a bit confused about what the best approach is to resize a JPEG file on disk and save the resized JPEG as a new file to disk (on Mac OS X with Cocoa). There are a number of threads about resizing, but I am wondering what approach to use. Do I need to use Core Graphics for this or is this framework "too much" for a simple operation as a resize? Any pointers are welcome as I am a bit lost.
Core Graphics isn't “too much”; it's the right way to do it.
There is a Cocoa solution:
1. Create an image of the desired size (the destination image).
2. Lock focus on it.
3. Draw the source image into it.
4. Unlock focus on it.
5. Export it to the desired file format.
6. Write that data somewhere.
But that destroys metadata.
The Core Graphics solution is not a whole lot different:
1. Use an image source to load the image and its metadata.
2. Create a bitmap context of the desired size with the source image's color space. (The hard part here is making sure that the destination context matches the source image as closely as possible while still being in one of the supported pixel formats.)
3. Draw the source image into it.
4. Capture the contents of the context.
5. Use an image destination to write the image and metadata to a file.
And the Core Graphics solution ensures that as little information as possible is lost along the way. (You may want to adjust the DPI metadata, if present.)
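A rough sketch of those five steps using the ImageIO/CoreGraphics C APIs follows; the fixed 8-bit RGBA context, the JPEG output type, and passing the source properties dictionary straight to the destination are simplifying assumptions (a more careful version would match the source pixel format more closely and adjust the DPI metadata as noted above):

// Resize a JPEG on disk and write the result as a new JPEG, keeping metadata.
// Compile with: clang++ resize.cpp -framework ImageIO -framework CoreGraphics -framework CoreFoundation
#include <ImageIO/ImageIO.h>
#include <CoreGraphics/CoreGraphics.h>

bool ResizeJPEG(CFURLRef srcURL, CFURLRef dstURL, size_t dstWidth, size_t dstHeight) {
    // 1. Image source: load the image and its metadata.
    CGImageSourceRef source = CGImageSourceCreateWithURL(srcURL, NULL);
    if (!source) return false;
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFDictionaryRef metadata = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    CFRelease(source);
    if (!image) { if (metadata) CFRelease(metadata); return false; }

    // 2. Bitmap context of the destination size (8-bit RGBA here for simplicity).
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image);
    CGContextRef ctx = CGBitmapContextCreate(NULL, dstWidth, dstHeight, 8, 0,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    if (!ctx) { CGImageRelease(image); if (metadata) CFRelease(metadata); return false; }

    // 3. Draw the source image scaled into the context.
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextDrawImage(ctx, CGRectMake(0, 0, dstWidth, dstHeight), image);

    // 4. Capture the contents of the context.
    CGImageRef resized = CGBitmapContextCreateImage(ctx);

    // 5. Image destination: write the image plus the original metadata as a JPEG.
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL(dstURL, CFSTR("public.jpeg"), 1, NULL);
    bool ok = false;
    if (dest && resized) {
        CGImageDestinationAddImage(dest, resized, metadata);
        ok = CGImageDestinationFinalize(dest);
    }

    if (dest) CFRelease(dest);
    if (resized) CGImageRelease(resized);
    CGContextRelease(ctx);
    CGImageRelease(image);
    if (metadata) CFRelease(metadata);
    return ok;
}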
Install and use ImageMagick.

Extracting an image from H.264 sample data (Objective-C / Mac OS X)

Given a sample buffer of H.264, is there a way to extract the frame it represents as an image?
I'm using QTKit to capture video from a camera and using a QTCaptureMovieFileOutput as the output object.
I want something similar to the CVImageBufferRef that is passed as a parameter to the QTCaptureVideoPreviewOutput delegate method. For some reason, the file output doesn't contain the CVImageBufferRef.
What I do get is a QTSampleBuffer which, since I've set it in the compression options, contains an H.264 sample.
I have seen that on the iPhone, CoreMedia and AVFoundation can be used to create a CVImageBufferRef from a given CMSampleBufferRef (which, I imagine, is as close to the QTSampleBuffer as I'll be able to get), but this is the Mac, not the iPhone.
Neither CoreMedia nor AVFoundation is available on the Mac, and I can't see any way to accomplish the same task.
What I need is an image (whether it's a CVImageBufferRef, CIImage, or NSImage doesn't matter) from the current frame of the H.264 sample that is given to me by the output object's callback.
Extended info (from the comments below)
I have posted a related question that focuses on the original issue - attempting to simply play a stream of video samples using QTKit: Playing a stream of video data using QTKit on Mac OS X
It appears not to be possible, which is why I've moved on to trying to obtain frames as images and create the appearance of video by scaling, compressing, and converting the image data from CVImageBufferRef to NSImage and sending it to a peer over the network.
I can use the QTCaptureVideoPreviewOutput (or the decompressed equivalent) to get uncompressed frame images in the form of a CVImageBufferRef.
However, these image buffers need compressing, scaling, and converting into NSImages before they're of any use to me, hence the attempt to get an already scaled and compressed frame from the framework using QTCaptureMovieFileOutput (which allows a compression format and image size to be set before starting the capture), saving me from having to do the expensive compression, scaling, and conversion operations, which kill the CPU.
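In case it helps with the CVImageBufferRef half of that pipeline, here is a rough sketch of wrapping a frame in a CGImage using only the C-level CoreVideo/CoreGraphics calls; it assumes you have asked the preview/decompressed output for BGRA frames via its pixel buffer attributes, which may not match your actual configuration:

// Snapshot a BGRA CVImageBufferRef into a CGImage without going through NSImage.
#include <CoreVideo/CoreVideo.h>
#include <CoreGraphics/CoreGraphics.h>

CGImageRef CreateCGImageFromImageBuffer(CVImageBufferRef imageBuffer) {
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)imageBuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    void  *base     = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width    = CVPixelBufferGetWidth(pixelBuffer);
    size_t height   = CVPixelBufferGetHeight(pixelBuffer);
    size_t rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);

    // Treat the locked pixels as a 32-bit BGRA bitmap and snapshot them into a
    // CGImage (CGBitmapContextCreateImage copies the data, so the buffer can be
    // unlocked immediately afterwards).
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(base, width, height, 8, rowBytes,
        colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    CGImageRef image = ctx ? CGBitmapContextCreateImage(ctx) : NULL;

    if (ctx) CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return image;   // caller releases; wrap in an NSImage or CIImage as needed
}

The resulting CGImage can be handed to ImageIO for JPEG compression or wrapped in an NSImage, though this does not address getting an already compressed H.264 frame out of QTCaptureMovieFileOutput.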
Does the Creating a Single-Frame Grabbing Application section of the QTKit Application Programming Guide not work for you in this instance?
