GLKView won't change pixel format - opengl-es

I'm stuck on an issue with GLKView. First, I create an EAGLContext and make it current:
EAGLContext* pOpenGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
if(!pOpenGLContext)
    return nil;
if(![EAGLContext setCurrentContext:pOpenGLContext])
    return nil;
This runs fine (I need version 3, so it suits me). Then I create a GLKView attached to the previously created context:
GLKView* pOpenGLView = [[GLKView alloc] initWithFrame:Frame context:pOpenGLContext];
That works too. But the following code doesn't change anything at all:
[pOpenGLView setDrawableColorFormat:GLKViewDrawableColorFormatRGBA8888];
[pOpenGLView setDrawableDepthFormat:GLKViewDrawableDepthFormat24];
[pOpenGLView setDrawableStencilFormat:GLKViewDrawableStencilFormatNone];
[pOpenGLView setDrawableMultisample:GLKViewDrawableMultisampleNone];
Then I do some final stuff:
pOpenGLView.delegate = self;
[pMainWindow addSubview:pOpenGLView];
...
However, after setting GLKViewDrawableStencilFormatNone, when I ask OpenGL for the depth and stencil bit counts, I get:
glGetIntegerv(GL_DEPTH_BITS, &OpenGLDepthBits); // = 32 (I need 24)
glGetIntegerv(GL_STENCIL_BITS, &OpenGLStencilBits); // = 8 (I need 0)
I need to turn the stencil buffer off and set a 24-bit depth buffer.
I have also tried the property syntax:
pOpenGLView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
pOpenGLView.drawableDepthFormat = GLKViewDrawableDepthFormat24;
pOpenGLView.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
How can I get the formats I requested? What is wrong here? Thank you.
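A hedged guess at what's happening: GLKView creates its drawable framebuffer lazily, the first time the view is actually drawn, so glGetIntegerv calls made right after configuring the view report the bits of whatever framebuffer happens to be bound at that moment, not the view's. A minimal sketch of where the queries should reflect the requested formats, assuming the delegate wiring above:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    // By the time this callback runs, the view has created and bound its
    // own drawable, so these queries describe the view's framebuffer.
    GLint depthBits = 0, stencilBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);     // expect 24
    glGetIntegerv(GL_STENCIL_BITS, &stencilBits); // expect 0
    // ... drawing ...
}
Alternatively, calling [pOpenGLView bindDrawable] forces the view to create and bind its framebuffer, which should make the same queries meaningful outside the draw callback.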

Related

Save NSColor components to file

I am trying to save an instance of NSColor to a file like this:
writeF(node.lineColour.hueComponent)
writeF(node.lineColour.saturationComponent)
writeF(node.lineColour.brightnessComponent)
writeF(node.lineColour.alphaComponent)
where the write function is:
func writeF(var val: CGFloat) -> Bool {
    let nsd = NSData(bytes: &val, length: sizeof(CGFloat))
    let rv = oStream!.write(UnsafePointer<UInt8>(nsd.bytes), maxLength: sizeof(CGFloat))
    return rv > 0
}
And "node.lineColour" is just NSColor.blueColor(). It all compiles OK, but gives a run-time message at the first "writeF" line:
2015-10-01 07:57:43.871 canl[77917:8371660] An uncaught exception was raised
2015-10-01 07:57:43.871 canl[77917:8371660] *** -hueComponent not valid for the NSColor NSCalibratedWhiteColorSpace 0 1; need to first convert colorspace.
Apple's documentation on color spaces is very esoteric (if you already understand it, it's a fine reference; if not... good luck). Why is the above code wrong? I should at least be able to retrieve the color components (CGFloat).
After swimming through the available documentation and trying different things, I found this to work:
let aColor = node.lineColour.colorUsingColorSpaceName(NSCalibratedRGBColorSpace)
if let culoare = aColor {
    writeF(culoare.redComponent)
    writeF(culoare.greenComponent)
    writeF(culoare.blueComponent)
    writeF(culoare.alphaComponent)
}
It also works for getting the hue, saturation, and brightness components, but I think I will go with RGB.
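For completeness, a hedged sketch of the matching read side in Objective-C; iStream is a hypothetical NSInputStream opened on the same file (not from the question), and the sketch assumes the same CGFloat layout writeF used:
// iStream is hypothetical; read back R, G, B, A in the order they were written.
CGFloat comps[4] = {0, 0, 0, 0};
for (int i = 0; i < 4; i++) {
    [iStream read:(uint8_t *)&comps[i] maxLength:sizeof(CGFloat)];
}
NSColor *restored = [NSColor colorWithCalibratedRed:comps[0]
                                              green:comps[1]
                                               blue:comps[2]
                                              alpha:comps[3]];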

CIColorClamp not working correctly in OS X El Capitan

I am using Swift to do some video processing. After upgrading to El Capitan (and Swift 2) my code broke. I traced the error to the CIColorClamp filter. This filter is supposed to clamp the pixel values, but in fact it messes up the image extent.
let _c:CGFloat = 0.05
let minComp = CIVector(x:_c, y:_c, z:_c, w: 1)
let maxComp = CIVector(x:1, y:1, z:1, w: 1)
let clamp: CIFilter = CIFilter(name: "CIColorClamp")!
print("clamp-in \(image.extent)")
clamp.setDefaults()
clamp.setValue(image, forKey: kCIInputImageKey)
clamp.setValue(minComp, forKey: "inputMinComponents")
clamp.setValue(maxComp, forKey: "inputMaxComponents")
print("clamp-out \(clamp.outputImage!.extent)")
The code above produces the output:
> clamp-in (6.0, 6.0, 1268.0, 708.0)
CoreAnimation: Warning! CAImageQueueSetOwner() is deprecated and does nothing. Please stop calling this method.
> clamp-out (-8.98846567431158e+307, -8.98846567431158e+307, 1.79769313486232e+308, 1.79769313486232e+308)
The fact that this call produces an internal warning does not instill confidence either!
Can anyone confirm this behavior? What am I doing wrong?
I also ran into this problem. The extent was always set like this:
-8.98846567431158e+307, -8.98846567431158e+307, 1.79769313486232e+308, 1.79769313486232e+308
but then I tried calling filter.debugDescription and noticed that the extent of the source image was still given properly.
Here's my workaround. Because I use different filters, I check whether the filter's name is "CIColorClamp" and, if so, set the extent used to create the CGImageRef to the extent of the original image.
var extent = filteredImage.extent
if filter.name == "CIColorClamp" {
    extent = sourceImage.extent
}
let cgImage: CGImageRef = context.createCGImage(filteredImage, fromRect: extent)
UIImageJPEGRepresentation(UIImage(CGImage: cgImage), 1.0)?.writeToFile(...)
Before that fix I always got a crash because the UIImageJPEGRepresentation could not be created, due to the wrong extent values.
So it looks like the extent is not transferred to the filtered image.
I had exactly the same problem. I fixed it simply by cropping the returned image to the original image's rect (Objective-C code):
if ([filter.name isEqualToString:@"CIColorClamp"]) {
    image = [image imageByCroppingToRect:sourceImage.extent];
}
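For reference, a minimal end-to-end sketch of the same workaround in Objective-C (hedged: inputImage and context stand in for whatever CIImage and CIContext you already have):
CIFilter *clamp = [CIFilter filterWithName:@"CIColorClamp"];
[clamp setDefaults];
[clamp setValue:inputImage forKey:kCIInputImageKey];
[clamp setValue:[CIVector vectorWithX:0.05 Y:0.05 Z:0.05 W:1.0] forKey:@"inputMinComponents"];
[clamp setValue:[CIVector vectorWithX:1.0 Y:1.0 Z:1.0 W:1.0] forKey:@"inputMaxComponents"];
// The output reports an effectively infinite extent, so crop it back.
CIImage *output = [clamp.outputImage imageByCroppingToRect:inputImage.extent];
CGImageRef cgImage = [context createCGImage:output fromRect:output.extent];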

Screen Capture on OSX using MonoMac?

Can somebody help me with the following code snippet to capture part of or the whole desktop on OSX? I would like to specify the upper-left corner coordinates (x,y) and the width (w) and height (h) of the rectangle that defines the capture.
It's for a C# MonoMac application on OSX.
This is what I've done:
int windowNumber = 2;
System.Drawing.RectangleF bounds = new RectangleF(0,146,320,157);
CGImage screenImage = MonoMac.CoreGraphics.CGImage.ScreenImage(windowNumber,bounds);
MonoMac.Foundation.NSData bitmapData = screenImage.DataProvider.CopyData();
It looks like I have the bitmap data in 'bitmapData', but I'm not sure how to convert the NSData instance 'bitmapData' to an actual Bitmap, i.e.:
Bitmap screenCapture = ????
The documentation is really sparse and I've googled for examples without luck, so I'm hoping there's a kind MonoMac expert out there who can point me in the right direction. An example would be nice :o)
Thank you in advance!
This will give you the bytes of your capture in a .NET byte[], from which you can create a Bitmap or Image or whatever you want. It might not be exactly what you are looking for, but it should put you in the right direction.
int windowNumber = 2;
System.Drawing.RectangleF bounds = new RectangleF(0, 146, 320, 157);
CGImage screenImage = MonoMac.CoreGraphics.CGImage.ScreenImage(windowNumber, bounds);
using (NSBitmapImageRep imageRep = new NSBitmapImageRep(screenImage))
{
    // Compression factor 1.0 = best quality (only relevant for lossy file types).
    NSDictionary properties = NSDictionary.FromObjectAndKey(new NSNumber(1.0), new NSString("NSImageCompressionFactor"));
    // Encode the capture as PNG data.
    using (NSData pngData = imageRep.RepresentationUsingTypeProperties(NSBitmapImageFileType.Png, properties))
    {
        byte[] imageBytes;
        using (var ms = new MemoryStream())
        {
            pngData.AsStream().CopyTo(ms);
            imageBytes = ms.ToArray();
        }
        // imageBytes now holds the PNG, e.g. for new Bitmap(new MemoryStream(imageBytes)).
    }
}

Why does CMSampleBufferGetImageBuffer return NULL

I have built some code to process video files on OSX, frame by frame. The following is an extract from the code, which builds OK, opens the file, locates the video track (the only track) and starts reading CMSampleBuffers without problem. However, for each CMSampleBufferRef I obtain, I get NULL back when I try to extract the pixel buffer frame. There's no indication in the iOS documentation as to why I could expect a NULL return value or how I could fix the issue. It happens with all the videos I've tested, regardless of capture source or codec.
Any help greatly appreciated.
NSString *assetInPath = @"/Users/Dave/Movies/movie.mp4";
NSURL *assetInUrl = [NSURL fileURLWithPath:assetInPath];
AVAsset *assetIn = [AVAsset assetWithURL:assetInUrl];
NSError *error;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:assetIn error:&error];
AVAssetTrack *track = [assetIn.tracks objectAtIndex:0];
AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderTrackOutput alloc]
initWithTrack:track
outputSettings:nil];
[assetReader addOutput:assetReaderOutput];
// Start reading
[assetReader startReading];
CMSampleBufferRef sampleBuffer;
do {
sampleBuffer = [assetReaderOutput copyNextSampleBuffer];
/**
** At this point, sampleBuffer is non-null, has all appropriate attributes to indicate that
** it's a video frame, 320x240 or whatever and looks perfectly fine. But the next
** line always returns NULL without logging any obvious error message
**/
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if( pixelBuffer != NULL ) {
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
...
other processing removed here for clarity
}
} while( ... );
To be clear, I've stripped all the error checking code, but no problems were indicated there, i.e. the AVAssetReader is reading, the CMSampleBufferRef looks fine, etc.
You haven't specified any outputSettings when creating your AVAssetReaderTrackOutput. I ran into your issue when specifying nil in order to receive the video track's original pixel format when calling copyNextSampleBuffer. In my app I wanted to ensure no conversion was happening when calling copyNextSampleBuffer, for the sake of performance; if that isn't a big concern for you, specify a pixel format in the output settings.
The following are Apple's recommended pixel formats, based on the hardware capabilities:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
Because you haven't supplied any outputSettings, you're forced to use the raw data contained within the frame.
You have to get the block buffer from the sample buffer using CMSampleBufferGetDataBuffer(sampleBuffer). After you have that, you need to get the actual location of the data using:
// The block buffer comes straight from the sample buffer; the caller does not own it.
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t blockBufferLength;
char *blockBufferPointer;
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &blockBufferLength, &blockBufferPointer);
Look at *blockBufferPointer and decode the bytes using the frame header information for your required codec.
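Alternatively, if decoded pixel buffers are what you want, here is a minimal sketch of the outputSettings route mentioned above (hedged; this is not the original poster's code):
NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
};
AVAssetReaderTrackOutput *assetReaderOutput =
    [[AVAssetReaderTrackOutput alloc] initWithTrack:track
                                     outputSettings:outputSettings];
// With a pixel format requested, CMSampleBufferGetImageBuffer() should return
// a CVPixelBuffer for each decoded frame instead of NULL.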
FWIW: Here is what official docs say for the return value of CMSampleBufferGetImageBuffer:
"Result is a CVImageBuffer of media data. The result will be NULL if the CMSampleBuffer does not contain a CVImageBuffer, or if the CMSampleBuffer contains a CMBlockBuffer, or if there is some other error."
Also note that the caller does not own the returned dataBuffer from CMSampleBufferGetImageBuffer, and must retain it explicitly if the caller needs to maintain a reference to it.
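For example, a small hedged sketch of that ownership rule:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (imageBuffer != NULL) {
    CVBufferRetain(imageBuffer);   // not owned by the caller; retain to keep it
    // ... use the buffer beyond the sample buffer's lifetime ...
    CVBufferRelease(imageBuffer);
}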
Hopefully this info helps.

How to use a Texture with a UIImage as a label in WhirlyGlobe

I added WhirlyGlobe to my project and it works well. Now I need to add an image as a marker, using the code below, but it doesn't seem to work.
The label shows a white area on top of the first character.
And the console logs 'Texture::createInGL() glGenTextures()'.
What can I do to solve it?
Texture *theTex = new Texture(@"icon", @"png");
theTex->setUsesMipmaps(true);
SimpleIdentity theTexId = theTex->getId();
theScene->addChangeRequest(new AddTextureReq(theTex));
SingleLabel *gzLabel = [[[SingleLabel alloc] init] autorelease];
gzLabel.text = @"XXXXXX";
gzLabel.iconTexture = theTexId;
[gzLabel setLoc:GeoCoord::CoordFromDegrees(113.2759952545166, 23.117055306224895)];
[labels addObject:gzLabel];
Is the texture a power of two along each side?
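A quick way to test that hypothesis (a hedged sketch; icon.png as in the question):
// GLES textures with mipmaps enabled generally require power-of-two sides.
static BOOL IsPowerOfTwo(size_t n) { return n > 0 && (n & (n - 1)) == 0; }

UIImage *icon = [UIImage imageNamed:@"icon.png"];
size_t w = CGImageGetWidth(icon.CGImage);
size_t h = CGImageGetHeight(icon.CGImage);
NSLog(@"icon is %zu x %zu, power of two: %d / %d",
      w, h, IsPowerOfTwo(w), IsPowerOfTwo(h));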
