Flipped PNG textures with Metal when compiled with the macOS 10.14 SDK - macos

I have a very strange problem, which I did not find mentioned anywhere. My company develops plugins for various hosts. Right now we are trying to move our OpenGL code to Metal. I tried it with some of the hosts (like Logic and Cubase), and it worked. Here is an example:
However, new versions of those apps recently became available, compiled with the macOS 10.14 SDK, and here is what I started to get:
So, we have two problems: color and flipped textures. I found a solution for the color (see the code below), but I have absolutely no idea how to solve the texture problem! I can, of course, flip the textures myself, but then they will appear corrupted in the previous app versions.
I believe that something has changed in PNG loading, since, if you look carefully, the text textures that are generated on the fly look the same in both cases.
Here is my code:
imageOptions = @{MTKTextureLoaderOptionSRGB : @NO}; // Solves the color problem
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];
where imageBuffer is the memory block holding the raw PNG data. I also tried this approach:
CGDataProviderRef imageData = CGDataProviderCreateWithData(nullptr, imageBuffer, imageBufferSize, nullptr);
CGImageRef loadedImage = CGImageCreateWithPNGDataProvider(imageData, nullptr, false, kCGRenderingIntentDefault);
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithCGImage:loadedImage options:nil error:&error];
And got EXACTLY the same result.
The issue happens with all applications built with the latest (10.14) SDK, running on macOS 10.14. Does anyone have a clue what causes it, or at least a way to detect at runtime which SDK the host was compiled with?

MTKTextureLoaderOptionOrigin is a key used to specify when to flip the pixel coordinates of the texture.
If you omit this option, the texture loader doesn’t flip loaded textures.
This option cannot be used with block-compressed texture formats, and can be used only with 2D, 2D array, and cube map textures. Each mipmap level and slice of a texture are flipped.
imageOptions = @{MTKTextureLoaderOptionSRGB : @NO,
                 MTKTextureLoaderOptionOrigin : MTKTextureLoaderOriginFlippedVertically}; // Solves both problems; the origin value must be one of the MTKTextureLoaderOrigin string constants
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];
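Since the behavior differs depending on which SDK the host was linked against, one possible workaround (a sketch under assumptions, not part of the original answer) is to read the SDK version recorded in the host executable's Mach-O load commands at runtime and only apply the flip for 10.14-and-later hosts. The helper name is hypothetical; it uses only public dyld/Mach-O APIs and assumes a 64-bit host and a recent <mach-o/loader.h>:
#include <mach-o/dyld.h>
#include <mach-o/loader.h>
// Hypothetical helper: returns the SDK version the host executable (dyld image 0,
// i.e. the app our plugin is loaded into) was linked against, encoded as
// (major << 16) | (minor << 8) | patch, or 0 if it cannot be determined.
static uint32_t HostLinkedSDKVersion(void)
{
    const struct mach_header_64 *header =
        (const struct mach_header_64 *)_dyld_get_image_header(0);
    if (header == NULL || header->magic != MH_MAGIC_64)
        return 0;
    const uint8_t *cursor = (const uint8_t *)(header + 1); // load commands follow the header
    for (uint32_t i = 0; i < header->ncmds; ++i) {
        const struct load_command *lc = (const struct load_command *)cursor;
        if (lc->cmd == LC_BUILD_VERSION)
            return ((const struct build_version_command *)lc)->sdk;   // e.g. 0x000A0E00 == 10.14
        if (lc->cmd == LC_VERSION_MIN_MACOSX)
            return ((const struct version_min_command *)lc)->sdk;
        cursor += lc->cmdsize;
    }
    return 0;
}
// Usage sketch: only request the flipped origin when the host links against 10.14 or later.
BOOL hostBuiltWith1014OrLater = HostLinkedSDKVersion() >= 0x000A0E00;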

Related

Trouble Getting Depth Testing To Work With Apple's Metal Graphics API

I'm spending some time in the evenings trying to learn Apple's Metal graphics API. I've run into a frustrating problem and so must be missing something pretty fundamental: I can only get rendered objects to appear on screen when depth testing is disabled, or when the depth function is changed to "Greater". What could possibly be going wrong? Also, what kinds of things can I check in order to debug this problem?
Here's what I'm doing:
1) I'm using SDL to create my window. When setting up Metal, I manually create a CAMetalLayer and insert it into the layer hierarchy. To be clear, I am not using MTKView and I don't want to use MTKView. Staying away from Objective-C and Cocoa as much as possible seems to be the best strategy for writing this application to be cross-platform. The intention is to write in platform-agnostic C++ code with SDL and a rendering engine which can be swapped at run-time. Behind this interface is where all Apple-specific code will live. However, I strongly suspect that part of what's going wrong is something to do with setting up the layer:
SDL_SysWMinfo windowManagerInfo;
SDL_VERSION(&windowManagerInfo.version);
SDL_GetWindowWMInfo(&window, &windowManagerInfo);
// Create a metal layer and add it to the view that SDL created.
NSView *sdlView = windowManagerInfo.info.cocoa.window.contentView;
sdlView.wantsLayer = YES;
CALayer *sdlLayer = sdlView.layer;
CGFloat contentsScale = sdlLayer.contentsScale;
NSSize layerSize = sdlLayer.frame.size;
_metalLayer = [[CAMetalLayer layer] retain];
_metalLayer.contentsScale = contentsScale;
_metalLayer.drawableSize = NSMakeSize(layerSize.width * contentsScale,
                                      layerSize.height * contentsScale);
_metalLayer.device = device;
_metalLayer.pixelFormat = MTLPixelFormatBGRA8Unorm;
_metalLayer.frame = sdlLayer.frame;
_metalLayer.framebufferOnly = true;
[sdlLayer addSublayer:_metalLayer];
2) I create a depth texture to use as a depth buffer. My understanding is that this step is necessary in Metal. Though, in OpenGL, the framework creates a depth buffer for me quite automatically:
CGSize drawableSize = _metalLayer.drawableSize;
MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float_Stencil8
                                                        width:drawableSize.width
                                                       height:drawableSize.height
                                                    mipmapped:NO];
descriptor.storageMode = MTLStorageModePrivate;
descriptor.usage = MTLTextureUsageRenderTarget;
_depthTexture = [_metalLayer.device newTextureWithDescriptor:descriptor];
_depthTexture.label = @"DepthStencil";
3) I create a depth-stencil state object which will be set at render time:
MTLDepthStencilDescriptor *depthDescriptor = [[MTLDepthStencilDescriptor alloc] init];
depthDescriptor.depthWriteEnabled = YES;
depthDescriptor.depthCompareFunction = MTLCompareFunctionLess;
_depthState = [device newDepthStencilStateWithDescriptor:depthDescriptor];
4) When creating my render pass object, I explicitly attach the depth texture:
_metalRenderPassDesc = [[MTLRenderPassDescriptor renderPassDescriptor] retain];
MTLRenderPassColorAttachmentDescriptor *colorAttachment = _metalRenderPassDesc.colorAttachments[0];
colorAttachment.texture = _drawable.texture;
colorAttachment.clearColor = MTLClearColorMake(0.2, 0.4, 0.5, 1.0);
colorAttachment.storeAction = MTLStoreActionStore;
colorAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassDepthAttachmentDescriptor *depthAttachment = _metalRenderPassDesc.depthAttachment;
depthAttachment.texture = depthTexture;
depthAttachment.clearDepth = 1.0;
depthAttachment.storeAction = MTLStoreActionDontCare;
depthAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassStencilAttachmentDescriptor *stencilAttachment = _metalRenderPassDesc.stencilAttachment;
stencilAttachment.texture = depthAttachment.texture;
stencilAttachment.storeAction = MTLStoreActionDontCare;
stencilAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
5) Finally, at render time, I set the depth-stencil object before drawing my object:
[_encoder setDepthStencilState:_depthState];
Note that if I go into step 3 and change depthCompareFunction to MTLCompareFunctionAlways or MTLCompareFunctionGreater then I see polygons on the screen, but ordering is (expectedly) incorrect. If I leave depthCompareFunction set to MTLCompareFunctionLess then I see nothing but the background color. It acts AS IF all fragments fail the depth test at all times.
The Metal API validator reports no errors and has no warnings...
I've tried a variety of combinations of settings for things like the depth-stencil texture format and have not made any forward progress. Honestly, I'm not sure what to try next.
EDIT: GPU Frame Capture in Xcode displays a green outline of my polygons, but none of those fragments are actually drawn.
EDIT 2: I've learned that the Metal API validator has an "Extended" mode. When this is enabled, I get these two warnings:
warning: Texture Usage Should not be Flagged as MTLTextureUsageRenderTarget: This texture is not a render target. Clear the MTLTextureUsageRenderTarget bit flag in the texture usage options. Texture = DepthStencil. Texture is used in the Depth attachment.
warning: Resource Storage Mode Should be MTLStorageModePrivate and it Should be Initialized with a Blit: This resource is rarely accessed by the CPU. Changing the storage mode to MTLStorageModePrivate and initializing it with a blit from a shared buffer may improve performance. Texture = 0x102095000.
When I heed these two warnings, I get these two errors. (The warnings and errors seem to contradict one another.)
error 'MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
failed assertion `MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
EDIT 3: When I run a sample Metal app and use the GPU frame capture tool then I see a gray scale representation of the depth buffer and the rendered object is clearly visible. This doesn't happen for my app. There, the GPU frame capture tool always shows my depth buffer as a plain white image.
Okay, I figured this out. I'm going to post the answer here to help the next guy. There was no problem writing to the depth buffer. This explains why spending time mucking with depth texture and depth-stencil-state settings was getting me nowhere.
The problem is a difference in the Normalized Device Coordinate spaces used by Metal and OpenGL. In Metal, NDC span [-1,+1]x[-1,+1]x[0,1]. In OpenGL, NDC span [-1,+1]x[-1,+1]x[-1,+1]. If I simply take the projection matrix produced by glm::perspective and push it through Metal, the results will not be as expected. To compensate for the different depth ranges when rendering with Metal, that projection matrix must be left-multiplied by a matrix that remaps clip-space z from [-1,+1] to [0,1], i.e. one that scales z by 0.5 and adds 0.5*w (a pure diagonal scale of 0.5 is not enough, since it would only map z into [-0.5,+0.5]).
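For reference, here is a minimal sketch of that adjustment using glm (which the question already uses); the wrapper function name is mine, not from the original answer:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Builds a Metal-ready projection matrix from GL-style parameters by remapping
// clip-space z from [-1, 1] to [0, 1]: z' = 0.5 * z + 0.5 * w.
static glm::mat4 MetalPerspective(float fovYRadians, float aspect, float zNear, float zFar)
{
    glm::mat4 glProjection = glm::perspective(fovYRadians, aspect, zNear, zFar);
    glm::mat4 glToMetal(1.0f);   // identity
    glToMetal[2][2] = 0.5f;      // column 2, row 2: scale z by 0.5
    glToMetal[3][2] = 0.5f;      // column 3, row 2: add 0.5 * w
    return glToMetal * glProjection;  // left-multiply the GL projection
}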
I found these links to be helpful:
1. http://blog.athenstean.com/post/135771439196/from-opengl-to-metal-the-projection-matrix
2. http://www.songho.ca/opengl/gl_projectionmatrix.html
EDIT: Replaced the explanation with a more complete and accurate one. Replaced the solution with a better solution.

SDL OpenGL SDL_image Mac: Display Output (shifted colors)

We load textures through SDL_image, then we upload them to OpenGL with glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h,
             0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, texture->pixels);
On my Windows machine that runs fine, but on my friend's Mac the colors appear shifted around: he gets a strongly blueish texture on his display. That presumably has to do with the pixel layout (GL_BGRA_EXT here is the external format, not the internalFormat). We tried every combination we found (all compiling and running correctly), but none gives correct output on the Mac. Any ideas how to work out how the Mac lays out the pixel array provided by SDL_image?
I haven't started using SDL with OpenGL yet, but here are some potential keywords that might be relevant.
RGBA/BGRA?
Colors are off in SDL program
http://www.opengl.org/wiki/Common_Mistakes
#if for TARGET_CPU_PPC and using a consistent value like GL_RGBA for everything.
Hope this can get you started.
Okay, thanks to your links, some investigation, and a few lost brain cells, we found that we can detect the byte order of the pixel data through the masks defined in the SDL surface (surface->format->Rmask) and use that to decide between GL_UNSIGNED_INT_8_8_8_8 and GL_UNSIGNED_INT_8_8_8_8_REV.
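For anyone hitting the same thing, here is a rough sketch of that check (a continuation of the question's snippet, so it assumes the same texture surface from SDL_image and a 32-bit pixel format; the exact format pairing may need tweaking for your data):
// Both Rmask and the packed GL_UNSIGNED_INT_8_8_8_8* types describe the native
// 32-bit pixel value, so the mapping below is endian-independent.
GLenum pixelType;
if (texture->format->Rmask == 0x000000FF)
    pixelType = GL_UNSIGNED_INT_8_8_8_8_REV;   // red in the least significant byte
else
    pixelType = GL_UNSIGNED_INT_8_8_8_8;       // red in the most significant byte
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h,
             0, GL_RGBA, pixelType, texture->pixels);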

Core Video pixel buffers as GL_TEXTURE_2D

So I've set up CVPixelBuffers and tied them to OpenGL FBOs successfully on iOS. But now, trying to do the same on OS X has me snagged.
The textures from CVOpenGLTextureCacheCreateTextureFromImage return as GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D targets.
I've found the kCVOpenGLBufferTarget key, but it seems like it is supposed to be used with CVOpenGLBufferCreate not CVPixelBufferCreate.
Is it even possible to get GL_TEXTURE_2D targeted textures on OSX with CVPixelBufferCreate, and if so how?
FWIW a listing of the CV PBO setup:
NSDictionary *bufferAttributes = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                    (__bridge NSString *)kCVPixelBufferWidthKey : @(size.width),
                                    (__bridge NSString *)kCVPixelBufferHeightKey : @(size.height),
                                    (__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{ } };
if (pool)
{
    error = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &renderTarget);
}
else
{
    error = CVPixelBufferCreate(kCFAllocatorDefault, (NSUInteger)size.width, (NSUInteger)size.height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)bufferAttributes, &renderTarget);
}
ZAssert(!error, @"Couldn't create pixel buffer");
error = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL, [[NSOpenGLContext currentContext] CGLContextObj], [[[NSOpenGLContext currentContext] pixelFormat] CGLPixelFormatObj], NULL, &textureCache);
ZAssert(!error, @"Could not create texture cache.");
error = CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL, &renderTexture);
ZAssert(!error, @"Couldn't create a texture from cache.");
GLuint reference = CVOpenGLTextureGetName(renderTexture);
GLenum target = CVOpenGLTextureGetTarget(renderTexture);
UPDATE: I've been able to successfully use the resulting GL_TEXTURE_RECTANGLE textures. However, this will cause a lot of problems with the shaders for compatibility between iOS and OSX. And anyway I'd rather continue to use normalised texture coordinates.
If it isn't possible to get GL_TEXTURE_2D textures directly from a CVPixelBuffer in this manner, would it be possible to create a CVOpenGLBuffer and have a CVPixelBuffer attached to it to pull the pixel data?
Just came across this, and I'm going to answer it even though it's old, in case others encounter it.
iOS uses OpenGL ES (originally 1.1, then 2.0 and 3.0). OS X uses regular old (non-ES) OpenGL, with a choice of a Core Profile context (3.2 and later) or a legacy/compatibility context (up to 2.1).
The difference here is that OpenGL (non-ES) was designed a long time ago, when there were many restrictions on texture sizes. As cards lifted those restrictions, extensions were added, including GL_TEXTURE_RECTANGLE. Now it's no big deal for any GPU to support any size texture, but for API compatibility reasons they can't really fix OpenGL. Since OpenGL ES is technically a parallel, but separate, API, which was designed much more recently, they were able to correct the problem from the beginning (i.e. they never had to worry about breaking old stuff). So for OpenGL ES they never defined a GL_TEXTURE_RECTANGLE, they just defined that GL_TEXTURE_2D has no size restrictions.
Short answer - OS X uses Desktop OpenGL, which for legacy compatibility reasons still treats rectangle textures separately, while iOS uses OpenGL ES, which places no size restrictions on GL_TEXTURE_2D, and so never offered a GL_TEXTURE_RECTANGLE at all. Thus, on OS X, CoreVideo produces GL_TEXTURE_RECTANGLE objects, because GL_TEXTURE_2D would waste a lot of memory, while on iOS, it produces GL_TEXTURE_2D objects because GL_TEXTURE_RECTANGLE doesn't exist, nor is it necessary.
It's an unfortunate incompatibility between OpenGL and OpenGL ES, but it is what it is and there's nothing to be done but code around it. Or, now, you can (and probably should consider) moving on to Metal.
As this appears to have been left dangling and is something I recently dealt with: no, GL_TEXTURE_RECTANGLE appears to be the only use case. To get to a GL_TEXTURE_2D you're going to have to render to texture.
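A hedged sketch of the shader side of that render-to-texture pass (names are illustrative): the only real difference from a normal blit is that a rectangle sampler is addressed in pixels rather than normalized coordinates, so the incoming [0,1] coordinate has to be scaled by the source size before sampling into the GL_TEXTURE_2D-backed FBO.
// Fragment shader source, embedded as a C string, for copying a GL_TEXTURE_RECTANGLE
// into a GL_TEXTURE_2D attached to a framebuffer object.
static const char *kRectToTex2DFragmentSource =
    "#version 120\n"
    "#extension GL_ARB_texture_rectangle : enable\n"
    "uniform sampler2DRect uSourceRect;\n"
    "uniform vec2 uSourceSize;   // rectangle texture size in pixels\n"
    "varying vec2 vTexCoord;     // normalized [0,1] coords from the vertex shader\n"
    "void main() {\n"
    "    gl_FragColor = texture2DRect(uSourceRect, vTexCoord * uSourceSize);\n"
    "}\n";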
FWIW, as of 2023, the modern CGLTexImageIOSurface2D is much faster than CVOpenGLTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture. Just ensure your CVPixelBuffers are IOSurface-backed by including (id)kCVPixelBufferIOSurfacePropertiesKey : @{} in the attributes passed to [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:].
You will still be getting a GL_TEXTURE_RECTANGLE, but you'll be getting it way faster. I made a little shader so that I can render the GL_TEXTURE_RECTANGLE into a GL_TEXTURE_2D bound to a frame buffer.
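A hedged sketch of that CGL path, assuming an IOSurface-backed 32BGRA pixel buffer named pixelBuffer and a current legacy-profile CGL context (error handling kept minimal):
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>
#import <OpenGL/CGLIOSurface.h>
#import <CoreVideo/CoreVideo.h>
// Bind the pixel buffer's IOSurface directly to a rectangle texture; no copy is made.
IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer); // non-NULL only for IOSurface-backed buffers
GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);
GLuint rectTexture = 0;
glGenTextures(1, &rectTexture);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, rectTexture);
CGLError err = CGLTexImageIOSurface2D(CGLGetCurrentContext(),
                                      GL_TEXTURE_RECTANGLE_ARB,
                                      GL_RGBA,                     // internal format
                                      width, height,
                                      GL_BGRA,                     // source format (32BGRA)
                                      GL_UNSIGNED_INT_8_8_8_8_REV, // source type
                                      surface,
                                      0);                          // plane index
if (err != kCGLNoError) {
    // handle the error
}
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);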

Displaying full-sized camera raw files in OSX

This has been driving me mad for months: I have a little app to preview camera raw images. As the files in question can be quite big and stored on a slow network drive I wanted to offer the user a chance to stop the loading of the image.
Handily I found this thread:
Cancel NSData initWithContentsOfURL in NSOperation
and am using Nick's great convenience method to cache the data and be able to issue a cancel request halfway through.
Anyway, once I have the data I use:
NSImage *sourceImage = [[NSImage alloc]initWithData:data];
The problem comes when looking at Nikon .NEF files; sourceImage returns only a thumbnail and not the full size. Displaying Canon .CR2 files and, in fact, any other .TIFFs and .JPEGs seems fine, and sourceImage is the expected size. I've checked the amount of data being loaded (with NSLog and [data length]) and it does seem that all of the Nikon file's 12 MB is there for -initWithData:.
If I use
NSImage *sourceImage = [[NSImage alloc]initWithContentsOfURL:myNEFURL];
then I get the full sized image of the Nikon files but of course the app blocks.
So after poking around for what is beginning to feel like my entire life, I think I know that the problem is related to the Nikon files' metadata stating that the file's DPI is 300, whereas Canon et al. use 72.
I hoped a solution would be to lazily access the file with:
NSImage*tempImg = [[NSImage alloc] initByReferencingURL:myNEFURL];
and having seen similar postings here and elsewhere I found a common possible answer of simply
[sourceImage setSize:tempImg.size];
but of course this just resizes the tiny thumbnail up to 3000x2000 or thereabouts.
I've been messing with the following hoping that they would provide a way to get the big picture from the .NEF:
CGImageSourceRef isr = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
CGImageRef isrRef = CGImageSourceCreateImageAtIndex(isr, 0, NULL);
and
NSBitmapImageRep *bitMapIR = [[NSBitmapImageRep alloc] initWithData:data];
But checking the sizes on these shows similar thumbnail widths and heights. In fact, isrRef returns an even smaller thumbnail, one that is 4.2 times smaller. Perhaps worth noting that 300 / 72 == 4.2, so isrRef is taking the DPI into account on an image whose DPI (possibly) has already been accounted for.
Please! Can someone [nicely] put me out of my misery and help me get the full-sized image from the loaded data?!?! Currently, I'm special-casing the NEF files with a case-insensitive search on the file extension and then loading the URL with the blocking method. I have to take the hit of the app blocking, and the extension check can't be fool-proof in the long run.
As an aside: is this actually a bug in the OS? It does seem like NSImage's -initWithData: and -initWithContentsOfURL: methods use different engines to actually render the image. Would it not be reasonable to assume that -initWithContentsOfURL: simply loads the data, which then gets rendered just as though it had been presented to the class with -initWithData:?
It's a bug - confirmed when I did a DTS. Apparently I need to file a bug report. Currently the only way is to use the NSURL methods. Instead of checking the file extension I should probably traverse the meta dictionaries and check the manufacturer's entry for "Nikon", though...
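A hedged sketch of that metadata check with ImageIO (the helper name is mine; whether the TIFF Make entry is present depends on the file):
#import <Foundation/Foundation.h>
#import <ImageIO/ImageIO.h>
// Hypothetical helper: decide whether the blocking NSURL code path is needed by
// looking at the TIFF "Make" entry in the image metadata instead of the file extension.
static BOOL PTImageDataIsFromNikon(NSData *data)
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (source == NULL)
        return NO;
    NSDictionary *properties =
        CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, 0, NULL));
    CFRelease(source);
    NSString *make = properties[(NSString *)kCGImagePropertyTIFFDictionary]
                               [(NSString *)kCGImagePropertyTIFFMake];
    if (![make isKindOfClass:[NSString class]])
        return NO;
    return [make rangeOfString:@"NIKON" options:NSCaseInsensitiveSearch].location != NSNotFound;
}
The blocking -initWithContentsOfURL: load for the Nikon case could then be dispatched to a background queue so the UI stays responsive.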

Trying to turn [NSImage imageNamed:NSImageNameUser] into NSData

If I create an NSImage via something like:
NSImage *icon = [NSImage imageNamed:NSImageNameUser];
it only has one representation, an NSCoreUIImageRep, which seems to be a private class.
I'd like to archive this image as an NSData but if I ask for the TIFFRepresentation I get a small icon when the real NSImage I originally created seemed to be vector and would scale up to fill my image views nicely.
I was kinda hoping images made this way would have a NSPDFImageRep I could use.
Any ideas how can I get an NSData (pref the vector version or at worse a large scale bitmap version) of this NSImage?
UPDATE
Spoke with some people on Twitter and they suggested that the real source of these images is multi-resolution .icns files (probably not vector at all). I couldn't find the location of these on disk, but it was interesting to hear nonetheless.
Additionally they suggested I create the system NSImage and manually render it into a high res NSImage of my own. I'm doing this now and it's working for my needs. My code:
+ (NSImage *)pt_businessDefaultIcon
{
// Draws NSImageNameUser into a rendered bitmap.
// We do this because trying to create an NSData from
// [NSImage imageNamed:NSImageNameUser] directly results in a 32x32 image.
NSImage *icon = [NSImage imageNamed:NSImageNameUser];
NSImage *renderedIcon = [[NSImage alloc] initWithSize:CGSizeMake(PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize)];
[renderedIcon lockFocus];
NSRect inRect = NSMakeRect(0, 0, PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize);
NSRect fromRect = NSMakeRect(0, 0, icon.size.width, icon.size.height);
[icon drawInRect:inRect fromRect:fromRect operation:NSCompositeCopy fraction:1.0];
[renderedIcon unlockFocus];
return renderedIcon;
}
(Tried to post this as my answer but I don't have enough reputation?)
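If the end goal is still an NSData for archiving, the bitmap rendered by the category method above can then be flattened in the usual way (a short sketch reusing that helper):
// The rendered image is a real raster at the requested size, so its TIFF data
// is no longer the small 32x32 icon.
NSData *archivableData = [[NSImage pt_businessDefaultIcon] TIFFRepresentation];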
You seem to be ignoring the documentation. Both of your major questions are answered there. The Cocoa Drawing Guide (companion guide linked from the NSImage API reference) has an Images section you really need to read thoroughly and refer to any time you have rep/caching/sizing/quality issues.
...if I ask for the TIFFRepresentation I get a small icon when the real NSImage I originally created seemed to be vector and would scale up to fill my image views nicely.
Relevant subsections of the Images section for this question are: How an Image Representation is Chosen, Images and Caching, and Image Size and Resolution. By default, the -cacheMode for a TIFF image "Behaves as if the NSImageCacheBySize setting were in effect." Also, for in-memory scaling/sizing operations, -imageInterpolation is important: "Table 6-4 lists the available interpolation settings." and "NSImageInterpolationHigh - Slower, higher-quality interpolation."
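For the in-memory scaling path specifically, those two settings look roughly like this in code (a sketch only; it reuses the deprecated-but-period-appropriate NSCompositeCopy constant used elsewhere in this thread, and the 512-point size is arbitrary):
NSImage *icon = [NSImage imageNamed:NSImageNameUser];
icon.cacheMode = NSImageCacheNever;   // don't let a small cached bitmap get locked in
NSImage *rendered = [[NSImage alloc] initWithSize:NSMakeSize(512, 512)];
[rendered lockFocus];
[NSGraphicsContext currentContext].imageInterpolation = NSImageInterpolationHigh;
[icon drawInRect:NSMakeRect(0, 0, 512, 512)
        fromRect:NSZeroRect
       operation:NSCompositeCopy
        fraction:1.0];
[rendered unlockFocus];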
I'm fairly certain this applies to a named system image as well as any other.
I was kinda hoping images made [ by loading an image from disk ] would have a NSPDFImageRep I could use.
Relevant subsection: Image Representations. "...with file-based images, most of the images you create need only a single image representation." and "You might create multiple representations in the following situations, however: For printing, you might want to create a PDF representation or high-resolution bitmap of your image."
You get the representation that suits the loaded image. You must create a PDF representation for a TIFF image, for example. To do so at high resolution, you'll need to refer back to the caching mode so you can get higher-res items.
There are a lot of fine details too numerous to list because of the high number of permutations of images/creation mechanisms/settings/ and what you want to do with it all. My post is meant to be a general guide toward finding the specific information you need for your situation.
For more detail, add specific details: the code you attempted to use, the type of image you're loading or creating -- you seemed to mention two different possibilities in your fourth paragraph -- and what went wrong.
I would guess that the image is "hard wired" into the graphics system somehow, and the NSImage representation of it is merely a number indicating which hard-wired graphic it is. So likely what you need to do is to draw it and then capture the drawing.
Very generally, create a view controller that will render the image, reference the VC's view property to cause the view to be rendered, extract the contentView of the VC, get the contentView.layer, render the layer into a UIGraphics context, get the UIImage from the context, extract whatever representation you want from the UIImage.
(There may be a simpler way, but this is the one I ended up using in one case.)
(And, sigh, I suppose this scheme doesn't preserve scaling either.)
