Core Video pixel buffers as GL_TEXTURE_2D on macOS

I've set up CVPixelBuffers and tied them to OpenGL FBOs successfully on iOS, but trying to do the same on OS X has me snagged.
The textures returned by CVOpenGLTextureCacheCreateTextureFromImage have a target of GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D.
I've found the kCVOpenGLBufferTarget key, but it seems like it is supposed to be used with CVOpenGLBufferCreate not CVPixelBufferCreate.
Is it even possible to get GL_TEXTURE_2D targeted textures on OSX with CVPixelBufferCreate, and if so how?
FWIW a listing of the CV PBO setup:
NSDictionary *bufferAttributes = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                    (__bridge NSString *)kCVPixelBufferWidthKey : @(size.width),
                                    (__bridge NSString *)kCVPixelBufferHeightKey : @(size.height),
                                    (__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{ } };
if (pool)
{
    error = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &renderTarget);
}
else
{
    error = CVPixelBufferCreate(kCFAllocatorDefault, (NSUInteger)size.width, (NSUInteger)size.height,
                                kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)bufferAttributes, &renderTarget);
}
ZAssert(!error, #"Couldn't create pixel buffer");
error = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL, [[NSOpenGLContext context] CGLContextObj], [[NSOpenGLContext format] CGLPixelFormatObj], NULL, &textureCache);
ZAssert(!error, #"Could not create texture cache.");
error = CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL, &renderTexture);
ZAssert(!error, #"Couldn't create a texture from cache.");
GLuint reference = CVOpenGLTextureGetName(renderTexture);
GLenum target = CVOpenGLTextureGetTarget(renderTexture);
UPDATE: I've been able to use the resulting GL_TEXTURE_RECTANGLE textures successfully. However, this causes a lot of shader-compatibility problems between iOS and OS X, and in any case I'd rather keep using normalised texture coordinates.
If it isn't possible to get GL_TEXTURE_2D textures directly from a CVPixelBuffer in this manner, would it be possible to create a CVOpenGLBuffer and have a CVPixelBuffer attached to it to pull the pixel data?

Just came across this, and I'm going to answer it even though it's old, in case others encounter it.
iOS uses OpenGL ES (originally 1.1, then 2.0 and 3.0). OS X uses regular (non-ES) OpenGL, with a choice of the legacy/Compatibility profile (OpenGL 2.1) or the Core profile (3.2 and later, up to 4.1 depending on hardware).
The difference here is that desktop OpenGL was designed a long time ago, when textures were restricted to power-of-two dimensions. As hardware lifted those restrictions, extensions were added, including GL_TEXTURE_RECTANGLE, which allows arbitrary sizes but is addressed with unnormalised (pixel) texture coordinates. Nowadays any GPU can handle any texture size, but for API-compatibility reasons the separate rectangle target lives on. OpenGL ES is technically a parallel but separate API, designed much more recently, so it could avoid the problem from the beginning (it never had to worry about breaking old code): it never defined a GL_TEXTURE_RECTANGLE target, it simply allows GL_TEXTURE_2D to have non-power-of-two sizes.
Short answer - OS X uses Desktop OpenGL, which for legacy compatibility reasons still treats rectangle textures separately, while iOS uses OpenGL ES, which places no size restrictions on GL_TEXTURE_2D, and so never offered a GL_TEXTURE_RECTANGLE at all. Thus, on OS X, CoreVideo produces GL_TEXTURE_RECTANGLE objects, because GL_TEXTURE_2D would waste a lot of memory, while on iOS, it produces GL_TEXTURE_2D objects because GL_TEXTURE_RECTANGLE doesn't exist, nor is it necessary.
It's an unfortunate incompatibility between OpenGL and OpenGL ES, but it is what it is and there's nothing to be done but code around it. Or, these days, you can (and probably should) consider moving to Metal.
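The practical consequence shows up in the shaders: a GL_TEXTURE_RECTANGLE is sampled with a sampler2DRect and pixel coordinates, a GL_TEXTURE_2D with a sampler2D and 0..1 coordinates. A minimal sketch of the two fragment shaders (the uniform/varying names are illustrative, not from the question's code):
// Desktop GL fragment shader for the rectangle texture Core Video hands back.
static const char *rectFragSrc =
    "#version 120\n"
    "#extension GL_ARB_texture_rectangle : require\n"
    "uniform sampler2DRect videoTex;\n"
    "uniform vec2 videoSize;   // texture size in pixels\n"
    "varying vec2 uv;          // normalised 0..1 coords from the vertex shader\n"
    "void main() {\n"
    "    gl_FragColor = texture2DRect(videoTex, uv * videoSize);\n"
    "}\n";

// The GL_TEXTURE_2D equivalent (essentially what the iOS code path already uses).
static const char *tex2DFragSrc =
    "#version 120\n"
    "uniform sampler2D videoTex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(videoTex, uv);\n"
    "}\n";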

As this appears to have been left dangling and is something I recently dealt with: no, GL_TEXTURE_RECTANGLE appears to be the only option. To get a GL_TEXTURE_2D you're going to have to render to texture.
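If it helps, here is a rough sketch of that render-to-texture hop, assuming framebuffer-object entry points are available and that copyProgram is a compiled program whose fragment shader samples a sampler2DRect; copyProgram, drawFullScreenQuad, width and height are illustrative placeholders, not part of any Apple API:
// Create a plain GL_TEXTURE_2D and a framebuffer object that renders into it.
GLuint tex2D, fbo;
glGenTextures(1, &tex2D);
glBindTexture(GL_TEXTURE_2D, tex2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex2D, 0);

// Draw a full-screen quad that samples the Core Video rectangle texture into it.
glViewport(0, 0, width, height);
glUseProgram(copyProgram);                 // program with a sampler2DRect uniform
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLTextureGetTarget(renderTexture), CVOpenGLTextureGetName(renderTexture));
drawFullScreenQuad();                      // hypothetical quad-drawing helper
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// tex2D now holds the frame and can be sampled with normalised coordinates.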

FWIW, as of 2023, CGLTexImageIOSurface2D is much faster than CVOpenGLTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture. Just ensure your CVPixelBuffers are IOSurface-backed by including (id)kCVPixelBufferIOSurfacePropertiesKey : @{} in the attributes you pass to [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:].
You will still be getting a GL_TEXTURE_RECTANGLE, but you'll be getting it way faster. I made a little shader so that I can render the GL_TEXTURE_RECTANGLE into a GL_TEXTURE_2D bound to a framebuffer.
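For reference, here is a hedged sketch of that CGLTexImageIOSurface2D path, assuming the IOSurface-backed renderTarget pixel buffer from the question and the current CGL context; the GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV pair matches kCVPixelFormatType_32BGRA:
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>
#import <OpenGL/glext.h>
#import <OpenGL/CGLIOSurface.h>
#import <IOSurface/IOSurface.h>
#import <CoreVideo/CoreVideo.h>

// Wrap the pixel buffer's IOSurface in a rectangle texture -- no copy involved.
IOSurfaceRef surface = CVPixelBufferGetIOSurface(renderTarget);  // NULL unless IOSurface-backed
GLsizei width  = (GLsizei)IOSurfaceGetWidth(surface);
GLsizei height = (GLsizei)IOSurfaceGetHeight(surface);

GLuint rectTexture;
glGenTextures(1, &rectTexture);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, rectTexture);
CGLError cglErr = CGLTexImageIOSurface2D(CGLGetCurrentContext(),
                                         GL_TEXTURE_RECTANGLE_ARB,
                                         GL_RGBA,                       // internal format
                                         width, height,
                                         GL_BGRA,                       // matches 32BGRA
                                         GL_UNSIGNED_INT_8_8_8_8_REV,
                                         surface,
                                         0);                            // plane index
// cglErr should be kCGLNoError; the texture now aliases the CVPixelBuffer's pixels.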

Related

Reverted PNG Textures with Metal, when compiled with macOS SDK 10.14

I have a very strange problem, which I did not find mentioned anywhere. My company develops plugins for various hosts. Right now we are trying to move our OpenGL code to Metal. I tried it with some hosts (like Logic and Cubase), and it worked. Here is an example:
However, new versions of those apps recently became available, compiled with the macOS 10.14 SDK, and here is what I started to get:
So we have two problems: color and flipped textures. I found a solution for the color (see code below), but I have absolutely no idea how to solve the texture problem! I can, of course, flip the textures, but then they will come out wrong in the previous app versions.
I believe something has changed in PNG loading since, if you look carefully, the text textures that are generated on the fly look the same in both cases.
Here is my code:
imageOptions = @{MTKTextureLoaderOptionSRGB : @FALSE}; // Solves the color problem
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];
where imageBuffer is the memory holding the PNG data. I also tried this approach:
CGDataProvider* imageData = CGDataProviderCreateWithData(nullptr, imageBuffer, imageBufferSize, nullptr);
CGImage* loadedImage = CGImageCreateWithPNGDataProvider(imageData, nullptr, false, kCGRenderingIntentDefault);
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithCGImage:loadedImage options:0 error:&error];
And got EXACTLY the same result.
The issue happens with all applications built with the latest (10.14) SDK, running on macOS 10.14. Does anyone have a clue what causes it, or at least a way to tell which SDK a host application was compiled with?
MTKTextureLoaderOptionOrigin is a key used to specify when to flip the pixel coordinates of the texture.
If you omit this option, the texture loader doesn’t flip loaded textures.
This option cannot be used with block-compressed texture formats, and can be used only with 2D, 2D array, and cube map textures. Each mipmap level and slice of a texture are flipped.
imageOptions = @{MTKTextureLoaderOptionSRGB : @FALSE, MTKTextureLoaderOptionOrigin : MTKTextureLoaderOriginFlippedVertically}; // Solves both the color and the flipped-texture problems
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];

OpenGL equivalent to GL_POINT_SIZE_ARRAY_OES?

I'm trying to draw point sprites in a small Mac app. I want each sprite to have its own size, and I know that OpenGL ES has the client state "GL_POINT_SIZE_ARRAY_OES".
I did some googling and discovered that there is a similar value "GL_POINT_SIZE_ARRAY_APPLE" which (you'd think) should do the same thing. For some reason, though, it doesn't seem to. Here's my drawing code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_POINT_SIZE_ARRAY_APPLE);
glVertexPointer(2, GL_FLOAT, sizeof(SpriteData), spriteVertices);
glPointSizePointerAPPLE(GL_FLOAT, sizeof(SpriteData), spriteVertices + sizeof(LocationF));
glDrawArrays(GL_POINTS, 0, spriteCount);
glDisableClientState(GL_POINT_SIZE_ARRAY_APPLE);
glDisableClientState(GL_VERTEX_ARRAY);
SpriteData is a struct containing the vertex/size data of each sprite. spriteVertices is just an interleaved array of that struct.
The vertex pointer is working fine; it's drawing the sprites, but seems to be ignoring their size values. It instead defaults to the value set by glPointSize().
Despite the fact that this code compiles with no warnings, it seems very suspicious to me that googling "GL_POINT_SIZE_ARRAY_APPLE" brings up almost no results. Is this a useless parameter? If so, how else can I achieve what I want?
There is no official OpenGL extension that exposes GL_POINT_SIZE_ARRAY_APPLE. It may be some detritus in Apple's headers, but you shouldn't use it. Instead, pass the size as a generic vertex attribute and write that value to gl_PointSize in your vertex shader.
If you want cross-platform code, you should avoid system-dependent headers. Instead, use a proper OpenGL loader, which comes with cross-platform headers that won't have system-dependent, non-standard detritus in them.
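A minimal sketch of that suggestion under a shader-based pipeline; the shader source, the attribute locations, and the SpriteData field names (location, size) are assumptions for illustration:
#include <stddef.h>   // offsetof

// Vertex shader: take the per-sprite size as a generic attribute and write it
// to gl_PointSize (GLSL 1.20 compatibility, to reuse the fixed-function matrices).
static const char *pointVertSrc =
    "#version 120\n"
    "attribute vec2 position;\n"
    "attribute float pointSize;\n"
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 0.0, 1.0);\n"
    "    gl_PointSize = pointSize;\n"
    "}\n";

// C side: enable shader-controlled point size and feed both attributes from the
// interleaved SpriteData array (positionLoc/pointSizeLoc come from glGetAttribLocation).
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
glEnableVertexAttribArray(positionLoc);
glEnableVertexAttribArray(pointSizeLoc);
glVertexAttribPointer(positionLoc, 2, GL_FLOAT, GL_FALSE, sizeof(SpriteData),
                      (const GLvoid *)offsetof(SpriteData, location));
glVertexAttribPointer(pointSizeLoc, 1, GL_FLOAT, GL_FALSE, sizeof(SpriteData),
                      (const GLvoid *)offsetof(SpriteData, size));
glDrawArrays(GL_POINTS, 0, spriteCount);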

GL_INVALID_VALUE from glFramebufferTexture2DEXT only after delete/realloc texture

I have some C# (SharpGL-esque) code which abstracts OpenGL frame buffer handling away to simple "set this texture as a 'render target'" calls. When a texture is first set as a render target, I create an FBO with matching depth buffer for that size of texture; that FBO/depth-buffer combo will then be reused for all same-sized textures.
I have a curious error as follows.
Initially the app runs and renders fine. But if I increase my window size, this can cause some code to need to resize its 'render target' texture, which it does via glDeleteTextures() and glGenTextures() (then bind, glTexImage2D, and texparams so MIN_FILTER and MAG_FILTER are both GL_NEAREST). I've observed I tend to get the same name (ID) back when doing so (as GL reuses the just-freed name).
We then hit the following code (with apologies for the slightly bastardised GL-like syntax):
void SetRenderTarget(Texture texture)
{
    if (texture != null)
    {
        var size = (texture.Width << 16) | texture.Height;
        FrameBufferInfo info;
        if (!_mapSizeToFrameBufferInfo.TryGetValue(size, out info))
        {
            info = new FrameBufferInfo();
            info.Width = texture.Width;
            info.Height = texture.Height;

            GL.GenFramebuffersEXT(1, _buffer);
            info.FrameBuffer = _buffer[0];
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);

            GL.GenRenderbuffersEXT(1, _buffer);
            info.DepthBuffer = _buffer[0];
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, info.DepthBuffer);
            GL.RenderbufferStorageEXT(GL.RENDERBUFFER_EXT, GL.DEPTH_COMPONENT16, texture.Width, texture.Height);
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, 0);
            GL.FramebufferRenderbufferEXT(GL.FRAMEBUFFER_EXT, GL.DEPTH_ATTACHMENT_EXT, GL.RENDERBUFFER_EXT, info.DepthBuffer);

            _mapSizeToFrameBufferInfo.Add(size, info);
        }
        else
        {
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);
        }
        GL.CheckFrameBufferStatus(GL.FRAMEBUFFER_EXT);
    }
    else
    {
        GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, 0, 0);
        GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, 0);
    }

    ProjectStandardOrthographic();
}
After said window resize, GL returns a GL_INVALID_VALUE error from the glFramebufferTexture2DEXT() call (identified with glGetError() and gDEBugger). If I ignore this, glCheckFrameBufferStatus() later fails with GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If I ignore this too, I can see the expected "framebuffer too dubious to do anything" errors if I check for them, and a black screen if I don't.
I'm running on an NVidia GeForce GTX 550 Ti, Vista 64 (32 bit app), 306.97 drivers. I'm using GL 3.3 with the Core profile.
Workaround and curiosity: if, when reallocating textures, I call glGenTextures() before glDeleteTextures() - so I don't get the same ID back - the problem goes away. I don't want to do this as it's a stupid kluge and increases my chances of out-of-memory errors. My theory is that GL was/is using the texture in a recent FBO and has decided that the texture ID is still in use, or is no longer valid in some way, and so isn't acceptable. Maybe?
After the problem gDEBugger shows that both FBOs (the original one with the smaller depth buffer and previous texture, and the new one with the larger combination) have the same texture ID attached.
I've tried detaching the texture from the frame buffer (via glFramebufferTexture2DEXT again) before deallocation, but to no avail (gDEBugger reflects the change but the problem still occurs). I've tried taking out the depth buffer entirely. I've tried checking the texture sizes via glGetTexLevelParameter() before I use it; it does indeed exist.
This sounds like a bug in NVIDIA's OpenGL implementation. Once you delete an object name, that object name becomes invalid, and thus should be a legitimate candidate for glGen* to return.
You should file a bug report, with a minimal case that reproduces the issue.
I don't want to do this as it's a stupid kluge and increases my chances of out of memory errors.
No, it doesn't. glGenTextures doesn't allocate storage for textures (which is where any real OOM errors might come from). It only creates the texture name. It's unfortunate that you have to use a workaround, but it's not any real concern.
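To illustrate why the gen-before-delete workaround is harmless, a sketch of the reallocation order (oldTexture, newWidth and newHeight are placeholders, not from the question's C# wrapper):
// Reserving the new name first only consumes an ID; no image storage exists yet.
GLuint newTexture;
glGenTextures(1, &newTexture);      // cheap: just reserves a name
glDeleteTextures(1, &oldTexture);   // the old name and its storage are released

glBindTexture(GL_TEXTURE_2D, newTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Storage is allocated only here, so the reversed ordering never holds two
// textures' worth of pixels at once.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);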

Buffer issue with an OpenGL ES 3D view in a Cocos2D app

I'm trying to insert an OpenGL ES 3D view into a Cocos2D app on the iPad. I'm relatively new to these frameworks, so I basically added these lines to my CCLayer:
CGRect rScreen;
// some code to define the bounds and origin of my frame
EAGL3DView * view3d = [[EAGL3DView alloc] initWithFrame:rScreen] ;
[[[CCDirector sharedDirector] openGLView] addSubview: view3d];
[view3d startAnimation];
The code I'm using for the 3D part is based on a sample code from Apple Developer : http://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html
The only changes I made were to create my view programmatically (no xib file, initWithCoder -> initWithFrame...), and I also renamed the EAGLView class & files to EAGL3DView so as not to interfere with the EAGLView that comes along with Cocos2D.
Now onto my problem: when I run this, I get an "OpenGL error 0x0502 in -[EAGLView swapBuffers]"; the 3D view is displayed properly, but the rest of the screen is completely pink.
I went into the swapBuffers function in Cocos2d EAGLView, and it turns out the only block of code that is important is this one:
if(![context_ presentRenderbuffer:GL_RENDERBUFFER_OES])
CCLOG(#"cocos2d: Failed to swap renderbuffer in %s\n", __FUNCTION__);
which, by the way, does not enter the if branch (presentRenderbuffer does not return NO), yet something is still wrong, since the CHECK_GL_ERROR() afterwards reports the 0x0502 error.
So I gather that my 3D view is somehow clobbering the OpenGL ES renderbuffer (since Cocos2D also uses OpenGL ES), which keeps the Cocos2D view from working properly. That's as far as I've got, and I can't figure out exactly what needs to be done to fix it. What do you think?
Hoping this is only a newbie problem…
Pixelvore
I think the correct approach for what you are trying to do is:
create your own custom CCSprite/CCNode class;
put all the GL code that you are using from the Apple sample into that class (i.e., overriding the draw or visit method of the class).
If you want to try and make the two GL views work nicely together, you could try reading this post, which will explain how you associate different buffers to your views.
As to the first approach, have a look at this post and this one.
To be fair, the first approach might be more complex (depending on how the Apple sample does its OpenGL work), but it will use less memory and be better optimised than the second.
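For the first approach, here is a minimal sketch, assuming cocos2d-iphone (where CCNode subclasses can override -draw); Custom3DNode and renderMy3DScene are hypothetical placeholders for the GL code lifted from the Apple sample:
#import "cocos2d.h"

// Hypothetical wrapper around the GLEssentials drawing code.
extern void renderMy3DScene(void);

@interface Custom3DNode : CCNode
@end

@implementation Custom3DNode
- (void)draw
{
    // cocos2d has already bound its framebuffer and set up its GL state here,
    // so these calls render into the same buffer as the rest of the scene.
    // Any state you change (bound buffers, blending, depth test, ...) must be
    // restored before returning, or cocos2d's own drawing will break.
    renderMy3DScene();
}
@end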

How to toggle fullscreen on Mac OSX using SDL 1.3 so that it actually works?

There is SDL_WM_ToggleFullScreen. However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it. OK, annoying, but I can reload my textures. However, when I reload my textures after toggling, it crashes on certain textures inside a memcpy called by glTexImage2D. Huh, it sure didn't crash when I loaded those textures the first time around. I even try deleting all my textures before the toggle, but I get the same result.
As a test, I reload textures without toggling fullscreen, and the textures reload fine. So, toggling the fullscreen does something funny to OpenGL. (And, by "toggling", I mean going in either direction: windowed->fullscreen or fullscreen->windowed - I get the same results.)
As an alternative, this code seems to toggle fullscreen as well:
SDL_Surface *surface = SDL_GetVideoSurface();
Uint32 flags = surface->flags;
flags ^= SDL_FULLSCREEN;
SDL_SetVideoMode(surface->w, surface->h, surface->format->BitsPerPixel, flags);
However, calling SDL_GetError after this code says that the "Invalid window" error was set during the SDL_SetVideoMode. And if I ignore it and try to load textures, there is no crashing, but my textures don't show up either (perhaps OpenGL is just immediately returning from glTexImage2D without actually doing anything). As a test, at startup time, I try calling SDL_SetVideoMode twice in a row to perform a toggle right before I even load my textures the first time. In that particular case, the textures do actually show up. Huh, what?
I am using SDL 1.3.0-6176 (posted on the SDL website January 7, 2012).
Update:
My texture uploading code is below (nothing surprising here). For clarification, this application (including the code below) is already working without any issues as a finished application for iOS, Android, PSP, and Windows. Windows is the only other version, other than Mac, that is using SDL under the hood.
unsigned int Texture::init(const void *data, int width, int height)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    return textureID;
}
it crashes on certain textures inside of a memcpy being called by glTexImage2D.
This shouldn't happen. There's some bug somewhere, either in your code or that of the OpenGL implementation. And I'd say it's yours. So please post your texture allocation and upload code for us to see.
However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it.
Yes, unfortunately this is the way it works on Mac OS X. And due to the design of its OpenGL driver model, there's little you can do about it (did I mention that Mac OS X sucks? Oh yes, I did. On occasion. Like over a hundred times). The same badly designed driver model also makes Mac OS X slow to catch up with OpenGL development.
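In practice that means treating the toggle as a full context loss. A rough sketch of the order of operations, with destroyAllTextures() and reloadAllTextures() as hypothetical hooks around code like Texture::init above:
#include "SDL.h"

void toggleFullscreen(void)
{
    // Drop every GL object we own while the old context is still current.
    destroyAllTextures();                 // glDeleteTextures on everything we created

    // Toggle the video mode; on Mac OS X this recreates the OpenGL context.
    SDL_Surface *surface = SDL_GetVideoSurface();
    Uint32 flags = surface->flags ^ SDL_FULLSCREEN;
    surface = SDL_SetVideoMode(surface->w, surface->h,
                               surface->format->BitsPerPixel, flags);
    if (surface == NULL)
        return;                           // toggle failed; handle/restore as needed

    // Rebuild everything in the new context.
    reloadAllTextures();                  // re-runs Texture::init for each image
}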
