The code I'm working with destroys the existing texture and creates a new one before setting a new EGLImage as the texture's target:
glDeleteTextures(1, &m_tex);
eglDestroyImageKHR(display, m_eglImage);
glGenTextures(1, &m_tex);
glBindTexture(GL_TEXTURE_2D, m_tex);
m_eglImage = newEglImage;
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, m_eglImage);
However, there is an issue with a shared EGL context that still refers to the original texture, or with an FBO that has the original texture attached (I haven't found the culprit yet).
I'd like to confirm whether it's valid/correct to keep the original texture and simply give it a new EGLImage target, replacing the existing one, like this:
eglDestroyImageKHR(display, m_eglImage);                  // destroy only the old EGLImage handle
m_eglImage = newEglImage;
glBindTexture(GL_TEXTURE_2D, m_tex);                      // keep the existing texture name
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, m_eglImage);  // retarget it to the new image
I've tried this and it works on my platform and works around the issue, but I'm looking for assurance that it is valid and will work on all platforms, and is not just a fluke of my GPU driver.
I have a very strange problem which I did not find mentioned anywhere. My company develops plugins for various hosts. Right now we are trying to move our OpenGL code to Metal. I tried it with some of the hosts (like Logic and Cubase), and it worked.
However, recently new versions of those apps became available, compiled against the macOS 10.14 SDK, and the rendering broke.
So we have two problems: colors and flipped textures. I found a solution for the color problem (see the code below), but I have absolutely no idea how to solve the texture problem! I can, of course, flip the textures, but then they will appear corrupted in the previous app versions.
I believe something has changed in PNG loading, since, if you look carefully, the text textures that are generated on the fly look the same in both cases.
Here is my code:
imageOptions = @{MTKTextureLoaderOptionSRGB : @FALSE}; // Solves the color problem
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];
where imageData holds the raw PNG file in memory. I also tried this approach:
CGDataProviderRef imageData = CGDataProviderCreateWithData(nullptr, imageBuffer, imageBufferSize, nullptr);
CGImageRef loadedImage = CGImageCreateWithPNGDataProvider(imageData, nullptr, false, kCGRenderingIntentDefault);
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithCGImage:loadedImage options:nil error:&error];
And got EXACTLY the same result.
The issue happens with all applications built with the latest (10.14) SDK, running on macOS 10.14. Does anyone have a clue what causes it, or at least a way to detect which SDK the host application was compiled with?
MTKTextureLoaderOptionOrigin: a key used to specify when to flip the pixel coordinates of the texture.
If you omit this option, the texture loader doesn’t flip loaded textures.
This option cannot be used with block-compressed texture formats, and can be used only with 2D, 2D array, and cube map textures. Each mipmap level and slice of a texture are flipped.
imageOptions = @{MTKTextureLoaderOptionSRGB : @FALSE, MTKTextureLoaderOptionOrigin : @TRUE}; // Solves both the color and the flipping problems
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];
We load textures through SDL_image, then upload them to OpenGL with glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h,
             0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, texture->pixels);
On my Windows machine that runs fine, but on my friend's Mac the colors are shifted around: he gets a strong blueish tint on his display. That has to do with the format/type combination (here GL_BGRA_EXT with GL_UNSIGNED_BYTE), of course. We tried every combination we could find (all compiling correctly), but none gives correct output on the Mac. Any ideas how to figure out how the Mac interprets the pixel array provided by SDL_image?
I haven't started using SDL with OpenGL yet, but here are some potential keywords that might be relevant.
RGBA/BGRA?
Colors are off in SDL program
http://www.opengl.org/wiki/Common_Mistakes
#if for TARGET_CPU_PPC and using a consistent value like GL_RGBA for everything.
Hope this can get you started.
Okay, thanks to your links, some investigation, and a few lost brain cells, we found that we can detect the byte order of the pixel data from the masks defined in the SDL surface (surface->format->Rmask) and use that to decide whether to pass GL_UNSIGNED_INT_8_8_8_8 or GL_UNSIGNED_INT_8_8_8_8_REV.
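For illustration, a minimal sketch of that check in plain C (the surface variable and a 32-bit RGBA/ABGR-ordered surface from SDL_image are assumptions, not taken from the actual code):
// Sketch only: pick the packed pixel type from the SDL surface's red mask.
// SDL masks describe where each channel sits inside the 32-bit pixel value,
// which maps directly onto OpenGL's packed pixel types.
GLenum type;
if (surface->format->Rmask == 0x000000FF)
    type = GL_UNSIGNED_INT_8_8_8_8_REV;   // red in the low byte of the pixel value
else
    type = GL_UNSIGNED_INT_8_8_8_8;       // red in the high byte of the pixel value

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h,
             0, GL_RGBA, type, surface->pixels);
// (Other channel orders, e.g. ARGB, would need GL_BGRA as the format instead.)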
So I've setup CVPixelBuffer's and tied them to OpenGL FBOs successfully on iOS. But now trying to do the same on OSX has me snagged.
The textures from CVOpenGLTextureCacheCreateTextureFromImage return as GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D targets.
I've found the kCVOpenGLBufferTarget key, but it seems like it is supposed to be used with CVOpenGLBufferCreate not CVPixelBufferCreate.
Is it even possible to get GL_TEXTURE_2D targeted textures on OSX with CVPixelBufferCreate, and if so how?
FWIW, a listing of the CVPixelBuffer and texture cache setup:
NSDictionary *bufferAttributes = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                    (__bridge NSString *)kCVPixelBufferWidthKey : @(size.width),
                                    (__bridge NSString *)kCVPixelBufferHeightKey : @(size.height),
                                    (__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{ } };
if (pool)
{
    error = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &renderTarget);
}
else
{
    error = CVPixelBufferCreate(kCFAllocatorDefault, (NSUInteger)size.width, (NSUInteger)size.height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)bufferAttributes, &renderTarget);
}
ZAssert(!error, @"Couldn't create pixel buffer");
error = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL, [[NSOpenGLContext context] CGLContextObj], [[NSOpenGLContext format] CGLPixelFormatObj], NULL, &textureCache);
ZAssert(!error, @"Could not create texture cache.");
error = CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL, &renderTexture);
ZAssert(!error, @"Couldn't create a texture from cache.");
GLuint reference = CVOpenGLTextureGetName(renderTexture);
GLenum target = CVOpenGLTextureGetTarget(renderTexture);
UPDATE: I've been able to use the resulting GL_TEXTURE_RECTANGLE textures successfully. However, this causes a lot of shader-compatibility problems between iOS and OSX, and in any case I'd rather keep using normalised texture coordinates.
If it isn't possible to get GL_TEXTURE_2D textures directly from a CVPixelBuffer in this manner, would it be possible to create a CVOpenGLBuffer and have a CVPixelBuffer attached to it to pull the pixel data?
Just came across this, and I'm going to answer it even though it's old, in case others encounter it.
iOS uses OpenGL ES (1.1 originally, later 2.0 and 3.0). OS X uses regular old (non-ES) OpenGL, with a choice of the Core profile (3.2 and later) or the legacy/Compatibility profile (2.1 only).
The difference here is that OpenGL (non-ES) was designed a long time ago, when there were many restrictions on texture sizes (most notably, dimensions had to be powers of two). As hardware lifted those restrictions, extensions were added, including GL_TEXTURE_RECTANGLE. Now it's no big deal for any GPU to support any size of texture, but for API-compatibility reasons they can't really change OpenGL. Since OpenGL ES is technically a parallel but separate API, designed much more recently, the problem could be corrected from the beginning (i.e. without worrying about breaking old code). So OpenGL ES never defined a GL_TEXTURE_RECTANGLE; it simply defines that GL_TEXTURE_2D has no size restrictions.
Short answer - OS X uses desktop OpenGL, which for legacy-compatibility reasons still treats rectangle textures separately, while iOS uses OpenGL ES, which places no size restrictions on GL_TEXTURE_2D and so never offered GL_TEXTURE_RECTANGLE at all. Thus, on OS X, CoreVideo produces GL_TEXTURE_RECTANGLE objects (a GL_TEXTURE_2D would have to be padded out to power-of-two dimensions and waste a lot of memory), while on iOS it produces GL_TEXTURE_2D objects because GL_TEXTURE_RECTANGLE doesn't exist, nor is it necessary.
It's an unfortunate incompatibility between OpenGL and OpenGL ES, but it is what it is and there's nothing to be done but code around it. Or, now, you can (and probably should consider) moving on to Metal.
As this appears to have been left dangling and is something I recently dealt with: no, GL_TEXTURE_RECTANGLE appears to be the only option. To get a GL_TEXTURE_2D you're going to have to render to texture.
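A rough sketch of that render-to-texture step in plain desktop GL (purely illustrative; width, height, the shader program and the quad draw are assumed to exist elsewhere):
// Create the GL_TEXTURE_2D that will receive the copy.
GLuint tex2D, fbo;
glGenTextures(1, &tex2D);
glBindTexture(GL_TEXTURE_2D, tex2D);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Attach it to an FBO.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2D, 0);

// Bind the rectangle texture from CVOpenGLTextureGetName(), bind a program
// whose fragment shader samples a sampler2DRect (rectangle textures use
// unnormalised pixel coordinates), and draw a full-screen quad into the FBO.
glViewport(0, 0, width, height);
// ... draw the quad here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);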
FWIW, as of 2023 the modern CGLTexImageIOSurface2D() is much faster than CVOpenGLTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture. Just ensure your CVPixelBuffers are IOSurface-backed by including (id)kCVPixelBufferIOSurfacePropertiesKey : @{} in the attributes passed to [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:].
You will still be getting a GL_TEXTURE_RECTANGLE, but you'll be getting it way faster. I made a little shader so that I can render the GL_TEXTURE_RECTANGLE into a GL_TEXTURE_2D bound to a frame buffer.
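A minimal sketch of that path (assuming an IOSurface-backed CVPixelBuffer named pixelBuffer in kCVPixelFormatType_32BGRA and a current CGL context; error handling omitted):
// CVPixelBufferGetIOSurface() returns NULL if the buffer is not IOSurface backed.
IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
GLsizei w = (GLsizei)IOSurfaceGetWidth(surface);
GLsizei h = (GLsizei)IOSurfaceGetHeight(surface);

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE, tex);   // still a rectangle texture
CGLTexImageIOSurface2D(CGLGetCurrentContext(),
                       GL_TEXTURE_RECTANGLE,
                       GL_RGBA,                     // internal format
                       w, h,
                       GL_BGRA,                     // layout of the 32BGRA surface
                       GL_UNSIGNED_INT_8_8_8_8_REV,
                       surface,
                       0);                          // plane index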
I have some C# (SharpGL-esque) code which abstracts OpenGL frame buffer handling away to simple "set this texture as a 'render target'" calls. When a texture is first set as a render target, I create an FBO with matching depth buffer for that size of texture; that FBO/depth-buffer combo will then be reused for all same-sized textures.
I have a curious error as follows.
Initially the app runs and renders fine. But if I increase my window size, this can cause some code to need to resize its 'render target' texture, which it does via glDeleteTextures() and glGenTextures() (then bind, glTexImage2D, and set the texture parameters so MIN_FILTER and MAG_FILTER are both GL_NEAREST). I've observed I tend to get the same name (ID) back when doing so (as GL reuses the just-freed name).
We then hit the following code (with apologies for the slightly bastardised GL-like syntax):
void SetRenderTarget(Texture texture)
{
    if (texture != null)
    {
        var size = (texture.Width << 16) | texture.Height;
        FrameBufferInfo info;
        if (!_mapSizeToFrameBufferInfo.TryGetValue(size, out info))
        {
            // First texture of this size: create a new FBO plus a matching depth buffer.
            info = new FrameBufferInfo();
            info.Width = texture.Width;
            info.Height = texture.Height;

            GL.GenFramebuffersEXT(1, _buffer);
            info.FrameBuffer = _buffer[0];
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);

            GL.GenRenderbuffersEXT(1, _buffer);
            info.DepthBuffer = _buffer[0];
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, info.DepthBuffer);
            GL.RenderbufferStorageEXT(GL.RENDERBUFFER_EXT, GL.DEPTH_COMPONENT16, texture.Width, texture.Height);
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, 0);
            GL.FramebufferRenderbufferEXT(GL.FRAMEBUFFER_EXT, GL.DEPTH_ATTACHMENT_EXT, GL.RENDERBUFFER_EXT, info.DepthBuffer);

            _mapSizeToFrameBufferInfo.Add(size, info);
        }
        else
        {
            // Reuse the existing FBO for this size; just attach the new texture.
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);
        }
        GL.CheckFrameBufferStatus(GL.FRAMEBUFFER_EXT);
    }
    else
    {
        // Detach and fall back to the default framebuffer.
        GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, 0, 0);
        GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, 0);
    }
    ProjectStandardOrthographic();
}
After said window resize, GL returns a GL_INVALID_VALUE error from the glFramebufferTexture2DEXT() call (identified with glGetError() and gDEBugger). If I ignore this, glCheckFramebufferStatus() later fails with GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If I ignore this too, I see the expected "framebuffer too dubious to do anything" errors if I check for them, and a black screen if I don't.
I'm running on an NVidia GeForce GTX 550 Ti, Vista 64 (32 bit app), 306.97 drivers. I'm using GL 3.3 with the Core profile.
Workaround and curiosity: if, when reallocating textures, I call glGenTextures() before glDeleteTextures() - to avoid getting the same ID back - the problem goes away. I don't want to do this, as it's a stupid kludge and it increases my chances of out-of-memory errors. I'm theorising it's because GL was/is using the texture in a recent FBO, has now decided that texture ID is in use or is no longer valid in some way, and so won't accept it? Maybe?
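A sketch of that workaround in plain GL calls (illustrative names, not the actual C# wrapper):
// Reserve the replacement name first, so the driver cannot hand back the
// name that is still attached to an FBO, then delete the old texture.
GLuint newTex;
glGenTextures(1, &newTex);
glDeleteTextures(1, &oldTex);
glBindTexture(GL_TEXTURE_2D, newTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,   // resized render-target dimensions
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
oldTex = newTex;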
After the problem gDEBugger shows that both FBOs (the original one with the smaller depth buffer and previous texture, and the new one with the larger combination) have the same texture ID attached.
I've tried detaching the texture from the framebuffer (via glFramebufferTexture2DEXT again) before deallocation, but to no avail (gDEBugger reflects the change but the problem still occurs). I've tried taking out the depth buffer entirely. I've tried checking the texture's size via glGetTexLevelParameter() before I use it; it does indeed exist.
This sounds like a bug in NVIDIA's OpenGL implementation. Once you delete an object name, that object name becomes invalid, and thus should be a legitimate candidate for glGen* to return.
You should file a bug report, with a minimal case that reproduces the issue.
I don't want to do this as it's a stupid kluge and increases my chances of out of memory errors.
No, it doesn't. glGenTextures doesn't allocate storage for textures (which is where any real OOM errors might come from). It only creates the texture name. It's unfortunate that you have to use a workaround, but it's not any real concern.
There is SDL_WM_ToggleFullScreen. However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it. Ok, annoying, but I can reload my textures. However, when I reload my textures after toggling, it crashes on certain textures inside of a memcpy being called by glTexImage2D. Huh, it sure didn't crash when I loaded those textures the first time around. I even try deleting all my textures before the toggle, but I get the same result.
As a test, I reload textures without toggling fullscreen, and the textures reload fine. So, toggling the fullscreen does something funny to OpenGL. (And, by "toggling", I mean going in either direction: windowed->fullscreen or fullscreen->windowed - I get the same results.)
As an alternative, this code seems to toggle fullscreen as well:
SDL_Surface *surface = SDL_GetVideoSurface();
Uint32 flags = surface->flags;
flags ^= SDL_FULLSCREEN;    // toggle the fullscreen flag
SDL_SetVideoMode(surface->w, surface->h, surface->format->BitsPerPixel, flags);
However, calling SDL_GetError after this code says that the "Invalid window" error was set during the SDL_SetVideoMode. And if I ignore it and try to load textures, there is no crashing, but my textures don't show up either (perhaps OpenGL is just immediately returning from glTexImage2D without actually doing anything). As a test, at startup time, I try calling SDL_SetVideoMode twice in a row to perform a toggle right before I even load my textures the first time. In that particular case, the textures do actually show up. Huh, what?
I am using SDL 1.3.0-6176 (posted on the SDL website January 7, 2012).
Update:
My texture uploading code is below (nothing surprising here). For clarification, this application (including the code below) already works without any issues as a finished application on iOS, Android, PSP, and Windows. Windows is the only version other than Mac that uses SDL under the hood.
unsigned int Texture::init(const void *data, int width, int height)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    return textureID;
}
it crashes on certain textures inside of a memcpy being called by glTexImage2D.
This shouldn't happen. There's some bug somewhere, either in your code or that of the OpenGL implementation. And I'd say it's yours. So please post your texture allocation and upload code for us to see.
However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it.
Yes, unfortunately this is the way it works on MacOS X. And due to the design of its OpenGL driver model, there's little you can do about it (did I mention that MacOS X sucks; oh yes, I did. On occasion. Like over a hundred times). The same badly designed driver model is also what makes MacOS X so slow in catching up with OpenGL development.