OpenGL ES 2.0 attach texture to stencil buffer - opengl-es

It is known that OpenGL ES 2.0 does not have stencil texture formats such as GL_STENCIL_INDEX8, GL_DEPTH24_STENCIL8_OES, etc.
Is it possible to use GL_LUMINANCE or GL_ALPHA textures for this purpose?
glGenTextures(1, &byteTex);
glBindTexture(GL_TEXTURE_2D, byteTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width(), height(), 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, byteTex, 0);
In other words, is it possible to have the stencil buffer rendered to a texture?
P.S. It is possible to "cover" the stenciled area with a quad, but....

No. In ES 2.0, you can only use renderbuffers for the stencil attachment.
From section "Framebuffer Attachment Completeness" in the ES 2.0 spec (page 117):
If attachment is STENCIL_ATTACHMENT, then image must have a stencil renderable internal format.
Table 4.5 on the same page lists STENCIL_INDEX8 as the only internal format to be stencil-renderable. And on the previous page, it says:
Formats not listed in table 4.5, including compressed internal formats, are not color-, depth-, or stencil-renderable, no matter which components they contain.
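So the only conformant setup in ES 2.0 is a STENCIL_INDEX8 renderbuffer. A minimal sketch (width, height and an already bound framebuffer are assumed):
GLuint stencilRb;
glGenRenderbuffers(1, &stencilRb);
glBindRenderbuffer(GL_RENDERBUFFER, stencilRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRb);
/* the stencil data lives in the renderbuffer and cannot be sampled as a texture */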
ES 2.0 is a very minimal version of OpenGL. You're clearly exceeding the scope of ES 2.0. ES 3.0 introduces depth/stencil textures, but still does not support sampling the stencil part of those textures. ES 3.1 introduces sampling the stencil part of depth/stencil textures.
There is an OES_texture_stencil8 extension defined, but it looks like a mess to me. It says that it is based on ES 3.0, but then partly references the ES 3.1 spec. And it says that it has a dependency on an OES_stencil_texturing extension, which is nowhere to be found on www.khronos.org, or anywhere else in the Google-visible part of the internet. But since it's for ES 3.x, it wouldn't help you anyway.

Related

Is HDR rendering possible in OpenGL ES?

I'm NOT talking about actually rendering to HDR displays here.
I'm trying to get my game to look better, and one of the ways I've found online is to use an HDR pipeline in post-processing. According to this tutorial, to accomplish this you need to render to a framebuffer whose texture has an internal format of GL_RGB16F, GL_RGBA16F, GL_RGB32F or GL_RGBA32F. Unfortunately, I looked in the OpenGL ES 3.0 docs, and they tell me (on page 132) that there is no floating-point internal format that is color-renderable, which leads to an incomplete framebuffer. Am I oblivious to something very obvious, or is an HDR pipeline in OpenGL ES 3.0 impossible?
I got it working by generating the texture this way:
FloatBuffer floatBuffer = ByteBuffer.allocateDirect(w * h * 4 * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
// allocateDirect(width * height * componentCount * bytesPerFloat)
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA16F, w, h, 0, GLES30.GL_RGBA, GLES30.GL_FLOAT, floatBuffer);
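For what it's worth, unextended ES 3.0 still does not make float formats color-renderable; this works on devices that expose EXT_color_buffer_float (or EXT_color_buffer_half_float), so it's worth checking framebuffer completeness after attaching the texture. A minimal sketch in C (texture and FBO names are illustrative):
/* Create an RGBA16F color attachment and verify the FBO is complete.
   Rendering to it relies on EXT_color_buffer_float or EXT_color_buffer_half_float. */
GLuint hdrTex, hdrFbo;
glGenTextures(1, &hdrTex);
glBindTexture(GL_TEXTURE_2D, hdrTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, NULL);
glGenFramebuffers(1, &hdrFbo);
glBindFramebuffer(GL_FRAMEBUFFER, hdrFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, hdrTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* float color attachments are not renderable on this device */
}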

Color format of texture created by CVOpenGLTextureCacheCreateTextureFromImage (for OSX/MAC, NOT ES)

I've been experimenting with OpenGL on OSX, with an ES implementation as reference code. The goal is to render an image buffer (CVImageBuffer) which is in YUV format.
I need to know how to specify the color format (or is it fixed to a particular type?) of the texture that gets created using CVOpenGLTextureCacheCreateTextureFromImage() API. I'm trying to figure this out so that I can appropriately access/process the colors in the fragment shader (RGB vs YUV).
Also, I can see that there is an "internalFormat" option that can be used to control this in the OpenGL-ES version of the API.
Thanks!

OES/EXT/ARB_framebuffer_object

What are the differences between the OES/EXT/ARB_framebuffer_object extensions? Can all of these extensions be used with OpenGL ES 1.1 or OpenGL ES 2.0 applications? Or are there any guidelines w.r.t. which extension should be used with which version of GL ES x.x?
OK, after some googling I found the following:
1. GLES FBOs
a. are core under GLES 2
b. under GLES 1, are exposed via the extension GL_OES_framebuffer_object,
under which API entry points are glFunctionNameOES
2. OpenGL 1.x/2.x with GL_EXT_framebuffer_object,
under which API entry points are glSomeFunctionEXT
3. OpenGL 3.x FBOs / GL_ARB_framebuffer_object
under GL 3.x, FBOs are core and API entry points are glSomeFunction;
also, there is a "backport" extension for GL 2.x, GL_ARB_framebuffer_object,
whose API entry points are glSomeFunction(). Note the lack of an EXT or ARB suffix.
Token naming:
1a. no suffix
1b. _OES
2. _EXT
3. no suffix
Fortunately, the token names map to the same values.
Additionally, their usage is different:
1a, 1b: Depth and stencil buffers are attached separately as renderbuffers;
alternatively, attaching both as one buffer may be supported via the
extension GL_OES_packed_depth_stencil.
The depth buffer defaults to 16 bits!
2, 3: The spec allows for attaching depth and stencil separately, but
consumer-level desktop hardware does not support this; instead, attaching
both a stencil and a depth buffer calls for a packed depth-stencil image
(a sketch follows below).
2. extension GL_EXT_packed_depth_stencil, type is GL_DEPTH24_STENCIL8_EXT
3. part of the FBO spec, type is GL_DEPTH24_STENCIL8
Note: the values of the tokens GL_DEPTH24_STENCIL8 and GL_DEPTH24_STENCIL8_EXT
are the same.
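For instance, with 3.x / GL_ARB_framebuffer_object the combined attachment looks roughly like this (a sketch; width/height and an already bound FBO are assumed):
GLuint depthStencilRb;
glGenRenderbuffers(1, &depthStencilRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
/* one packed image attached to both attachment points */
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthStencilRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRb);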
Issues with GL_EXT_framebuffer_object:
a) GL_EXT_framebuffer_object might not be listed in GL 3.x contexts because
FBOs are core.
b) also, if you have a GL 2.x context on newer hardware, it is possible that
GL_EXT_framebuffer_object is not listed but GL_ARB_framebuffer_object is.
Differences in capabilities:
FBO support via 3.x / GL_ARB_framebuffer_object allows color buffer attachments
to have different types and resolutions; additionally, MSAA and blit functionality
are part of 3.x core and of GL_ARB_framebuffer_object.
With FBO support via GL_EXT_framebuffer_object, both blit and MSAA support
are exposed as separate extensions.
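To make the naming differences concrete, here is a sketch of run-time selection on desktop GL (has_extension() is a hypothetical helper that scans the string returned by glGetString(GL_EXTENSIONS)):
GLuint fbo;
if (has_extension("GL_ARB_framebuffer_object")) {
    glGenFramebuffers(1, &fbo);            /* no suffix on entry points or tokens */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
} else if (has_extension("GL_EXT_framebuffer_object")) {
    glGenFramebuffersEXT(1, &fbo);         /* EXT-suffixed entry points and tokens */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
}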

Core Video pixel buffers as GL_TEXTURE_2D

So I've setup CVPixelBuffer's and tied them to OpenGL FBOs successfully on iOS. But now trying to do the same on OSX has me snagged.
The textures from CVOpenGLTextureCacheCreateTextureFromImage return as GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D targets.
I've found the kCVOpenGLBufferTarget key, but it seems like it is supposed to be used with CVOpenGLBufferCreate not CVPixelBufferCreate.
Is it even possible to get GL_TEXTURE_2D targeted textures on OSX with CVPixelBufferCreate, and if so how?
FWIW a listing of the CV PBO setup:
NSDictionary *bufferAttributes = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                    (__bridge NSString *)kCVPixelBufferWidthKey : @(size.width),
                                    (__bridge NSString *)kCVPixelBufferHeightKey : @(size.height),
                                    (__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{ } };
if (pool)
{
    error = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &renderTarget);
}
else
{
    error = CVPixelBufferCreate(kCFAllocatorDefault, (NSUInteger)size.width, (NSUInteger)size.height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)bufferAttributes, &renderTarget);
}
ZAssert(!error, @"Couldn't create pixel buffer");
error = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL, [[NSOpenGLContext context] CGLContextObj], [[NSOpenGLContext format] CGLPixelFormatObj], NULL, &textureCache);
ZAssert(!error, @"Could not create texture cache.");
error = CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL, &renderTexture);
ZAssert(!error, @"Couldn't create a texture from cache.");
GLuint reference = CVOpenGLTextureGetName(renderTexture);
GLenum target = CVOpenGLTextureGetTarget(renderTexture);
UPDATE: I've been able to successfully use the resulting GL_TEXTURE_RECTANGLE textures. However, this will cause a lot of problems with the shaders for compatibility between iOS and OSX. And anyway I'd rather continue to use normalised texture coordinates.
If it isn't possible to get GL_TEXTURE_2D textures directly from a CVPixelBuffer in this manner, would it be possible to create a CVOpenGLBuffer and have a CVPixelBuffer attached to it to pull the pixel data?
Just came across this, and I'm going to answer it even though it's old, in case others encounter it.
iOS uses OpenGL ES (originally 2.0, then 3.0). OS X uses regular old (non-ES) OpenGL, with a choice of Core profile (3.2 and up) or Legacy/Compatibility profile (2.1).
The difference here is that OpenGL (non-ES) was designed a long time ago, when there were many restrictions on texture sizes. As cards lifted those restrictions, extensions were added, including GL_TEXTURE_RECTANGLE. Now it's no big deal for any GPU to support any size texture, but for API compatibility reasons they can't really fix OpenGL. Since OpenGL ES is technically a parallel, but separate, API, which was designed much more recently, they were able to correct the problem from the beginning (i.e. they never had to worry about breaking old stuff). So for OpenGL ES they never defined a GL_TEXTURE_RECTANGLE, they just defined that GL_TEXTURE_2D has no size restrictions.
Short answer - OS X uses Desktop OpenGL, which for legacy compatibility reasons still treats rectangle textures separately, while iOS uses OpenGL ES, which places no size restrictions on GL_TEXTURE_2D, and so never offered a GL_TEXTURE_RECTANGLE at all. Thus, on OS X, CoreVideo produces GL_TEXTURE_RECTANGLE objects, because GL_TEXTURE_2D would waste a lot of memory, while on iOS, it produces GL_TEXTURE_2D objects because GL_TEXTURE_RECTANGLE doesn't exist, nor is it necessary.
It's an unfortunate incompatibility between OpenGL and OpenGL ES, but it is what it is and there's nothing to be done but code around it. Or, now, you can (and probably should consider) moving on to Metal.
As this appears to have been left dangling and is something I recently dealt with: no, GL_TEXTURE_RECTANGLE appears to be the only target you can get this way. To get to a GL_TEXTURE_2D you're going to have to render to texture.
FWIW, as of 2023, the modern CGLTexImageIOSurface2D is much faster than CVOpenGLESTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture. Just ensure your CVPixelBuffers are IOSurface-backed by including (id)kCVPixelBufferIOSurfacePropertiesKey: @{}, in the attributes passed to [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:].
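A rough sketch of that path in C, assuming an IOSurface-backed CVPixelBufferRef (pixelBuffer) and a CGLContextObj (cglContext); the names and the BGRA format/type pairing are illustrative:
/* Wrap the pixel buffer's IOSurface in a rectangle texture via CGL. */
IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE, tex);
CGLError err = CGLTexImageIOSurface2D(cglContext, GL_TEXTURE_RECTANGLE,
                                      GL_RGBA,
                                      (GLsizei)IOSurfaceGetWidth(surface),
                                      (GLsizei)IOSurfaceGetHeight(surface),
                                      GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                                      surface, 0 /* plane */);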
You will still be getting a GL_TEXTURE_RECTANGLE, but you'll be getting it way faster. I made a little shader so that I can render the GL_TEXTURE_RECTANGLE into a GL_TEXTURE_2D bound to a frame buffer.
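For anyone doing the same conversion pass, the fragment shader is essentially a one-liner; a sketch (legacy GLSL 1.20, with srcSize/uv as illustrative names), shown here as a C string:
/* Fragment shader for drawing into a GL_TEXTURE_2D-backed FBO while sampling a
   GL_TEXTURE_RECTANGLE source. sampler2DRect takes pixel coordinates, so the
   normalized UVs from the vertex shader are scaled by the source size. */
static const char *rectToTex2DFragSrc =
    "#version 120\n"
    "#extension GL_ARB_texture_rectangle : enable\n"
    "uniform sampler2DRect srcTex;\n"
    "uniform vec2 srcSize;   /* width/height of the rectangle texture in pixels */\n"
    "varying vec2 uv;        /* 0..1 coordinates from the vertex shader */\n"
    "void main() {\n"
    "    gl_FragColor = texture2DRect(srcTex, uv * srcSize);\n"
    "}\n";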

Can OpenGL ES render into CPU memory?

Is it possible to render all of the OpenGL ES output into a normally allocated buffer instead of the framebuffer:
/* render into this buffer */
GLubyte* buffer = (GLubyte*) calloc(width * height * 4, sizeof(GLubyte));
I want to be able to convert those rendered images into textures for other uses.
I'm using OpenGL ES 1.3 with the standard C API.
For this you can't avoid a call to glReadPixels, which copies the content of the framebuffer into system memory. But if you want to copy it into a texture, you can do that directly using glCopyTex(Sub)Image2D, or by using FBOs and rendering straight into the texture without the need for a copy (though I'm not sure if FBOs are supported in ES). But of course, you cannot render directly into system memory (for textures it works using FBOs, as they are stored in GPU memory).
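A sketch of the glReadPixels route into the buffer from the question (GL_RGBA / GL_UNSIGNED_BYTE is the combination that is always readable in ES):
/* Copies the currently bound framebuffer into client memory; glReadPixels blocks
   until rendering has finished. */
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
/* buffer now holds the pixels, bottom row first; it could be re-uploaded with
   glTexImage2D if a texture copy is needed without glCopyTexImage2D or FBOs */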
