OES/EXT/ARB_framebuffer_object - opengl-es

What are the differences between the OES/EXT/ARB_framebuffer_object extensions? Can all of these extensions be used with OpenGL ES 1.1 or OpenGL ES 2.0 applications? Or are there guidelines for which extension should be used with which version of GLES x.x?

OK, after some googling I found the following information:
GLES FBOs:
1a. are core under GLES 2,
1b. under GLES 1, are exposed via the extension GL_OES_framebuffer_object,
under which the API entry points are glSomeFunctionOES.
2. OpenGL 1.x/2.x with GL_EXT_framebuffer_object,
under which the API entry points are glSomeFunctionEXT.
3. OpenGL 3.x FBOs / GL_ARB_framebuffer_object:
under GL 3.x, FBOs are core and the API entry points are glSomeFunction.
Also, there is a "backport" extension for GL 2.x, GL_ARB_framebuffer_object,
whose API entry points are glSomeFunction(). Note the lack of an EXT or ARB suffix.
Token naming:
1a. no suffix
1b. _OES
2. _EXT
3. no suffix
Fortunately, the token names map to the same values.
Additionally, their usage differs:
1a, 1b: Depth and stencil buffers are attached separately as renderbuffers;
alternatively, attaching both as one buffer may be supported via the
extension GL_OES_packed_depth_stencil.
The depth buffer defaults to 16 bits!
2, 3: The spec allows attaching depth and stencil separately, but
virtually no consumer-level desktop hardware supports this; instead,
attaching both a stencil and a depth buffer calls for a packed depth-stencil buffer.
2. extension GL_EXT_packed_depth_stencil, type is GL_DEPTH24_STENCIL8_EXT.
3. part of the FBO spec, type is GL_DEPTH24_STENCIL8.
Note: the values of the tokens GL_DEPTH24_STENCIL8 and GL_DEPTH24_STENCIL8_EXT
are the same.
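As an aside, WebGL 1 (which follows the GLES 2 model) exposes the packed path directly through the DEPTH_STENCIL renderbuffer format. A minimal sketch, assuming a valid context `gl` and an FBO already bound (the `attachDepthStencil` helper is hypothetical):

```javascript
// Attach a combined depth-stencil renderbuffer to the currently bound FBO.
// `gl` is a WebGL(1) rendering context; width/height match the color buffer.
function attachDepthStencil(gl, width, height) {
  const rb = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
  // DEPTH_STENCIL is the packed format, analogous to GL_DEPTH24_STENCIL8.
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_STENCIL, width, height);
  gl.framebufferRenderbuffer(
    gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, gl.RENDERBUFFER, rb);
  return rb;
}
```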
Issues with GL_EXT_framebuffer_object:
a) GL_EXT_framebuffer_object might not be listed in GL 3.x contexts because
FBOs are core.
b) Also, if you have a GL 2.x context with newer hardware, it is possible that
GL_EXT_framebuffer_object is not listed but GL_ARB_framebuffer_object is.
Differences in capabilities:
FBO support via 3.x/GL_ARB_framebuffer_object allows color buffer attachments
to have different types and resolutions; additionally, MSAA and blit functionality
are part of 3.x core and part of GL_ARB_framebuffer_object.
With FBO support via GL_EXT_framebuffer_object, both blit and MSAA support
are exposed as separate extensions (GL_EXT_framebuffer_blit and
GL_EXT_framebuffer_multisample).
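The naming rules above can be distilled into a small lookup table. This is an illustrative sketch only; the `fboSuffix` helper and its key strings are hypothetical, not part of any API:

```javascript
// Hypothetical helper: given one of the API/extension combinations described
// above, return the suffix used on FBO entry points (e.g. glBindFramebuffer
// plus suffix) and on token names (e.g. GL_FRAMEBUFFER plus suffix).
function fboSuffix(api) {
  switch (api) {
    case "GLES2-core":    return { fn: "",    token: "" };     // 1a
    case "GLES1+OES_fbo": return { fn: "OES", token: "_OES" }; // 1b
    case "GL2+EXT_fbo":   return { fn: "EXT", token: "_EXT" }; // 2
    case "GL3-core":      // fall through: same naming as the ARB backport
    case "GL2+ARB_fbo":   return { fn: "",    token: "" };     // 3
    default: throw new Error("unknown API combination: " + api);
  }
}

console.log(fboSuffix("GLES1+OES_fbo").fn);  // "OES"
console.log(fboSuffix("GL2+ARB_fbo").token); // "" (no suffix)
```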

Related

In OpenGL ES what is an "external image"? Why do we need GL_OES_EGL_image_external?

I am reading through spec for external images. It says:
This extension provides a mechanism for creating EGLImage texture targets
from EGLImages. This extension defines a new texture target,
TEXTURE_EXTERNAL_OES.
I have done my best, but I can't find out what an "external image" is. This extension, and many of the related extension specs, reference "EGLImages" and similar things, but I can't figure out what they are.
Why do I need this?
Typically, to create an image, I load a file from disk. I believe that is "external".
This question basically says it is an image not created by the graphics driver, but wouldn't that mean virtually all images ever created are EGLImages or "external images"? When using OpenGL I don't remember having to worry about whether my image was external or not.
Can somebody explain what an "external" image is, why it is needed (mainly I see this with respect to OpenGL ES), and why these extensions are needed? Frankly, I am not sure what an "EGLImage" is either, or why they make the distinction.
Thank you
This is a late answer.
An external image AKA external texture is typically used to supply frames from an image stream (e.g. camera preview, decoded video) as OpenGL textures. Such frames usually have special color encodings and memory layouts (e.g. multi-plane YUV). The extension mentioned above allows sampling such images as if they were regular OpenGL textures (with a few limitations).
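For illustration, a fragment shader that samples such an external texture typically looks like the sketch below (the uniform and varying names are arbitrary, chosen for this example):

```javascript
// GLSL ES fragment shader source for sampling an external image.
// The #extension directive enables the TEXTURE_EXTERNAL_OES target; the
// sampler type is samplerExternalOES instead of sampler2D.
const externalFragSrc = `
  #extension GL_OES_EGL_image_external : require
  precision mediump float;
  uniform samplerExternalOES uCameraFrame; // e.g. bound to a camera/video EGLImage
  varying vec2 vTexCoord;
  void main() {
    // Sampling looks like an ordinary 2D texture lookup; the driver handles
    // the color-space conversion and memory layout behind the scenes.
    gl_FragColor = texture2D(uCameraFrame, vTexCoord);
  }
`;

console.log(externalFragSrc.includes("samplerExternalOES")); // true
```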

How to use gl_LastFragData in WEBGL?

I'm currently working on a THREE.js project. I need to customize a fragment shader with programmable blending instead of using predefined blending. To do this, I want to use gl_LastFragData in my fragment shader, but I got an error.
How can I use gl_LastFragData in WEBGL or is there any other equivalent way?
gl_LastFragData is not present in the WebGL or OpenGL specs.
However, there is an extension mechanism in these APIs.
You can query for the available extensions at program start and see if the desired features are available. To use an available extension in a shader program, you must activate it in the shader source code.
Your error message says that you are trying to use extension functionality while it is unavailable.
Speaking of your exact case: check the EXT_shader_framebuffer_fetch extension. Also worth checking are ARM_shader_framebuffer_fetch and NV_shader_framebuffer_fetch.
However, these extensions are written against OpenGL 2.0 and OpenGL ES 2.0. I'm not sure whether they exist as WebGL extensions.
Expect framebuffer fetch functionality to be present on mobile devices and absent on desktop devices. As far as I understand, that comes from the difference between GPU architectures for mobile and desktop (tile-based vs. immediate-mode rasterizers). A tile-based GPU can use its tile-local memory for an efficient lookup.
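A query along those lines might look like this in WebGL (a sketch; the `findFramebufferFetch` helper is hypothetical, and whether any of these extensions is exposed depends on the browser and device):

```javascript
// Try the framebuffer-fetch extensions in order of preference and report
// which one, if any, the implementation exposes. `gl` is a WebGL context.
function findFramebufferFetch(gl) {
  const candidates = [
    "EXT_shader_framebuffer_fetch",
    "ARM_shader_framebuffer_fetch",
    "NV_shader_framebuffer_fetch",
  ];
  for (const name of candidates) {
    // getExtension returns an extension object if supported, null otherwise.
    if (gl.getExtension(name)) return name;
  }
  return null; // unsupported: fall back to render-to-texture techniques
}

// Demonstration with a stub context that only exposes the ARM variant:
const stubGl = {
  getExtension: (n) => (n === "ARM_shader_framebuffer_fetch" ? {} : null),
};
console.log(findFramebufferFetch(stubGl)); // "ARM_shader_framebuffer_fetch"
```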
There is no gl_LastFragData in WebGL. WebGL is based on OpenGL ES 2.0, and WebGL2 is based on OpenGL ES 3.0. Neither of those supports gl_LastFragData.
The traditional way of using a previous result is to pass it in as a texture when generating the next result:
someOperation1(A, B) -> TempTexture1
someOperation2(TempTexture1, C) -> TempTexture2
someOperation3(TempTexture2, D) -> TempTexture1
someOperation4(TempTexture1, E) -> TempTexture2
someOperation5(TempTexture2, F) -> resultTexture/fb/canvas/window
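The swapping bookkeeping in that chain is often wrapped in a tiny "ping-pong" helper. The sketch below is illustrative; `PingPong` is not a WebGL or three.js API, and the targets here are plain strings standing in for framebuffer/texture pairs:

```javascript
// Keeps two render targets and alternates which one is read from and which
// is written to on each pass. In real code, src would be sampled as a
// texture while dst is bound as the framebuffer being rendered into.
class PingPong {
  constructor(targetA, targetB) {
    this.src = targetA; // read from this pass
    this.dst = targetB; // written to this pass
  }
  swap() {
    const t = this.src;
    this.src = this.dst;
    this.dst = t;
  }
}

// someOperation2 reads TempTexture1 and writes TempTexture2, then swap:
const pp = new PingPong("TempTexture1", "TempTexture2");
pp.swap();
console.log(pp.src); // "TempTexture2"
console.log(pp.dst); // "TempTexture1"
```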

THREE.js BufferGeometry index size

Does THREE.BufferGeometry currently support 32-bit index buffers? WebGL 1.0 doesn't, unless extension "OES_element_index_uint" is explicitly enabled. But is this somehow implemented in THREE.js by default? I could not find information about this anywhere in the docs.
You could look at the source and see that yes, three.js supports 32-bit index buffers.
Then you could look at webglstats.com and see that it claims 98% of devices support that extension. You could also probably reason that any device that doesn't support the extension is old and underpowered and not worth worrying about.
TL;DR: Yes, three.js supports 32-bit index buffers.
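The underlying decision can be sketched as follows (an illustration of the idea, not three.js's actual code; the `makeIndexArray` helper is hypothetical):

```javascript
// WebGL1's core UNSIGNED_SHORT indices can only address 65536 vertices,
// so switch to a Uint32Array (which requires OES_element_index_uint in
// WebGL1) once the geometry has more vertices than that.
function makeIndexArray(indices, vertexCount) {
  return vertexCount > 65535
    ? new Uint32Array(indices)
    : new Uint16Array(indices);
}

console.log(makeIndexArray([0, 1, 2], 100) instanceof Uint16Array);   // true
console.log(makeIndexArray([0, 1, 2], 70000) instanceof Uint32Array); // true
```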

Changing default OpenGL context profile version

To create an OpenGL context with profile version 3.2, I need to add the attributes to the pixel format when creating the context:
...,
NSOpenGLPFAOpenGLProfile, (NSOpenGLPixelFormatAttribute) NSOpenGLProfileVersion3_2Core,
...,
Is there a way (an environment variable, a global variable, a function to call before the NSOpenGLPixelFormat is created, ...) to alter the default OpenGL profile version for when no such attributes are specified? It defaults to an older version on OS X 10.10. I'm trying to integrate code that relies on newer OpenGL features with a framework (ROOT) that sets up the OpenGL context and gives no way to alter the parameters.
And is there a way to change the pixel format attributes of an OpenGL context after it has been set up?
Pixel formats associated with OpenGL contexts (the proper terminology here is "default framebuffer") are always immutable. OpenGL itself does not control or dictate this, it is actually the window system API (e.g. WGL, GLX, EGL, NSOpenGL) that impose this restriction.
The only way I see around this particular problem is if you create your own offscreen (core 3.2) context that shares resources with the legacy (2.1) context that ROOT created. If OS X actually allows you to do this context sharing (it is iffy, because the two contexts probably count as different renderers), you can draw into a renderbuffer object in your core context and then blit that renderbuffer into your legacy context using glBlitFramebuffer (...).
Note that Framebuffer Objects are not a context shareable resource. What you wind up sharing in this example is the actual image attachment (Renderbuffer or Texture), and that means you will have to maintain separate FBOs with the same attachments in both contexts.
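The blit step described there has the same shape in WebGL2 as in desktop GL, so as a sketch (shown via WebGL2's blitFramebuffer; the `blitColor` helper is hypothetical, and the two framebuffers are assumed to be the same size):

```javascript
// Copy the color contents of one framebuffer into another.
// `gl` is a WebGL2 context; srcFbo/dstFbo are WebGLFramebuffer objects
// (null means the default framebuffer).
function blitColor(gl, srcFbo, dstFbo, width, height) {
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, srcFbo);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, dstFbo);
  gl.blitFramebuffer(
    0, 0, width, height, // source rectangle
    0, 0, width, height, // destination rectangle
    gl.COLOR_BUFFER_BIT, // copy color only
    gl.NEAREST           // no filtering needed for a 1:1 copy
  );
}
```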
glutInitContextVersion(3, 3);
can set the OpenGL version to 3.3; you can change the version as you like. (Note that this only applies when the context is created through GLUT/FreeGLUT, which is not the case with NSOpenGL here.)

Sharing an OpenGL framebuffer between processes in Mac OS X

Is there any way in Mac OS X to share an OpenGL framebuffer between processes? That is, I want to render to an off-screen target in one process and display it in another.
You can do this with DirectX (actually DXGI) in Windows by creating a surface (the DXGI equivalent of an OpenGL framebuffer) in shared mode, getting an opaque handle for that surface, passing that to another process via whatever means you like, then creating a surface in that other process, but passing in the existing handle. You use the surface as a render target in one process and use it as a texture in the other to consume as you wish. In fact, the whole compositing window system works like this from Vista onwards.
If this isn't possible I can of course get the contents of the framebuffer into system memory and use cross-process shared memory to get it to the target process, then upload it again from there, but that would be unnecessarily slow.
Depending on what you're really trying to do this sample code project may be what you want:
MultiGPUIOSurface sample code
It really depends upon the context of how you're using it.
Objects that may be shared between contexts include buffer objects,
program and shader objects, renderbuffer objects, sampler objects,
sync objects, and texture objects (except for the texture objects
named zero).
Some of these objects may contain views (alternate interpretations) of
part or all of the data store of another object. Examples are texture
buffer objects, which contain a view of a buffer object’s data store,
and texture views, which contain a view of another texture object’s
data store. Views act as references on the object whose data store is
viewed.
Objects which contain references to other objects include framebuffer,
program pipeline, query, transform feedback, and vertex array objects.
Such objects are called container objects and are not shared.
Chapter 5 / OpenGL-4.4 core specification
The reason you can do those things on Windows and not on OS X is that Windows exposes an API that allows DirectX surfaces to be shared between processes. If OS X doesn't have the equivalent capability within its OpenGL API, then you're going to have to come up with your own solution. Take a look at the OpenGL Programming Guide for Mac; there's a small section that describes using multiple OpenGL contexts.
