I'm trying to build OpenSceneGraph 3.2 for the Ubuntu armhf architecture, but I'm getting a compile error about a symbol not being found. The symbol in question is glReadBuffer. I looked at the GLES2/gl2.h header, and indeed, that symbol is not there. However, the symbol is present in GLES3/gl3.h, and documentation online suggests that the function was added in OpenGL ES 3.0. I did, however, find a function named glReadBufferNV in GLES2/gl2ext.h (which is not #include'd in the source files).
I'm wondering if glReadBufferNV can be used instead of glReadBuffer, and what the possible side effects might be. I suspect that the NV stands for Nvidia, and that it is an Nvidia-only implementation. Is this correct? If so, is there any way to get glReadBuffer in OpenGL ES 2.0? (I am under the impression that OpenSceneGraph can be built for OpenGL ES 2.0.)
Edit: As it turned out, the code that builds this portion of OpenSceneGraph was excluded when building with OpenGL ES or OpenGL 3.0. However, I'm still interested in what's special about glReadBufferNV.
As your research suggests, glReadBuffer was added to OpenGL ES in 3.0; it is not defined in 2.0. Prior to that, as per the header file you found, an extension defined glReadBufferNV — specifically the NV_read_buffer extension.
So what's happened is that something wasn't in the spec, but Nvidia thought it would be useful, so they implemented it as an OpenGL extension. It was subsequently discussed at Khronos, had its edge cases and ambiguities dealt with, and eventually made its way into the core spec.
That's generally how GL development proceeds: extensions come along to provide functionality that's not yet in the main library; they're discussed, refined, and adopted into the core spec if appropriate.
Comparing the official specification for glReadBuffer with the extension documentation, the extension has a few ties into other extensions that you wouldn't expect to make it into the core spec (e.g. COLOR_ATTACHMENTi_NV is supported as a source) but see resolved issue 7:
Version 6 of this specification isn't compatible with OpenGL ES 3.0.
For contexts without a back buffer, this extension makes FRONT the
default read buffer. ES 3.0 instead calls it BACK.
How can this be harmonized?
RESOLVED: Update the specification to match ES 3.0 behavior. This
introduces a backwards incompatibility, but few applications are
expected to be affected. In the EGL ecosystem where ES 2.0 is
prevalent, only pixmaps have no backbuffer and their usage remains
limited.
So the extension has retroactively been modified to bring it into line with what was agreed for the core spec.
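For completeness, here's a minimal sketch (not OpenSceneGraph's actual code) of how an ES 2.0 application can fall back to the extension entry point: check the extension string for GL_NV_read_buffer and resolve glReadBufferNV via eglGetProcAddress. The PFNGLREADBUFFERNVPROC typedef comes from GLES2/gl2ext.h; the plain strstr check is illustrative only.

```cpp
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <cstring>

// Returns a usable read-buffer selector on ES 2.0, or nullptr when the
// NV_read_buffer extension is absent (core ES 2.0 has no equivalent).
PFNGLREADBUFFERNVPROC loadReadBufferNV()
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (exts && std::strstr(exts, "GL_NV_read_buffer"))
        return reinterpret_cast<PFNGLREADBUFFERNVPROC>(
            eglGetProcAddress("glReadBufferNV"));
    return nullptr; // requires a current context and the extension
}
```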
Related
I am trying to make use of some ES 3.1 features, and it's unclear if this is supported:
I notice that there is an OpenGL ES 3.1 header in the emscripten repository which defines some of the functions I'm looking for, and I can include them successfully in my project. However, they are not available when I try to link:
error: undefined symbol: glDispatchCompute (referenced by top-level compiled C/C++ code)
warning: _glDispatchCompute may need to be added to EXPORTED_FUNCTIONS if it arrives from a system library
The documentation says that OpenGL ES3 is supported if I specify -s FULL_ES3=1 (which I am doing).
Since there are headers for it, is this functionality available? If so, how do I enable support for it? (Does it require manually loading extensions or enabling experimental support in emscripten, for example?)
The first thing to realize is that mainstream browsers currently implement only WebGL 1.0 (based on OpenGL ES 2.0) and WebGL 2.0 (based on OpenGL ES 3.0). The Emscripten SDK can therefore only implement the features exposed by WebGL 2.0 plus extensions, and, unfortunately, compute shaders are not exposed in any form in WebGL (and there are no longer any plans to add this functionality in the future).
By the way, WebGL 2.0 support was added to Safari 15 (iOS 15) only this year (2021), so even OpenGL ES 3.0 will not work on all devices.
I notice that there is an OpenGL ES 3.1 header in the emscripten repository
The extra <GLES3/gl3*.h> headers are there not because Emscripten implements all of them, but because this is a common way to distribute the OpenGL ES 3.x headers; existing applications may rely on them to conditionally load and use OpenGL ES 3.x functions at runtime, when they are available.
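As an illustration of that pattern, here's a hedged sketch that resolves glDispatchCompute at runtime instead of linking it directly, and falls back gracefully when it's missing, which is what you should expect under Emscripten's WebGL 2.0 backend (the fallback message is mine, not anything Emscripten prints):

```cpp
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <cstdio>

// Function-pointer type matching glDispatchCompute from <GLES3/gl31.h>.
typedef void (GL_APIENTRY* DispatchComputeFn)(GLuint, GLuint, GLuint);

int main()
{
    // Assumes a current GL context has already been created via EGL.
    DispatchComputeFn dispatchCompute = reinterpret_cast<DispatchComputeFn>(
        eglGetProcAddress("glDispatchCompute"));
    if (!dispatchCompute) {
        // Expected on WebGL 2.0 / ES 3.0: compute shaders don't exist here.
        std::fprintf(stderr, "glDispatchCompute unavailable, using fallback\n");
        return 1;
    }
    dispatchCompute(1, 1, 1); // only reached on a real ES 3.1+ driver
    return 0;
}
```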
The documentation says that OpenGL ES3 is supported if I specify -s FULL_ES3=1 (which I am doing).
I think that documentation is more or less clear about what FULL_ES3=1 adds:
To enable OpenGL ES 3.0 emulation, specify the emcc option -s FULL_ES3=1 when linking the final executable (.js/.html) of the project.
This adds emulation for mapping memory blocks to client side memory.
There is no word about OpenGL ES 3.1 here, nor any reason to expect a miracle from Emscripten: I can barely imagine a reasonable hardware-accelerated way to emulate things like compute shaders on top of OpenGL ES 3.0, and software emulation would certainly be of no use for most applications.
WebGPU 1.0, promised to appear in mid-2022, is more capable than WebGL 2.0. WebGPU developers already foresee that at some point the native WebGL 2.0 implementation in browsers could be replaced by a WebAssembly module implementing this legacy API on top of WebGPU, though that is some way off. The same route could also bring OpenGL ES 3.1/3.2 features to the Emscripten SDK, at least theoretically, if somebody is still interested in working on it.
I'm building an application with OpenGL ES 2.0 and SDL2 for Android. Does SDL_GL_GetProcAddress work with OpenGL ES 2.0 on Android? Also, I know OpenGL ES 2.0 is a subset of OpenGL, so can it run on desktop systems with this method too?
From a quick browse of the SDL repository it should be.
SDL_video.c defines the implementation of SDL_GL_GetProcAddress simply to check that you've started OpenGL and then to call _this->GL_GetProcAddress, where _this is a global instance of the video driver.
SDL_androidvideo.c sets its GL_GetProcAddress to be Android_GLES_GetProcAddress, which is a preprocessor substitution for SDL_EGL_GetProcAddress.
So, so far: if you call SDL_GL_GetProcAddress, you'll get through to SDL_EGL_GetProcAddress.
SDL_egl.c implements SDL_EGL_GetProcAddress but declines to call eglGetProcAddress on Android. This looks like it's probably an error — the reason given is this bug but the status for that bug switched to 'Released' in June 2013, which I believe means that this has been fixed in Android for more than three years.
That aside, the fallback is to use SDL_LoadFunction, first with the direct function name, then with it preceded by an underscore, provided it's short enough to fit into the statically-declared buffer. Which this one is.
(so, caveat: SDL_GL_GetProcAddress is definitely not thread-safe, even if you've taken appropriate share group steps to use multiple GL contexts, but if you're writing an SDL program you probably don't care)
Android should be using the dlopen version of SDL_sysloadso so it looks like SDL_LoadFunction is implemented directly as a call to dlsym. Which has no issues that I'm aware of under Android.
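In other words, the Android lookup reduces to roughly this (a paraphrase of the behaviour described above, not SDL's actual code):

```cpp
#include <dlfcn.h>
#include <cstdio>

// Try the symbol as-is, then with a leading underscore, mirroring
// SDL_LoadFunction's fallback on the dlopen-based loader.
void* loadGLSymbol(void* libHandle, const char* name)
{
    void* sym = dlsym(libHandle, name);
    if (!sym) {
        char underscored[256]; // stand-in for SDL's statically-declared buffer
        std::snprintf(underscored, sizeof underscored, "_%s", name);
        sym = dlsym(libHandle, underscored);
    }
    return sym;
}
```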
So, in summary: yes, that call should work. It'll use the platform-specific dynamic library loader rather than the EGL call, though it probably doesn't need to, but that's just an implementation detail.
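From the application side, the usual pattern is simply: create the window and GL context first, then resolve what you need. A sketch, with glClearColor as an arbitrary example symbol:

```cpp
#include <SDL.h>

typedef void (*ClearColorFn)(float, float, float, float);

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 320, 240,
                                       SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win); // context must exist first

    ClearColorFn clearColor =
        (ClearColorFn)SDL_GL_GetProcAddress("glClearColor");
    if (!clearColor)
        SDL_Log("lookup failed: %s", SDL_GetError());

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```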
The CUDA FAQ says:
CUDA defines vector types such as float4, but doesn't include any
operators on them by default. However, you can define your own
operators using standard C++. The CUDA SDK includes a header
"cutil_math.h" that defines some common operations on the vector
types.
However, I cannot find it in CUDA SDK 5.0. Has it been removed or renamed?
I've found a version of the header here. How is it related to the one that's supposed to come with the SDK?
The cutil functionality was deleted from the CUDA 5.0 Samples (i.e. the "SDK"). You can still download a previous SDK and compile it under CUDA 5; you should then have everything that came with previous SDKs.
The official notice was given by NVIDIA in the CUDA 5.0 release notes (CUDA_Samples_Release_Notes.pdf, installed with the samples). As to why, I imagine that NVIDIA's sentiment regarding cutil was probably something like what is expressed here: "not suitable for use in a real application. It is completely unsupported". But people were using it in real applications, so one way to try to put a stop to that is to delete it, I suppose. That's just speculation.
Note some additional useful info provided in the release notes:
CUTIL has been removed with the CUDA Samples in CUDA 5.0, and replaced
with helper functions found in NVIDIA_CUDA-5.0/common/inc:
helper_cuda.h, helper_cuda_gl.h, helper_cuda_drvapi.h,
helper_functions.h, helper_image.h, helper_math.h, helper_string.h,
helper_timer.h
These helper functions handle CUDA device
initialization, CUDA error checking, string parsing, image file
loading and saving, and timing functions. The CUDA Samples projects no
longer have references and dependencies to CUTIL, and now use these
helper functions going forward.
So you may find useful functions in some of those header files.
In the latest SDK, helper_math.h implements most of the required operators; however, it is still missing logical operators like OR and AND.
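For anything helper_math.h lacks, the FAQ's advice still applies: define the operator yourself in standard C++. A minimal sketch (the component-wise semantics are my choice for illustration, not anything NVIDIA ships):

```cpp
#include <cuda_runtime.h> // float4, make_float4

// Component-wise addition, usable in both host and device code; missing
// operators (comparisons, logical ops) can be supplied the same way.
__host__ __device__ inline float4 operator+(const float4& a, const float4& b)
{
    return make_float4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w);
}
```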
GLES2 does not support glPush*/glPop*. Does anyone know if there is an implementation of a state stack for OpenGL ES 2.0? Any solution to my problem is welcome.
glPushAttrib/glPopAttrib managed fixed-function state that was not carried over from the older versions of OpenGL. Programmable shaders replaced all of the fixed-function functionality in GLES 2.0 and in newer versions of OpenGL.
State is now something you manage yourself via inputs to shader programs.
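If you want the glPushAttrib-style convenience anyway, a hand-rolled stack over whichever pieces of state you care about is straightforward. A minimal sketch, tracking just blending and the clear color purely as an illustration:

```cpp
#include <stack>
#include <GLES2/gl2.h>

struct GLState {
    GLboolean blend;
    GLfloat   clearColor[4];
};

class StateStack {
    std::stack<GLState> stack_;
public:
    void push() {                       // snapshot the tracked state
        GLState s;
        s.blend = glIsEnabled(GL_BLEND);
        glGetFloatv(GL_COLOR_CLEAR_VALUE, s.clearColor);
        stack_.push(s);
    }
    void pop() {                        // re-apply the snapshot
        const GLState& s = stack_.top();
        if (s.blend) glEnable(GL_BLEND); else glDisable(GL_BLEND);
        glClearColor(s.clearColor[0], s.clearColor[1],
                     s.clearColor[2], s.clearColor[3]);
        stack_.pop();
    }
};
```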
If you need a quick solution, you may be interested in this library (GitHub repository). It only emulates a small subset of OpenGL 1.x, glPush*/glPop* included. Note that the mentioned project is still very much a WIP, so don't expect everything to work out of the box.
There's a set of definitions called "egl" in GLES 1.1: http://www.khronos.org/opengles/sdk/1.1/docs/man/
It's the "Native Platform Graphics Interface Layer":
http://www.khronos.org/opengles/
However, they're not in GLES 2.0: http://www.khronos.org/opengles/sdk/docs/man/
So I have some questions:
Is this a separate spec from GLES, or a part of GLES 1.1?
Where did they go in 2.0? Or do they still exist in 2.0?
Where is the manual (guide)?
Should I manage an EGLContext in GLES 2.0 too?
EGL is a separate spec from OpenGL ES; it can manage contexts for OpenGL ES 1.0/1.1 and OpenGL ES 2.0 (and also OpenVG), so it's not really gone.
The latest spec is here.
I think eonil was premature in accepting the answer, unless I am consistently missing things in the "latest spec" Valdenegro provided. What I find there is that in order to choose the client API version for the current context, one must use EGL_CONTEXT_CLIENT_VERSION, which is itself supported only in EGL 1.3 and later, and that is not on any Android phone I have seen: they are all EGL 1.1.
In EGL 1.0 or 1.1, you can only use the default client version, which is OpenGL ES 1.x.
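For reference, explicit version selection looks like this in practice, as a minimal sketch with error checking omitted (a real application would verify each step and create a surface before rendering):

```cpp
#include <EGL/egl.h>

int main()
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);

    const EGLint cfgAttrs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, // ask for an ES2-capable config
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint count;
    eglChooseConfig(dpy, cfgAttrs, &cfg, 1, &count);

    const EGLint ctxAttrs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2, // the attribute discussed above
        EGL_NONE
    };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttrs);

    // ...make current against a surface and render here...

    eglDestroyContext(dpy, ctx);
    eglTerminate(dpy);
    return 0;
}
```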