I'm working with a codebase that supports OpenGL and OpenGL ES.
Is there any GLES define I can use to write conditional code in C? Something like:
#include <GLES/gl.h>
#ifdef __GLES
// OpenGL ES specific code
#else
// desktop OpenGL code
#endif
In GLSL shaders
For conditional compilation of GLSL shaders, there is GL_ES. The GLSL ES 1.00 specification states in section 3.4 "Preprocessor":
The following predefined macros are available
__LINE__
__FILE__
__VERSION__
GL_ES
GL_ES will be defined and set to 1. This is not true for the non-ES OpenGL Shading Language, so it can
be used to do a compile time test to see whether a shader is running on ES.
So you can check for GL_ES in shaders.
In host/C code
If you want to know whether a particular C file is being compiled for a particular version of OpenGL ES, you can use the defines from the OpenGL ES headers:

GLES/gl.h: GL_VERSION_ES_CM_1_0
GLES2/gl2.h: GL_ES_VERSION_2_0
GLES3/gl3.h: GL_ES_VERSION_3_0
GLES3/gl31.h: GL_ES_VERSION_3_1
GLES3/gl32.h: GL_ES_VERSION_3_2

From GLES2/gl2.h onward, each header also defines the version macros of the previous versions back to GLES 2.0. So when you include GLES3/gl31.h, you will find GL_ES_VERSION_2_0, GL_ES_VERSION_3_0 and GL_ES_VERSION_3_1 all defined.
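For example, a minimal sketch of a file that is compiled against whichever ES header the build selects (the header name gl_header.h is made up; it stands for whatever include your project uses):

#include "gl_header.h" /* hypothetical: resolves to GLES2/gl2.h or GLES3/gl3.h */

void create_texture(void)
{
#ifdef GL_ES_VERSION_3_0
    /* ES 3.0 or newer: immutable texture storage is core */
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 256, 256);
#else
    /* ES 2.0: classic mutable texture allocation */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
#endif
}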
However, it is your application that decides whether to include the desktop OpenGL headers or the OpenGL ES headers, and you can use that same condition to conditionally compile code that depends on targeting one or the other. So you don't really need any predefined macros: you already made the decision yourself when you chose which header to include, as sketched below.
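A sketch of that approach, assuming a project-level macro MY_APP_USE_GLES that you define yourself (e.g. via -DMY_APP_USE_GLES in the compiler flags):

#ifdef MY_APP_USE_GLES
#include <GLES2/gl2.h>
#else
#include <GL/gl.h>
#endif

const char* describe_target(void)
{
#ifdef MY_APP_USE_GLES
    return "OpenGL ES";      /* ES-only code goes in branches like this */
#else
    return "desktop OpenGL"; /* desktop-only code goes here */
#endif
}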
Related
I'm trying to build the source code provided for the book Mathematics for 3D Game Programming and Computer Graphics. I've linked OpenGL.framework in "Build Phases" and included (not sure which one I need):
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
Now I get
Use of undeclared identifier 'Sqrt'
Use of undeclared identifier 'InverseSqrt'
Use of undeclared identifier 'fabs'
I'm guessing these have to do with not setting up OpenGL properly?
The author mentions using GLSL in the book but doesn't go into the details. I'm new to OpenGL.
I'm not an Xcode coder, but in C/C++ fabs and sqrt are in math.h, and if InverseSqrt actually means square (sqr), you can try a fix like this:
#include <math.h>
#define Sqrt sqrt
#define InverseSqrt(x) ((x)*(x))
some C++ environments want this instead:
#include <cmath>
#define Sqrt sqrt
#define InverseSqrt(x) ((x)*(x))
However, as mentioned in the comments, those functions have nothing to do with OpenGL, so they most likely come from some library the book's code includes/links, and you forgot to include one of its headers.
[Edit1]
If InverseSqrt means 1/sqrt(x), as derhass suggested (English terminology sometimes feels weird), then use
#define InverseSqrt(x) (1/sqrt(x))
inversesqrt (no caps) is a built-in function in GLSL, while fabs is a C function, and sqrt exists in both languages. Xcode can compile C/C++, but you must write code yourself to compile GLSL at runtime, as sketched below.
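A minimal sketch of what "compile GLSL at runtime" looks like, using only standard GL calls (error handling trimmed to the essentials):

#include <stdio.h>

GLuint compile_shader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL); /* one NUL-terminated string */
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "GLSL compile error: %s\n", log);
    }
    return shader;
}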
The recent Android NDK r9 introduces support for OpenGL ES 3.0. There is an example, samples/gles3jni, which demonstrates how to use OpenGL ES 3.0 from JNI/native code. The sample can be built in two different ways:
Compatible with API level 11 and later
Requires API level 18 or later.
Both versions include an OpenGL ES 2.0 fallback path for devices that don't support OpenGL ES 3.0. However, in the first case the example is statically linked against OpenGL ES 2.0 using the LOCAL_LDLIBS option -lGLESv2; in the second case it is statically linked with GLES 3 in the same way.
The initialization goes like this:
const char* versionStr = (const char*)glGetString(GL_VERSION);
if (strstr(versionStr, "OpenGL ES 3.") && gl3stubInit()) {
    g_renderer = createES3Renderer();
} else if (strstr(versionStr, "OpenGL ES 2.")) {
    g_renderer = createES2Renderer();
}
How can I omit the static linking at all and load GLES 2 or 3 dynamically from .so?
On API 18 and later, you can use eglGetProcAddress to dynamically query ES 2.0 functions, just like gl3stub.c in the sample does for ES 3.0 functions. Before API 18, you need to do something like this:
// global scope, probably a header file; each pointer also needs a
// matching definition (without "extern") in exactly one .c file
extern GL_APICALL const GLubyte* (* GL_APIENTRY glGetString) (GLenum name);
extern GL_APICALL GLenum (* GL_APIENTRY glGetError) (void);
...

// initialization code (needs <dlfcn.h>)
void* libGLESv2 = dlopen("libGLESv2.so", RTLD_GLOBAL | RTLD_NOW);
glGetString = (const GLubyte* (*)(GLenum)) dlsym(libGLESv2, "glGetString");
glGetError = (GLenum (*)(void)) dlsym(libGLESv2, "glGetError");
...
Add error-checking on the dlopen and dlsym calls, of course.
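A sketch of that checking (dlerror is the standard POSIX way to get the failure reason; logging goes through <android/log.h> since this is NDK code):

#include <android/log.h>
#include <dlfcn.h>
#include <stdlib.h>

void* libGLESv2 = dlopen("libGLESv2.so", RTLD_GLOBAL | RTLD_NOW);
if (libGLESv2 == NULL) {
    __android_log_print(ANDROID_LOG_ERROR, "gles", "dlopen: %s", dlerror());
    abort();
}
glGetString = (const GLubyte* (*)(GLenum)) dlsym(libGLESv2, "glGetString");
if (glGetString == NULL) {
    __android_log_print(ANDROID_LOG_ERROR, "gles", "dlsym glGetString: %s", dlerror());
    abort();
}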
I'm not sure why you'd do this, though. libGLESv2.so is present on any version of Android you're likely to want to target, so there shouldn't be any downside to linking against it.
I didn't have a 4.3 device to test it, but my understanding is that the first method actually uses GLES 3 if it is available, so it is equivalent to dynamically linking against libGLESv3.
Dynamically loading libglesxx.so is also possible, but then you don't have the shortcuts and have to dlsym every function you use. It's not worth it, IMHO.
I am new to OpenGL ES 2.0 and GLSL, and I want to use shaders to process images. When I coded in Xcode, I used built-in variables such as gl_Normal and gl_Vertex directly and did not declare them at the beginning of my shaders. In the end, I got an error message:
Use of undeclared identifier gl_Normal. Why?
In OpenGL ES 2, and following in its footsteps OpenGL 3 core, there are no longer predefined shader input variables. OpenGL 4 even did away with predefined shader outputs.
Instead you're expected to define your own inputs and outputs. Each input or output variable is assigned a so-called location: either implicitly by OpenGL, retrievable with glGetAttribLocation, or explicitly by the programmer using the glBindAttribLocation function (or, in newer GLSL versions, a location layout qualifier). Outputs are similarly assigned via fragment data locations.
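For example, in OpenGL ES 2.0 the fixed-function gl_Vertex/gl_Normal pair becomes user-defined attributes (a sketch; all names here, and the already-created program object, are made up):

/* vertex shader source with user-defined inputs */
static const char* vertex_src =
    "attribute vec4 a_position;\n"   /* takes the role of gl_Vertex */
    "attribute vec3 a_normal;\n"     /* takes the role of gl_Normal */
    "varying vec3 v_normal;\n"
    "void main() {\n"
    "    v_normal = a_normal;\n"
    "    gl_Position = a_position;\n"
    "}\n";

/* in the C code: bind locations before linking, or query them afterwards */
glBindAttribLocation(program, 0, "a_position");
glBindAttribLocation(program, 1, "a_normal");
glLinkProgram(program);
/* alternatively: GLint pos = glGetAttribLocation(program, "a_position"); */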
Is there any way to tell, within the source code of a shader, that the shader is being compiled for OpenGL ES? I want to be able to define the version using the #version preprocessor directive: 100 for OpenGL ES (so that the shader compiles for OpenGL ES 2.0), but 110 for OpenGL 2.1.
Is the best way to do this to place the #version as a separate string which is fed in at the application level, or is there a way to do this within the shader?
Another useful, related thing to be able to do would be to say something like
#if version == 100, compile this code; else compile that code. Is this possible within GLSL?
Thanks.
Prepending the #version line from the main program, as PeterT suggested in the comment above, is the only way that will work. Being able to do this (and being able to define constants without having something like a -D compiler switch available) is the main intent behind glShaderSource taking an array of pointers rather than a simple char*.
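A sketch of that prepending (is_gles and shader_body are placeholders for however your application detects ES and stores the rest of the source):

/* pick the version line at runtime, then pass two strings to GL;
   the GLSL compiler concatenates them in order */
const char* version_line = is_gles ? "#version 100\n" : "#version 110\n";
const char* sources[2] = { version_line, shader_body };
glShaderSource(shader, 2, sources, NULL);
glCompileShader(shader);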
The GLSL specification (chapter 3.3) requires that #version be the first thing in a shader source, except for whitespace and comments.
Thus, no such thing as
#ifdef foo
#version 123
#endif
is valid, and no such thing will compile (unless the shader compiler is overly permissive, i.e. broken).
About your second question: conditional compilation certainly works, and using it in the way you intend to is a good thing.
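For example, once the right #version line has been prepended, the predefined __VERSION__ macro can drive that conditional compilation (a sketch; the shader body is deliberately trivial):

static const char* frag_body =   /* no #version here; it gets prepended */
    "#if __VERSION__ == 100\n"   /* GLSL ES 1.00 */
    "precision mediump float;\n"
    "#endif\n"
    "void main() {\n"
    "    gl_FragColor = vec4(1.0);\n"
    "}\n";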
This is also related information:
http://blog.beuc.net/posts/OpenGL_ES_2.0_using_Android_NDK/
You can, for example:
#ifdef GL_ES
precision mediump float;
#endif
OpenGL ES 2.0 implementations are required to have a GL_ES macro predefined in the shaders.
I'm trying to use non-square matrices in my GLSL shader but when I compile I get a syntax error.
My shader code using:
uniform mat4 my_mat;
compiles just fine.
But if I change it to:
uniform mat4x3 my_mat;
I get
ERROR: 0:5: 'mat4x3' : syntax error syntax error
I get a similar error for
uniform mat4x4 my_mat;
If I print my GL_VERSION and GL_SHADING_LANGUAGE_VERSION I get:
GL_VERSION: 2.1 NVIDIA-1.6.36
GL_SHADING_LANGUAGE_VERSION: 1.20
I'm compiling and running my OpenGL on a Mac OS X 10.6 MacBook Pro. According to this NVidia document and others, GLSL 1.20 and GL 2.1 should encompass support of non-square matrices and this syntax. Is there another catch? Or another way to troubleshoot why I get syntax errors?
If I place
#version 120
at the top of my shader code, the problem goes away. According to the same document listed in the question, shader source without a #version directive compiles "as before", which I guess means as GLSL 1.10, where non-square matrix types don't exist.
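So the fixed shader begins like this (a sketch built around the declaration from the question):

static const char* vert_src =
    "#version 120\n"              /* opt in to GLSL 1.20 explicitly */
    "uniform mat4x3 my_mat;\n"    /* non-square matrices need 1.20 */
    "void main() {\n"
    "    gl_Position = vec4(my_mat[0], 1.0);\n" /* my_mat[0] is a vec3 column */
    "}\n";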