What's the purpose of glNormal3f in OpenGL ES?

What's the purpose of glNormal3f in OpenGL ES? If there is no immediate mode available in OpenGL ES, for what would I use glNormal3f? Example code would be nice.

I think it's there for the same reason there is a glColor function: if the normal is the same for all vertices of your geometry, you can specify it once with glNormal before calling glDrawArrays/glDrawElements.

The only reason I can think of is that it exists to efficiently express surfaces where many vertices share the same normal. With the array-based approach, you'd have to create an array that repeats the same value for each vertex, which wastes memory.
I find it curious that the manual page (for OpenGL ES 1.1, at least) doesn't even mention it. I found one iPhone programming tutorial (PDF) that even claimed glNormal() was gone entirely.

The OpenGL ES 1.1 manual page does mention it, but yes, that claim in the iPhone programming tutorial is an error.

You are not supposed to use these functions anymore. Stick to the glXXXXArray() functions. I suspect they are just leftovers kept in to make the transition from OpenGL to OpenGL ES easier.

In OpenGL ES glNormal3f() is not a deprecated function,
because it is useful for rendering flat 2D shapes in the {X,Y} plane.
Instead of using:
glEnableClientState( GL_NORMAL_ARRAY );
glNormalPointer( GL_FLOAT, 0, vbuf + normal_offset );
it is simpler, and requires less VBO memory, to call:
glNormal3f( 0, 0, 1 );
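To put a number on the memory point, here is a quick sketch in plain Python (not GL code; the vertex count is made up) of what the GL_NORMAL_ARRAY route costs for a constant normal:

```python
import struct

# Hypothetical mesh: N vertices that all share the normal (0, 0, 1).
N = 1000
normal = (0.0, 0.0, 1.0)

# GL_NORMAL_ARRAY route: the same 3 floats packed once per vertex.
normal_array = struct.pack("%df" % (3 * N), *(normal * N))
print(len(normal_array))  # 12000 bytes of redundant VBO data

# glNormal3f(0, 0, 1) route: one current-normal state change, zero extra
# bytes in the VBO.
```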

Related

Why no access to texture lod in fragment shader

I'm trying to come to terms with the level of detail of a mipmapped texture in an OpenGL ES 2.0 fragment shader.
According to this answer it is not possible to use the bias parameter of texture2D to access a specific level of detail in the fragment shader. According to this post the level of detail is instead computed automatically from the parallel execution of adjacent fragments. I'll have to trust that that's how things work.
What I cannot understand is the why of it. Why isn't it possible to access a specific level of detail, when doing so should be very simple indeed? Why does one have to rely on complicated fixed functionality instead?
To me, this seems very counter-intuitive. After all, everything OpenGL-related is evolving away from fixed functionality. And OpenGL ES is intended to cover a broader range of hardware than OpenGL, and therefore supports only the simpler versions of many things. So I would perfectly understand if the developers of the specification had decided that the LOD parameter is mandatory (perhaps defaulting to zero), and that it's up to the shader programmer to work out the appropriate LOD, in whatever way he deems appropriate. Adding a function which does that computation automagically seems like something I'd have expected in desktop OpenGL.
Not providing direct access to a specific level doesn't make any sense to me at all, no matter how I look at it. Particularly since that bias parameter indicates that we are indeed allowed to tweak the level of detail, so apparently this is not about fetching data from memory only for a single level for a bunch of fragments processed in parallel. I can't think of any other reason.
Of course, why questions tend to attract opinions. But since opinion-based answers are not accepted on Stack Overflow, please post your opinions as comments only. Answers, on the other hand, should be based on verifiable facts, like statements by someone with definite knowledge. If there are any records of the developers discussing this fact, that would be perfect. If there is a blog post by someone inside discussing this issue, that would still be very good.
Since Stack Overflow questions should deal with real programming problems, one might argue that asking for the reason is a bad question. Getting an answer won't make that explicit lod access suddenly appear, and therefore won't help me solve my immediate problems. But I feel that the reason here might be due to some important aspect of how OpenGL ES works which I haven't grasped so far. If that is the case, then understanding the motivation behind this one decision will help me and others to better understand OpenGL ES as a whole, and therefore make better use of it in their programs, in terms of performance, exactness, portability and so on. Therefore I might have stated this question as “what am I missing?”, which feels like a very real programming problem to me at the moment.
texture2DLod (...) serves a very important purpose in vertex shader texture lookups, which is not necessary in fragment shaders.
When a texture lookup occurs in a fragment shader, the fragment shader has access to per-attribute gradients (partial derivatives such as dFdx (...) and dFdy (...)) for the primitive currently being shaded, and it uses this information to determine which LOD to fetch neighboring texels from during filtering.
At the time vertex shaders run, no information about primitives is known and there is no such gradient. The only way to utilize mipmaps in a vertex shader is to explicitly fetch a specific LOD, and that is why that function was introduced.
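To illustrate what the hardware does with those gradients, here is a rough sketch of the spec's lambda = log2(rho) LOD rule in plain Python rather than shader code (the function name is made up, and the real rules add clamping, biasing and rounding on top of this):

```python
import math

def mip_level(duv_dx, duv_dy, tex_size):
    """Approximate base LOD from screen-space texcoord derivatives,
    following the lambda = log2(rho) rule of the GL spec."""
    w, h = tex_size
    # Scale normalized texcoord derivatives into texel units.
    rho_x = math.hypot(duv_dx[0] * w, duv_dx[1] * h)
    rho_y = math.hypot(duv_dy[0] * w, duv_dy[1] * h)
    rho = max(rho_x, rho_y)
    return max(0.0, math.log2(rho))

# One texel per pixel -> base level.
print(mip_level((1 / 256, 0), (0, 1 / 256), (256, 256)))  # 0.0
# Texture minified 4x in both directions -> level 2.
print(mip_level((4 / 256, 0), (0, 4 / 256), (256, 256)))  # 2.0
```

A vertex shader has no adjacent-invocation derivatives to feed into this computation, which is exactly why it needs the explicit-LOD variant.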
Desktop OpenGL has solved this problem a little more intelligently, by offering a variant of texture lookup for vertex shaders that actually takes a gradient as one of its inputs. Said function is called textureGrad (...), and it was introduced in GLSL 1.30. ESSL 1.0 is derived from GLSL 1.20, and does not benefit from all the same basic hardware functionality.
ES 3.0 does not have this limitation, and neither does desktop GL 3.0. When explicit LOD lookups were introduced into desktop GL (3.0), it could be done from any shader stage. It may just be an oversight, or there could be some fundamental hardware limitation (recall that older GPUs used to have specialized vertex and pixel shader hardware and embedded GPUs are never on the cutting edge of GPU design).
Whatever the original reason for this limitation, it has been rectified in a later OpenGL ES 2.0 extension and is core in OpenGL ES 3.0. Chances are pretty good that a modern GL ES 2.0 implementation will actually support explicit LOD lookups in the fragment shader given the following extension:
GL_EXT_shader_texture_lod
Example code showing an explicit LOD lookup in a fragment shader:
#version 100
#extension GL_EXT_shader_texture_lod : require
precision mediump float;
varying vec2 tex_st;
uniform sampler2D sampler;
void main (void)
{
// Note the EXT suffix, that is very important in ESSL 1.00.
// The lod argument must be a float; ESSL 1.00 has no implicit int -> float conversion.
gl_FragColor = texture2DLodEXT (sampler, tex_st, 0.0);
}

glPushMatrix and OpenGL ES

I have a question regarding glPushMatrix and the matrix transformations in OpenGL ES. The GLSL guide says that under OpenGL ES, the matrices have to be computed:
However, when developing applications in modern versions of OpenGL and
OpenGL ES or in WebGL, the model matrix has to be computed.
and
In some versions of OpenGL (ES), a built-in uniform variable
gl_ModelViewMatrix is available in the vertex shader
As I understood, gl_ModelViewMatrix is not available under all OpenGL ES specifications. So, are the functions like glMatrixMode, glRotate, ..., still valid there? Can I use them to calculate the model matrix? If not, how to handle those transformation matrices?
First: You shouldn't use the matrix manipulation functions in regular OpenGL either. In old versions they're just too inflexible and also redundant, and in newer versions they've been removed entirely.
Second: The source you're mentioning is a Wikibook, which means it's not an authoritative source. In the case of this Wikibook, it's been written to accommodate all versions of GLSL, and some of them, mainly for OpenGL-2.1, have those variables.
You deal with those matrices by calculating them yourself (no, this is not slower; OpenGL's matrix functions were never GPU accelerated) and passing them to OpenGL either via glLoadMatrix/glMultMatrix (old versions of OpenGL) or a shader uniform.
If you're planning on doing this in Android, then take a look at this.
http://developer.android.com/reference/android/opengl/Matrix.html
It has functions to setup view, frustum, transformation matrices as well as some matrix operations.
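For a sense of what "calculating them yourself" amounts to, here is a minimal sketch in plain Python (real code would use android.opengl.Matrix or a math library, store the matrix column-major in a float array, and upload it with glUniformMatrix4fv; the helpers below are made up for illustration):

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def rotate_z(degrees):
    c = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

# Model matrix: rotate the object first, then move it into place.
model = mat_mul(translate(5, 0, 0), rotate_z(90))

# Transform a vertex (1, 0, 0, 1): the rotation sends it to (0, 1, 0),
# the translation then moves it to (5, 1, 0).
v = (1, 0, 0, 1)
out = [sum(model[i][j] * v[j] for j in range(4)) for i in range(4)]
print([round(x, 6) for x in out])  # [5.0, 1.0, 0.0, 1.0]
```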

Is ARB_texture_multisample available for OpenGL ES 2.0?

Basically, what is needed to perform multisampled deferred shading.
To expand a bit: I'm not actually all that interested in Deferred shading per se, but what is of key importance is allowing the storage and retrieval of sub-pixel sample data for antialiasing purposes: I need to be able to control the resolve, or at least do some operations before resolving multisampled buffers.
All the major extensions for OpenGL ES are listed here: http://www.khronos.org/registry/gles/
And as far as I know, currently no major OpenGL ES implementation provides individual sample resolving in OpenGL ES. The only thing you can do is copy the multisampled buffer into a normal texture, and then access the "normal" samples.

GLSL PointSprite for particle system

I'm using a ParticleSystem with PointSprites (inspired by the Cocos2D source). But I wonder how to rebuild the functionality for OpenGL ES 2.0:
glEnable(GL_POINT_SPRITE_OES);
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
glPointSizePointerOES(GL_FLOAT,sizeof(PointSprite),(GLvoid*) (sizeof(GL_FLOAT)*2));
glDisableClientState(GL_POINT_SIZE_ARRAY_OES);
glDisable(GL_POINT_SPRITE_OES);
these generate BAD_ACCESS when using an OpenGL ES 2.0 context.
Should I simply go with 2 TRIANGLES per PointSprite? But that's probably not very efficient (overhead for extra vertices).
EDIT:
So, my new problem with the suggested solution from:
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
is how to pass many different sizes in a batch call. I thought of using an attribute instead of a uniform, but then I would always need to pass a point size to my shaders, even if I'm not drawing GL_POINTS. So, maybe a second shader (a shader only for GL_POINTS)?! I'm not aware of the overhead of switching shaders every frame in the draw routine (because when the particle system is used, I naturally also want to render regular GL_TRIANGLES without a pointSize)... Any ideas on this?
So doing what I already described in my comment is what you need: https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
As for which approach to take: you can either use different shaders for different types of drawables in your application, or add another boolean uniform to your shader and enable/disable setting gl_PointSize in your shader code. It's up to you. What you need to keep in mind is that changing the shader program is one of the most costly operations, so drawing objects of the same type in a batch will be better in that case. I'm not really sure whether an if statement in your shader code will have a huge performance impact.
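For reference, the two-triangles fallback the question mentions boils down to plain geometry bookkeeping. A sketch in Python, only to show the expansion (function and layout are made up; a real implementation would fill a VBO/IBO with this data):

```python
def expand_particles(particles):
    """Expand each (x, y, size) particle into a 4-vertex quad
    (x, y, s, t per vertex) plus 6 indices forming two triangles."""
    vertices, indices = [], []
    for n, (x, y, size) in enumerate(particles):
        h = size / 2.0
        base = 4 * n
        # Quad corners, with texcoords covering the whole sprite texture.
        vertices += [(x - h, y - h, 0.0, 0.0),
                     (x + h, y - h, 1.0, 0.0),
                     (x + h, y + h, 1.0, 1.0),
                     (x - h, y + h, 0.0, 1.0)]
        # Two counter-clockwise triangles per quad.
        indices += [base, base + 1, base + 2,
                    base, base + 2, base + 3]
    return vertices, indices

verts, idx = expand_particles([(10.0, 20.0, 4.0)])
print(len(verts), len(idx))  # 4 6
print(verts[0])              # (8.0, 18.0, 0.0, 0.0)
```

This is 4 vertices and 6 indices per particle instead of 1 vertex, which is the overhead the question worries about, but it sidesteps the point-size plumbing entirely.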

OpenGL ES 1.1 to 2.0 a major change?

I'm creating an iPhone app with cocos2d and I'm trying to make use of the following OpenGL ES 1.1 code. However, I'm not good with OpenGL, and my app makes use of OpenGL ES 2.0, so I need to convert it.
Thus I was wondering, how difficult would it be to convert the following code from ES 1.1 to ES 2.0? Is there some source that could tell me which methods need replacing etc?
-(void) draw
{
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);
    glColor4ub(_color.r, _color.g, _color.b, _opacity);
    glLineWidth(1.0f);
    glEnable(GL_LINE_SMOOTH);
    if (_opacity != 255)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // non-GL code here
    if (_opacity != 255)
        glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
}
It will not be that easy if you're not well versed in OpenGL.
OpenGL ES 2.0 doesn't have a fixed-function pipeline anymore. This means you have to manage vertex transformations, lighting, texturing and the like all yourself using GLSL vertex and fragment shaders. You also have to keep track of the transformation matrices yourself; there are no glMatrixMode, glPushMatrix, glTranslate, ... anymore.
There are also no built-in vertex attributes anymore (like glVertex, glColor, ...). So these functions, along with the corresponding array functions (like glVertexPointer, glColorPointer, ...) and gl(En/Dis)ableClientState, have been removed, too. Instead you need the generic vertex attribute functions (glVertexAttrib, glVertexAttribPointer and gl(En/Dis)ableVertexAttribArray, which behave similarly) together with a corresponding vertex shader to give these attributes their correct meaning.
I suggest you look into a good OpenGL ES 2.0 tutorial or book, as porting from 1.1 to 2.0 is really a major change, at least if you have never heard anything about shaders.
