OpenGL ES 2.0 glDrawElements Implementation - opengl-es

OpenGL ES 2.0 does not support the special built-in fragment shader variable gl_PrimitiveID. I tried to simulate this variable by associating a unique attribute with each of the vertices forming a triangle, but this causes problems when vertices are shared between two or more triangles. For the technique to work, shared vertices would effectively have to be duplicated, which increases memory usage for complex scenes.
I am thinking of changing the OpenGL ES library itself so that it can maintain a gl_PrimitiveID variable internally. I am using the Mesa 3D library to edit the OpenGL ES 2.0 source code, but I am not able to locate the implementation of glDrawElements for this purpose. The function's declaration carries the GL_APIENTRY flag.
Any suggestions?

OpenGL and OpenGL-ES are not libraries. They are system-level APIs whose implementation is buried deep within the GPU's driver. You can't change them without messing around in a GPU driver's code, and then everybody who used your program would have to use that driver. In the case of OpenGL-ES this is practically impossible to carry out, because most OpenGL-ES implementations live in devices that build walled gardens around applications. Also, the drivers' source code is usually kept secret.
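For reference, the per-vertex-attribute workaround described in the question amounts to something like the following vertex shader sketch; the geometry must be de-indexed so that no triangle shares vertices, and the attribute/uniform names here are placeholders, not part of any standard API:
attribute vec4 a_position;
attribute float a_primitiveID;   // the same value is stored for all three vertices of a triangle
varying float v_primitiveID;     // so the interpolated varying is constant across the triangle
uniform mat4 u_mvp;

void main(void)
{
    gl_Position = u_mvp * a_position;
    v_primitiveID = a_primitiveID;
}
The fragment shader can then read v_primitiveID, rounding to the nearest integer to guard against interpolation error.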

Related

Submitting integer to shader without using uniforms?

I plan to eliminate all glUniform calls from my GLSL shaders in order to save costs in state switching. For that purpose, I plan to use a UBO that is permanently bound to the shader. Different draw calls use different parts of the UBO (it's basically an array). In order to tell a draw call which entry to use, I have to submit an integer to the vertex/fragment shaders. The problem is that on the system I have to use, even a single glUniform call causes an expensive state update, so I cannot use glUniform at all.
Do you know a solution that will work on GLES 3.1 and one that will work on GLES 2?
GLES doesn't have glMulti* calls yet, and base vertex is only available from 3.2 upwards as far as I know. And adding another vertex attribute may be costly.
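Not a definitive answer, but a sketch of one possible setup for the GLES 3.1 case: keep the per-draw records in the permanently bound UBO and feed the index through a constant (disabled-array) generic vertex attribute set with glVertexAttrib1f. Whether the driver treats that as cheaper than a glUniform update, and whether it accepts non-constant indexing of the UBO array here, is implementation-dependent; all names below are placeholders.
#version 310 es
// Per-draw records live in one permanently bound UBO, as described in the question.
layout(std140, binding = 0) uniform PerDrawData
{
    mat4 modelMatrix[64];
} u_perDraw;

in vec4 a_position;
in float a_drawIndex;   // constant attribute: no array enabled, set once per draw

void main(void)
{
    int idx = int(a_drawIndex + 0.5);                       // round to nearest integer
    gl_Position = u_perDraw.modelMatrix[idx] * a_position;
}
On the C side the index would be supplied per draw with something like glVertexAttrib1f(drawIndexLocation, (float) drawIndex). For GLES 2 the same constant-attribute trick could index a plain uniform array instead of a UBO, subject to the uniform array size limits.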

Why no access to texture lod in fragment shader

I'm trying to come to terms with the level of detail of a mipmapped texture in an OpenGL ES 2.0 fragment shader.
According to this answer it is not possible to use the bias parameter to texture2D to access a specific level of detail in the fragment shader. According to this post the level of detail is instead automatically computed from the parallel execution of adjacent fragments. I'll have to trust that that's how things work.
What I cannot understand is the why of it. Why isn't it possible to access a specific level of detail, when doing so should be very simple indeed? Why does one have to rely on complicated fixed functionality instead?
To me, this seems very counter-intuitive. After all, everything OpenGL-related is evolving away from fixed functionality. And OpenGL ES is intended to cover a broader range of hardware than OpenGL, and therefore only supports the simpler versions of many things. So I would perfectly understand if the developers of the specification had decided that the LOD parameter is mandatory (perhaps defaulting to zero), and that it's up to the shader programmer to work out the appropriate LOD in whatever way he deems appropriate. Adding a function which does that computation automagically seems like something I'd have expected in desktop OpenGL.
Not providing direct access to a specific level doesn't make any sense to me at all, no matter how I look at it. Particularly since that bias parameter indicates that we are indeed allowed to tweak the level of detail, so apparently this is not about fetching data from memory only for a single level for a bunch of fragments processed in parallel. I can't think of any other reason.
Of course, why questions tend to attract opinions. But since opinion-based answers are not accepted on Stack Overflow, please post your opinions as comments only. Answers, on the other hand, should be based on verifiable facts, like statements by someone with definite knowledge. If there are any records of the developers discussing this fact, that would be perfect. If there is a blog post by someone inside discussing this issue, that would still be very good.
Since Stack Overflow questions should deal with real programming problems, one might argue that asking for the reason is a bad question. Getting an answer won't make that explicit lod access suddenly appear, and therefore won't help me solve my immediate problems. But I feel that the reason here might be due to some important aspect of how OpenGL ES works which I haven't grasped so far. If that is the case, then understanding the motivation behind this one decision will help me and others to better understand OpenGL ES as a whole, and therefore make better use of it in their programs, in terms of performance, exactness, portability and so on. Therefore I might have stated this question as “what am I missing?”, which feels like a very real programming problem to me at the moment.
texture2DLod (...) serves a very important purpose in vertex shader texture lookups, one that is not necessary in fragment shaders.
When a texture lookup occurs in a fragment shader, the fragment shader has access to per-attribute gradients (partial derivatives such as dFdx (...) and dFdy (...)) for the primitive currently being shaded, and it uses this information to determine which LOD to fetch neighboring texels from during filtering.
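Roughly speaking, that selection follows the textbook isotropic LOD approximation. A hedged ESSL 1.00 sketch of it, using the GL_OES_standard_derivatives extension; u_texSize is an assumed uniform holding the texture dimensions in texels:
#extension GL_OES_standard_derivatives : enable
precision mediump float;

uniform vec2 u_texSize;   // texture width/height in texels (placeholder)

float approximateLod(vec2 uv)
{
    vec2 dx = dFdx(uv * u_texSize);   // texel-space change per pixel in x
    vec2 dy = dFdy(uv * u_texSize);   // texel-space change per pixel in y
    return 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));
}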
At the time vertex shaders run, no information about primitives is known and there is no such gradient. The only way to utilize mipmaps in a vertex shader is to explicitly fetch a specific LOD, and that is why that function was introduced.
Desktop OpenGL has solved this problem a little more intelligently, by offering a variant of texture lookup for vertex shaders that actually takes a gradient as one of its inputs. Said function is called textureGrad (...), and it was introduced in GLSL 1.30. ESSL 1.0 is derived from GLSL 1.20, and does not benefit from all the same basic hardware functionality.
ES 3.0 does not have this limitation, and neither does desktop GL 3.0. When explicit LOD lookups were introduced into desktop GL (3.0), it could be done from any shader stage. It may just be an oversight, or there could be some fundamental hardware limitation (recall that older GPUs used to have specialized vertex and pixel shader hardware and embedded GPUs are never on the cutting edge of GPU design).
Whatever the original reason for this limitation, it has been rectified in a later OpenGL ES 2.0 extension and is core in OpenGL ES 3.0. Chances are pretty good that a modern GL ES 2.0 implementation will actually support explicit LOD lookups in the fragment shader given the following extension:
GL_EXT_shader_texture_lod
Pseudo-code showing explicit LOD lookup in a fragment shader:
#version 100
#extension GL_EXT_shader_texture_lod : require

precision mediump float;   // ESSL 1.00 fragment shaders require a default float precision

varying vec2 tex_st;       // texture coordinates passed in from the vertex shader
uniform sampler2D sampler;

void main (void)
{
    // Note the EXT suffix; it is required in ESSL 1.00
    gl_FragColor = texture2DLodEXT (sampler, tex_st, 0.0);
}
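On the application side, one way (a sketch, not the only option) to test for that extension at runtime before compiling such a shader, assuming an ES 2.0 context is already current:
#include <string.h>
#include <GLES2/gl2.h>

/* Returns nonzero if GL_EXT_shader_texture_lod is advertised by the driver. */
int has_shader_texture_lod(void)
{
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_EXT_shader_texture_lod") != NULL;
}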

WebGL differs from OpenGL preprocessor on same graphics stack

I just came upon an interesting effect of Chrome's use of the GLSL compiler. The statement
#define addf(index) if(weights[i+index]>0.) r+=weights[i+index]*f##index(p);
does not compile stating
preprocessor command must not be preceded by any other statement in that line
It seems that the ## syntax is unsupported.
However, on the same platform (e.g. Linux 64-bit, Nvidia GPU) the same shader compiles and runs fine under native OpenGL. Why is this? I thought the shader compiler was part of the GPU's driver stack and would be used in both cases. So why the different behaviour?
Actually, WebGL is also described as "OpenGL ES 2.0 for the Web", so there are some differences from OpenGL.
The WebGL spec ( https://www.khronos.org/registry/webgl/specs/1.0/ ) tells us:
"A WebGL implementation must only accept shaders which conform to The OpenGL ES Shading Language, Version 1.00."
Looking into the GLSL ES 1.0 spec ( https://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf ) I found:
Section 3.4 defines the preprocessor and also states: "There are no number sign based operators (no #, ##, etc.), nor is there a sizeof operator."
So whatever the browser's implementation does internally, it follows the standard :)
WebGL implementations need to conform to the WebGL specification. Many of its restrictions exist for security reasons; the ## issue is not one of them, but it is nevertheless disallowed by the WebGL specs.
To conform, they can either use a graphics stack that fully conforms (for example by providing a wrapper around an unextended OpenGL ES profile if the driver exposes one), or pre-check the GLSL shader code and WebGL state themselves to ensure conformity before passing the commands on to a full OpenGL implementation.
So the WebGL behaviour may differ from the native OpenGL behaviour on the same machine.
That's because on Windows, Chrome does not use the OpenGL driver by default. It uses Direct3D, and the translation from OpenGL to Direct3D is done by the ANGLE project.
ANGLE has its own shader validator and preprocessor. And hence you can see differences between Windows and other operating systems even though you're using the same hardware. ANGLE was created because on Windows the Direct3D support is typically much better than the OpenGL support, and because it allows more control over the implementation and its conformance.
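As a possible workaround (just a sketch, since the surrounding shader code with f0, f1, ... and the other variables is not shown here): GLSL ES 1.00 still supports function-like macros, so the function name can be passed as a macro argument instead of being pasted together with ##:
// No token pasting: the caller supplies the function name directly.
#define addf(fn, index) if (weights[i + index] > 0.) r += weights[i + index] * fn(p);

// Usage inside the shader body, assuming f0, f1, ..., weights, i, r and p exist:
addf(f0, 0)
addf(f1, 1)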

Is ARB_texture_multisample available for OpenGL ES 2.0?

Basically, what is needed to perform multisampled deferred shading.
To expand a bit: I'm not actually all that interested in Deferred shading per se, but what is of key importance is allowing the storage and retrieval of sub-pixel sample data for antialiasing purposes: I need to be able to control the resolve, or at least do some operations before resolving multisampled buffers.
All the major extensions for OpenGL ES are listed here: http://www.khronos.org/registry/gles/
And as far as I know, currently no major OpenGL ES implementation provides individual sample resolving through OpenGL ES. The only thing you can do is copy the multisampled buffer into a normal texture and then access the "normal" samples.
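To illustrate that resolve-then-sample path, here is a rough sketch assuming an OpenGL ES 3.0 context (plain ES 2.0 needs a vendor extension such as GL_EXT_multisampled_render_to_texture instead); the function name, the 4x sample count and the buffer names are placeholders:
#include <GLES3/gl3.h>

/* Render into a multisampled FBO, then resolve into a single-sample texture
   that a later shading pass can sample from. Per-sample data is lost in the blit. */
static GLuint render_and_resolve(GLsizei width, GLsizei height)
{
    GLuint msaaFbo, msaaColor, resolveFbo, resolveTex;

    /* Multisampled color renderbuffer + FBO to render the scene into. */
    glGenRenderbuffers(1, &msaaColor);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColor);

    /* ... draw the scene into msaaFbo here ... */

    /* Single-sample texture + FBO that receives the resolved image. */
    glGenTextures(1, &resolveTex);
    glBindTexture(GL_TEXTURE_2D, resolveTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &resolveFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, resolveFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, resolveTex, 0);

    /* The blit performs the resolve: samples are averaged into resolveTex. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);

    return resolveTex;
}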

GLSL PointSprite for particle system

I'm using a ParticleSystem with PointSprites (inspired by the Cocos2D source). But I wonder how to rebuild the functionality for OpenGL ES 2.0:
glEnable(GL_POINT_SPRITE_OES);
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
glPointSizePointerOES(GL_FLOAT,sizeof(PointSprite),(GLvoid*) (sizeof(GL_FLOAT)*2));
glDisableClientState(GL_POINT_SIZE_ARRAY_OES);
glDisable(GL_POINT_SPRITE_OES);
These generate BAD_ACCESS errors when using an OpenGL ES 2.0 context.
Should I simply go with two TRIANGLES per PointSprite? But that's probably not very efficient (overhead for extra vertices).
EDIT:
So, my new problem with the suggested solution from:
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
is how to pass many different sizes in one batched call. I thought of using an attribute instead of a uniform, but then I would always have to pass a point size to my shaders, even when I'm not drawing GL_POINTS. So maybe a second shader (a shader used only for GL_POINTS)?! I'm not sure about the overhead of switching shaders every frame in the draw routine (because when the particle system is used, I naturally also want to render regular GL_TRIANGLES without a point size)... Any ideas on this?
Doing what I already pointed to in the comments, i.e. the approach from this answer, is what you need: https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
As for which approach to take: you can either use different shaders for different types of drawables in your application, or add another boolean uniform to your shader and enable or disable setting gl_PointSize through your shader code; that part is up to you. What you need to keep in mind is that changing the shader program is one of the most costly operations, so drawing objects of the same type in a batch is the better option in that case. I'm not really sure whether an if statement in your shader code will have a significant performance impact.
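For reference, a minimal ES 2.0 shader pair along the lines of the linked answer (attribute and uniform names are placeholders): the per-particle size comes in through a vertex attribute and drives gl_PointSize, and the fragment shader samples with gl_PointCoord instead of the old fixed-function point sprite texture coordinate generation.
// vertex shader
attribute vec4 a_position;
attribute float a_pointSize;   // replaces GL_POINT_SIZE_ARRAY_OES
uniform mat4 u_mvp;

void main(void)
{
    gl_Position = u_mvp * a_position;
    gl_PointSize = a_pointSize;
}

// fragment shader
precision mediump float;
uniform sampler2D u_texture;

void main(void)
{
    // gl_PointCoord replaces the fixed-function GL_POINT_SPRITE_OES texcoords
    gl_FragColor = texture2D(u_texture, gl_PointCoord);
}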
