I have a question regarding glPushMatrix, the matrix transformations, and OpenGL ES. The GLSL guide says that under OpenGL ES the matrices have to be computed:
However, when developing applications in modern versions of OpenGL and
OpenGL ES or in WebGL, the model matrix has to be computed.
and
In some versions of OpenGL (ES), a built-in uniform variable
gl_ModelViewMatrix is available in the vertex shader
As I understand it, gl_ModelViewMatrix is not available under all OpenGL ES specifications. So, are functions like glMatrixMode, glRotate, etc., still valid there? Can I use them to calculate the model matrix? If not, how should I handle those transformation matrices?
First: You shouldn't use the matrix manipulation functions in regular OpenGL either. In old versions they're just too inflexible and redundant, and in newer versions they've been removed entirely.
Second: The source you're mentioning is a Wikibook, which means it's not an authoritative source. In the case of this Wikibook, it's been written to accommodate all versions of GLSL, and some of them, mainly those for OpenGL 2.1, have those built-in variables.
You deal with those matrices by calculating them yourself (no, this is not slower; OpenGL's matrix functions were never GPU-accelerated) and passing them to OpenGL either through glLoadMatrix/glMultMatrix (old versions of OpenGL) or a shader uniform.
If you're planning on doing this on Android, then take a look at the Matrix class:
http://developer.android.com/reference/android/opengl/Matrix.html
It has functions to set up view, frustum, and transformation matrices, as well as some matrix operations.
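For the shader side, a minimal GLSL ES 1.00 vertex shader that consumes CPU-computed matrices through uniforms might look like this (a sketch; the uniform and attribute names here are placeholders, not a fixed convention):

#version 100

// CPU-computed replacements for the removed built-in matrices.
uniform mat4 u_modelViewMatrix;
uniform mat4 u_projectionMatrix;

attribute vec4 a_position;

void main (void)
{
    // Equivalent of the old fixed-function transform
    // gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex.
    gl_Position = u_projectionMatrix * u_modelViewMatrix * a_position;
}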
I am new to OpenGL ES. I am currently reading the docs for the 2.0 version of OpenGL ES. I have a triangular 2D mesh and a 2D RGB texture, and I need to compute, for every triangle, a set of quantities summed over that triangle's rasterized pixels, where N is the number of pixels of a given triangle. These quantities are needed for further CPU processing. The idea would be to use GPU rasterization to sum the quantities over triangles. I am not able to see how to do this with OpenGL ES 2.0 (which is the most popular version among Android devices). Another question I have is: is it possible to do this type of computation with OpenGL ES 3.0?
I am not able to see how to do this with OpenGL ES 2.0
You can't; the API simply isn't designed to do it.
Is it possible to do this type of computation with OpenGL ES 3.0?
In the general case, no. But if you can use OpenGL ES 3.1, and if you can control the input geometry, then a viable algorithm would be:
Add a vertex attribute which is the primitive ID for each triangle in the mesh (which we can use as an array index).
Allocate an atomic counter buffer (GL_ATOMIC_COUNTER_BUFFER) with one pre-zeroed atomic per primitive.
In the fragment shader, increment the atomic corresponding to the current primitive (loaded from the vertex attribute); a sketch follows below.
Performance is likely to be pretty horrible though; atomics generally suck on most GPU implementations.
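A sketch of the fragment-shader side, using a shader storage buffer with atomicAdd rather than an atomic counter array, since an SSBO member can be indexed by an arbitrary per-fragment value (note that fragment-stage SSBO support is optional in ES 3.1, so check GL_MAX_FRAGMENT_SHADER_STORAGE_BLOCKS first). All names here are placeholders:

#version 310 es
precision highp float;

// Primitive index forwarded from the vertex shader; "flat" so it
// is not interpolated across the triangle.
flat in highp int v_primId;

// One counter per triangle, pre-zeroed by the application.
layout(std430, binding = 0) buffer Counters {
    uint pixelCount[];
};

out vec4 o_color;

void main (void)
{
    // Count this fragment against the triangle that produced it.
    atomicAdd(pixelCount[v_primId], 1u);
    o_color = vec4(0.0); // the color output is irrelevant here
}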
Prior to the introduction of compute shaders in OpenGL ES 3.1, what techniques or tricks can be used to perform general computations on the GPU? e.g. I am animating a particle system and I'd like to farm out some work to the GPU. Can I make use of vertex shaders with "fake" vertex data somehow?
EDIT:
I found this example which looks helpful: http://ciechanowski.me/blog/2014/01/05/exploring_gpgpu_on_ios/
You can use vertex shaders and transform feedback to write the results to an application-accessible buffer. The main downside is that you can't have cross-thread data sharing between "work items" like you can with a compute shader, so the two are not 100% equivalent.
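For example, a particle update can be expressed as a vertex shader whose outputs are captured by transform feedback (OpenGL ES 3.0). A minimal sketch with placeholder names; the application has to register v_position and v_velocity with glTransformFeedbackVaryings before linking, and wrap the draw call in glBeginTransformFeedback/glEndTransformFeedback, typically with GL_RASTERIZER_DISCARD enabled:

#version 300 es

// One "work item" per vertex: the particle's current state...
in vec3 a_position;
in vec3 a_velocity;

uniform float u_deltaTime;

// ...and its updated state, captured into a buffer by transform
// feedback instead of being rasterized.
out vec3 v_position;
out vec3 v_velocity;

void main (void)
{
    v_velocity = a_velocity + vec3(0.0, -9.81, 0.0) * u_deltaTime; // gravity
    v_position = a_position + v_velocity * u_deltaTime;
}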
I'm trying to come to terms with the level of detail of a mipmapped texture in an OpenGL ES 2.0 fragment shader.
According to this answer, it is not possible to use the bias parameter of texture2D to access a specific level of detail in the fragment shader. According to this post, the level of detail is instead computed automatically from the parallel execution of adjacent fragments. I'll have to trust that that's how things work.
What I cannot understand is the why of it. Why isn't it possible to access a specific level of detail, when doing so should be very simple indeed? Why does one have to rely on complicated fixed functionality instead?
To me, this seems very counter-intuitive. After all, everything OpenGL-related is evolving away from fixed functionality. And OpenGL ES is intended to cover a broader range of hardware than OpenGL, and therefore supports only the simpler versions of many things. So I would perfectly understand if the developers of the specification had decided that the LOD parameter is mandatory (perhaps defaulting to zero), and that it's up to the shader programmer to work out the appropriate LOD, in whatever way they deem appropriate. Adding a function which does that computation automagically seems like something I'd have expected in desktop OpenGL.
Not providing direct access to a specific level doesn't make any sense to me at all, no matter how I look at it. Particularly since that bias parameter indicates that we are indeed allowed to tweak the level of detail, so apparently this is not about fetching data from memory only for a single level for a bunch of fragments processed in parallel. I can't think of any other reason.
Of course, why questions tend to attract opinions. But since opinion-based answers are not accepted on Stack Overflow, please post your opinions as comments only. Answers, on the other hand, should be based on verifiable facts, like statements by someone with definite knowledge. If there are any records of the developers discussing this fact, that would be perfect. If there is a blog post by someone inside discussing this issue, that would still be very good.
Since Stack Overflow questions should deal with real programming problems, one might argue that asking for the reason is a bad question. Getting an answer won't make that explicit lod access suddenly appear, and therefore won't help me solve my immediate problems. But I feel that the reason here might be due to some important aspect of how OpenGL ES works which I haven't grasped so far. If that is the case, then understanding the motivation behind this one decision will help me and others to better understand OpenGL ES as a whole, and therefore make better use of it in their programs, in terms of performance, exactness, portability and so on. Therefore I might have stated this question as “what am I missing?”, which feels like a very real programming problem to me at the moment.
texture2DLod (...) serves a very important purpose in vertex shader texture lookups, one which is not necessary in fragment shaders.
When a texture lookup occurs in a fragment shader, the fragment shader has access to per-attribute gradients (partial derivatives such as dFdx (...) and dFdy (...)) for the primitive currently being shaded, and it uses this information to determine which LOD to fetch neighboring texels from during filtering.
At the time vertex shaders run, no information about primitives is known and there is no such gradient. The only way to utilize mipmaps in a vertex shader is to explicitly fetch a specific LOD, and that is why that function was introduced.
Desktop OpenGL has solved this problem a little more intelligently, by offering a variant of texture lookup for vertex shaders that actually takes a gradient as one of its inputs. Said function is called textureGrad (...), and it was introduced in GLSL 1.30. ESSL 1.0 is derived from GLSL 1.20, and does not benefit from all the same basic hardware functionality.
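For illustration, a desktop GLSL 1.30 fragment shader using textureGrad might look like this (a sketch; supplying dFdx/dFdy explicitly reproduces what an ordinary lookup computes implicitly):

#version 130

uniform sampler2D tex;
in vec2 uv;
out vec4 fragColor;

void main (void)
{
    // Passing the screen-space gradients explicitly; in a fragment
    // shader this is equivalent to a plain texture(tex, uv) lookup.
    fragColor = textureGrad(tex, uv, dFdx(uv), dFdy(uv));
}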
ES 3.0 does not have this limitation, and neither does desktop GL 3.0. When explicit LOD lookups were introduced into desktop GL (3.0), it could be done from any shader stage. It may just be an oversight, or there could be some fundamental hardware limitation (recall that older GPUs used to have specialized vertex and pixel shader hardware and embedded GPUs are never on the cutting edge of GPU design).
Whatever the original reason for this limitation, it has been rectified in a later OpenGL ES 2.0 extension and is core in OpenGL ES 3.0. Chances are pretty good that a modern GL ES 2.0 implementation will actually support explicit LOD lookups in the fragment shader given the following extension:
GL_EXT_shader_texture_lod
Example of an explicit LOD lookup in a fragment shader:
#version 100
#extension GL_EXT_shader_texture_lod : require
precision mediump float;

// A fragment shader cannot declare attributes; the texture
// coordinates must arrive as a varying from the vertex shader.
varying vec2 tex_st;
uniform sampler2D sampler;

void main (void)
{
    // Note the EXT suffix; that is very important in ESSL 1.00.
    // The LOD argument must be a float, as ESSL 1.00 does not
    // implicitly convert int to float.
    gl_FragColor = texture2DLodEXT (sampler, tex_st, 0.0);
}
Obviously new to OpenGL, I was wondering if it is possible to use a VBO with multiple normal vectors per vertex. My current vertex array order looks like:
j = [x,y,z,r,g,b,a,n1x,n1y,n1z,n2x,n2y,n2z,n3x,n3y,n3z....]
This method requires the shaders to distinguish which normal vector to use, which is what is causing the problem. Any suggestions would be great.
I am also looking for tutorials on using multiple IBOs and VBOs; most tutorials only seem to use one.
You can make the interleaved layout work by supplying the full per-vertex size as the stride and a different byte offset for each attribute to glVertexAttribPointer(); this is the same mechanism used to interleave vertex and texture coordinate data, for example. Alternatively, you could use separate VBOs for the vertex, normal and texture coordinates rather than interleaving them.
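For the interleaved layout from the question (16 floats per vertex), the setup might look like this (a sketch; the attribute locations and the vbo handle, assumed to be already created and filled via glBufferData, are placeholders):

#include <GLES2/gl2.h>

// [x,y,z, r,g,b,a, n1x,n1y,n1z, n2x,n2y,n2z, n3x,n3y,n3z]
const GLsizei stride = 16 * sizeof(GLfloat);

glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Same stride for every attribute, different starting offsets.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (const void*)(0));                     // position
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, stride, (const void*)(3  * sizeof(GLfloat))); // color
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (const void*)(7  * sizeof(GLfloat))); // normal 1
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, (const void*)(10 * sizeof(GLfloat))); // normal 2
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, stride, (const void*)(13 * sizeof(GLfloat))); // normal 3

for (GLuint i = 0; i < 5; ++i)
    glEnableVertexAttribArray(i);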
You can find better examples of using VBOs in the PowerVR SDK, which is a free download at:
http://www.imgtec.com/powervr/insider/sdkdownloads/
In OpenGL ES 1.x, one could call glTranslate first, and then glRotate, to modify where the center of rotation is located (i.e. rotate around a given point). As far as I understand, in OpenGL ES 2.0 matrix computations are done on the CPU side. I am using IwGeom (from the Marmalade SDK), a typical (probably) matrix package. From the documentation:
Matrices in IwGeom are effectively in 4x3 format, comprising a 3x3
rotation, and a 3-component vector translation.
I find it hard to obtain the same effect using this method; the translation is always applied after the rotation. Moreover, in Marmalade one also sets the model matrix:
IwGxSetModelMatrix( &modelMatrix );
And, apparently, rotation and translation are also applied in a fixed order: a) rotation, b) translation.
How to obtain the OpenGL ES 1.x effect?
Marmalade's IwGx wraps OpenGL, and it is more similar to GLES 1.x than to GLES 2.0, as it does not require shaders.
glTranslate and glRotate modify the model-view matrix.
You can replace them with something like this:
CIwFMat viewMat1 = IwGxGetModelMatrix();

CIwFMat rot;
rot.SetIdentity();
rot.SetRotZ(.....); // or another matrix rotation function

CIwFMat viewMat2 = viewMat1;
viewMat2.PostMult(rot); // or viewMat2.PreMult(rot), depending on the order you need

IwGxSetModelMatrix(&viewMat2);
// Draw something
IwGxSetModelMatrix(&viewMat1); // restore the previous model matrix
If you use GLES 2.0, then the matrix can also be computed in the vertex shader, which might be faster than doing it on the CPU; that said, a CPU with NEON instructions has similar performance on an iPhone 4S.
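To recover the GLES 1.x translate-then-rotate effect (rotation about an arbitrary point p), the matrices have to be composed as T(p) * R * T(-p). A standalone C++ sketch of the 2D case, independent of IwGeom (whose multiplication-order conventions differ):

#include <cmath>

// Builds a column-major 4x4 matrix equal to T(p) * Rz(angle) * T(-p),
// i.e. the GLES 1.x sequence glTranslatef(px, py, 0);
// glRotatef(degrees, 0, 0, 1); glTranslatef(-px, -py, 0);.
void rotateAroundPointZ(float out[16], float angleRad, float px, float py)
{
    const float c = std::cos(angleRad);
    const float s = std::sin(angleRad);

    // Column-major, as expected by glLoadMatrixf / glUniformMatrix4fv.
    out[0] = c;    out[4] = -s;   out[8]  = 0.0f; out[12] = px - c * px + s * py;
    out[1] = s;    out[5] = c;    out[9]  = 0.0f; out[13] = py - s * px - c * py;
    out[2] = 0.0f; out[6] = 0.0f; out[10] = 1.0f; out[14] = 0.0f;
    out[3] = 0.0f; out[7] = 0.0f; out[11] = 0.0f; out[15] = 1.0f;
}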