How to sum triangle pixels with OpenGL ES - opengl-es

I am new to OpenGL ES. I am currently reading the docs for OpenGL ES 2.0. I have a triangular 2D mesh and a 2D RGB texture, and I need to compute, for every triangle, the following quantities:
where N is the number of pixels of a given triangle. These quantities are needed for further CPU processing. The idea would be to use GPU rasterization to sum quantities over triangles. I am not able to see how to do this with OpenGL ES 2.0 (which is the most popular version among Android devices). Another question I have is: is it possible to do this type of computation with OpenGL ES 3.0?

I am not able to see how to do this with OpenGL ES 2.0
You can't; the API simply isn't designed to do it.
Is it possible to do this type of computation with OpenGL ES 3.0?
In the general case, no. If you can use OpenGL ES 3.1 and if you can control the input geometry then a viable algorithm would be:
Add a vertex attribute which is the primitive ID for each triangle in the mesh (which we can use as an array index).
Allocate an atomic counter buffer (GL_ATOMIC_COUNTER_BUFFER) with one atomic per primitive, pre-zeroed before the draw.
In the fragment shader, increment the atomic corresponding to the current primitive (loaded from the vertex attribute).
Performance is likely to be pretty horrible though - atomics are generally slow on most GPU implementations.
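As a sketch, the fragment shader side of this could look like the following (the varying name is my own). Note it uses a shader storage buffer with atomicAdd() rather than an atomic_uint array, because GLSL ES 3.10 requires opaque types such as atomic counters to be indexed by dynamically uniform expressions, which a per-primitive ID is not; also note that SSBO support in fragment shaders is optional in ES 3.1, so check GL_MAX_FRAGMENT_SHADER_STORAGE_BLOCKS first.

```glsl
#version 310 es
precision highp float;

// Per-triangle ID, fed in as a vertex attribute and passed down flat
flat in uint v_primitiveID;

// One counter per triangle, zeroed from the CPU before the draw call
layout(std430, binding = 0) buffer PerTriangleCounts {
    uint pixelCount[];
};

out vec4 fragColor;

void main() {
    atomicAdd(pixelCount[v_primitiveID], 1u);
    fragColor = vec4(0.0); // color output is irrelevant; only the side effect matters
}
```

After the draw completes, the counts can be read back to the CPU with glMapBufferRange() for the further processing the question mentions.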

Related

In OpenGL ES, can I use a vertex buffer, array buffer, etc. for shader-shared matrices?

As OpenGL ES does not support shared "uniform blocks", I was wondering if there is a way to store matrices that can be referenced by a number of different shaders. A simple example would be a worldToViewport or worldToEye matrix, which would not change for an entire frame and which all shaders would reference. I saw one post where one uses 3 or 4 dot() calls per vertex to transform it from 4 "column vectors", but I am wondering if there is a way to assign the buffer data to a "mat4" in the shader.
Ah yes, the need for this is WebGL, which at the moment seems to support only OpenGL ES 2.0.
I wonder if it supports indexed attribute buffers, as I assume they don't need to be any particular size relative to the size of the position vertex array.
Then, if one can use a hard-coded or calculated index into the attribute buffer (in the shader), and if one can bind more than one attribute buffer at a time and access all buffers bound to the shader simultaneously...
I see that if all of this is true, it might work. I need a good language/architecture reference on shaders, as I am somewhat new to shader programming; I'm trying to design a wall without knowing the shapes of the bricks :)
Vertex attributes are per-vertex, so there is no way to share vertex attributes amongst multiple vertices.
OpenGL ES 2.0 upwards has CPU-side uniforms, which must be uploaded individually from the CPU at draw time. Uniforms belong to the program object, so for uniforms which are constant for a frame you only have to modify each program once, so the cost isn't necessarily proportional to draw count.
OpenGL ES 3.0 onwards has Uniform Buffer Objects (UBOs) which allow you to load uniforms from a buffer in memory.
I'm not sure what you mean by "doesn't support shared uniform blocks", as that's pretty much what a UBO is, although it won't work on older hardware which only supports OpenGL ES 2.x.
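For illustration, a shared per-frame block under OpenGL ES 3.0 could be declared like this in every shader that needs it (the block and matrix names here are assumptions taken from the question, not a fixed API):

```glsl
#version 300 es

// Declared identically in each program; all programs are then pointed at
// the same buffer via glUniformBlockBinding() followed by
// glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo).
layout(std140) uniform PerFrame {
    mat4 worldToViewport;
    mat4 worldToEye;
};

in vec4 a_position;

void main() {
    gl_Position = worldToViewport * a_position;
}
```

The buffer is then updated once per frame with glBufferSubData(), and every program bound to the same binding point sees the new matrices.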

Create OpenGL ES texture with two float16 values (beginner)

I need to create a texture for an OpenGL ES 2.0 application with the following specs:
Each pixel has two components (let's call them r and g in the fragment shader).
Each pixel component is a 16-bit float.
That means every pixel in the texture takes 4 bytes (2 bytes / 16 bits per component).
The fragment shader should be able to sample the texture as two float16 components.
All formats must be supported on OpenGL ES 2.0 and be as efficient as possible.
How would the appropriate glTexImage2D call look?
Regards
Neither floating point textures nor floating point render targets are supported in OpenGL ES 2.x. The short answer is therefore "you can't do what you are trying to do", at least not natively.
You can emulate higher precision by packing pairs of values into a RGBA8 texture or render target, e.g. the pair of RG values is one value, and BA is the other, but you'll have to pack/unpack the component 8-bit unorms yourself in shader code. This is quite a common solution in deferred rendering G-buffers for example, but can be relatively expensive on some of the lower-end mobile GPU parts (given it's basically just overhead, rather than useful rendering).
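A minimal sketch of the pack/unpack arithmetic, written here as CPU-side C++ for clarity (in a real renderer both halves would be done in GLSL). This gives 16-bit fixed point in [0,1], not true float16, but that is often sufficient for G-buffer style data.

```cpp
#include <cmath>
#include <cstdint>

// Pack a [0,1] value into two 8-bit unorm channels (e.g. R and G of an
// RGBA8 texel) by quantizing to 16 bits and splitting into high/low bytes.
void pack16(float v, uint8_t& hi, uint8_t& lo) {
    uint32_t q = (uint32_t)std::lround(v * 65535.0f); // quantize to 16 bits
    hi = (uint8_t)(q >> 8);
    lo = (uint8_t)(q & 0xFFu);
}

// Recombine the two unorm channels into the original value.
float unpack16(uint8_t hi, uint8_t lo) {
    return (float)(((uint32_t)hi << 8) | lo) / 65535.0f;
}
```

The worst-case round-trip error is half a step of the 16-bit quantization, about 7.6e-6.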

Dealing with lack of glDrawElementsBaseVertex in OpenGL ES

I'm working on porting a Direct3D terrain renderer to Android and just learned that OpenGL did not have an equivalent to the BaseVertexIndex parameter of DrawIndexedPrimitive until version 3.2 introduced the glDrawElementsBaseVertex method. That method is not available in OpenGL ES.
The D3D terrain renderer uses a single, large vertex buffer to hold the active terrain patches in an LRU fashion. The same 16-bit indices are used to draw each patch.
Given the lack of a base vertex index offset in OpenGL ES, I can't use the same indices to draw each patch. Furthermore, the buffer is too large for 16-bit absolute indices. The alternatives I've identified are:
Use one VBO or vertex array per patch.
Use 32-bit indices and generate new indices for every block in the VBO.
Stop using indexing and replicate vertices as needed. Note that most vertices appear in six triangles. Switching to triangle strips could help, but would still double the number of vertices.
None of these seem very efficient compared to what was possible in D3D. Are there any other alternatives?
You didn't specify the exact data layout of your VBOs, but if your base vertex offset is not negative, you can apply an offset when binding the VBO to the vertex attribute (glVertexAttribPointer).
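To illustrate the trick this answer describes (the helper and variable names are mine, not from the question): instead of rebasing the indices, you rebase the attribute pointers themselves, so the same 16-bit index patch can address any block of the large VBO.

```cpp
#include <cstddef>
#include <cstdint>

// Byte offset to pass as the last argument of glVertexAttribPointer() so
// that index 0 addresses the first vertex of the selected terrain patch.
std::size_t baseVertexByteOffset(std::uint32_t baseVertex, std::size_t vertexStride) {
    return static_cast<std::size_t>(baseVertex) * vertexStride;
}

// Usage sketch (needs a live GL context, so shown as comments):
//   glBindBuffer(GL_ARRAY_BUFFER, terrainVbo);
//   glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, stride,
//       reinterpret_cast<const void*>(baseVertexByteOffset(patchBase, stride)));
//   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
```

The cost is one extra glVertexAttribPointer call per attribute per patch, which is usually far cheaper than duplicating index data.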

Is there an alternative for glLineWidth in OpenGL ES 2.0?

glLineWidth is not supported by OpenGL ES 2.0. Is there any alternative to achieve the same in 2.0?
Render triangle strips or triangles instead. This will consume 2x more vertex memory, but should be faster than lines on modern hardware.
If you can target ES 3, you can consider using instanced vertex arrays to lower vertex memory usage.
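As a sketch of the triangle replacement (a CPU-side helper of my own, not a standard API): each line segment is expanded into a quad of the desired width by offsetting the endpoints along the segment's normal; the four corners can then be drawn as a two-triangle strip.

```cpp
#include <array>
#include <cmath>

// Expand the 2D segment (x0,y0)-(x1,y1) into four corner points of a quad
// of the given width, ordered for GL_TRIANGLE_STRIP: {p0+n, p0-n, p1+n, p1-n}.
std::array<float, 8> lineToQuad(float x0, float y0, float x1, float y1, float width) {
    float dx = x1 - x0, dy = y1 - y0;
    float len = std::sqrt(dx * dx + dy * dy);
    // Unit normal to the segment, scaled to half the line width
    float nx = -dy / len * width * 0.5f;
    float ny =  dx / len * width * 0.5f;
    return { x0 + nx, y0 + ny,  x0 - nx, y0 - ny,
             x1 + nx, y1 + ny,  x1 - nx, y1 - ny };
}
```

For instance, a horizontal segment from (0,0) to (10,0) with width 2 produces corners at (0,±1) and (10,±1). For polylines you would additionally need to handle the joins between segments.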

OpenGL ES 2.0 (or Marmalade SDK), and an effect similar to the order of "glRotate()", "glTranslate()" in OpenGL ES 1.x

In OpenGL ES 1.x, one could do glTranslate first and then glRotate to modify where the center of rotation is located (i.e. rotate around a given point). As far as I understand, in OpenGL ES 2.0 matrix computations are done on the CPU side. I am using IwGeom (from the Marmalade SDK) – a typical (probably) matrix package. From the documentation:
Matrices in IwGeom are effectively in 4x3 format, comprising a 3x3
rotation, and a 3-component vector translation.
I find it hard to obtain the same effect using this method. The translation is always applied after the rotation. Moreover, in Marmalade, one also sets the model matrix:
IwGxSetModelMatrix( &modelMatrix );
And, apparently, rotation and translation are also applied in one order: a) rotation, b) translation.
How to obtain the OpenGL ES 1.x effect?
Marmalade's IwGx wraps OpenGL, and it is more similar to GLES 1.x than to GLES 2.0, as it does not require shaders.
glTranslate and glRotate modify the current model-view matrix.
You may replace them with:
CIwFMat viewMat1 = IwGxGetModelMatrix(); // save the current matrix
CIwFMat rot;
rot.SetIdentity();
rot.SetRotZ(.....); // or another matrix rotation function
CIwFMat viewMat2 = viewMat1;
viewMat2.PostMult(rot); // or viewMat2.PreMult(rot), depending on the desired order
IwGxSetModelMatrix(&viewMat2);
// Draw something
IwGxSetModelMatrix(&viewMat1); // restore the previous matrix
If you use GLES 2.0, the matrix might be computed in the vertex shader as well. That can be faster than the CPU, though a CPU with NEON instructions has similar performance on an iPhone 4S.
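The translate-then-rotate effect the question asks about can be reproduced by explicit matrix composition: to rotate around a point p, conjugate the rotation with translations, M = T(p) · R · T(−p). A minimal 2D illustration of the ordering (this is my own sketch, not IwGeom's API; with CIwFMat the same composition would be built with PreMult/PostMult):

```cpp
#include <cmath>

// Minimal 2D affine transform: 2x2 linear part plus a translation.
struct Xform2 {
    float m00, m01, m10, m11, tx, ty;
};

// Counter-clockwise rotation by angle a (radians).
Xform2 rotation(float a) {
    return { std::cos(a), -std::sin(a), std::sin(a), std::cos(a), 0.0f, 0.0f };
}

Xform2 translation(float x, float y) {
    return { 1.0f, 0.0f, 0.0f, 1.0f, x, y };
}

// Composition: apply 'a' first, then 'b' (column-vector convention, b * a).
Xform2 mul(const Xform2& b, const Xform2& a) {
    return { b.m00 * a.m00 + b.m01 * a.m10, b.m00 * a.m01 + b.m01 * a.m11,
             b.m10 * a.m00 + b.m11 * a.m10, b.m10 * a.m01 + b.m11 * a.m11,
             b.m00 * a.tx + b.m01 * a.ty + b.tx,
             b.m10 * a.tx + b.m11 * a.ty + b.ty };
}

void apply(const Xform2& m, float x, float y, float& ox, float& oy) {
    ox = m.m00 * x + m.m01 * y + m.tx;
    oy = m.m10 * x + m.m11 * y + m.ty;
}

// Rotate around point (px, py): translate to the origin, rotate, translate back.
Xform2 rotateAbout(float angle, float px, float py) {
    return mul(translation(px, py), mul(rotation(angle), translation(-px, -py)));
}
```

For example, rotating the point (2, 1) by 90 degrees about (1, 1) yields (1, 2), exactly the "glTranslate then glRotate" behavior of the ES 1.x matrix stack.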
