OpenGL ES set Texture matrix for different Texturing units

With
glMatrixMode(GL_TEXTURE);
...some matrix operations...
I can change the current texture transformation matrix. However, it seems it does not affect all texture units (I'm using multitexturing).
How can I change the texture matrix for different texture units?
Thanks!

Try using glActiveTexture to select the texture unit whose texture matrix stack you want to modify; each unit has its own stack. This works in OpenGL, and I assume it should also work in OpenGL ES.
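For example, a minimal sketch of the intended usage (my assumption, with two texture units on the fixed-function pipeline):
glActiveTexture(GL_TEXTURE1);      // subsequent texture-matrix calls affect unit 1
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(2.0f, 2.0f, 1.0f);        // e.g. tile the second texture twice
glActiveTexture(GL_TEXTURE0);      // unit 0's texture matrix is left untouched
glMatrixMode(GL_MODELVIEW);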

Related

OpenGL ES 2.0 Vertex Shader Texture Reads not possible from FBO?

I'm currently working on a GPGPU project that uses OpenGL ES 2.0. I have a rendering pipeline that uses framebuffer objects (FBOs) as targets, i.e. the result of each rendering pass is saved in a texture which is attached to an FBO. So far, this works when using fragment shaders. For example, I have the following rendering pipeline:
Preprocessing (downscaling, grayscale conversion)
-> Adaptive Thresholding Pass 1 -> Adapt. Thresh. Pass 2
-> Copy back to CPU
However, I wanted to extend this pipeline by adding a grayscale histogram calculation after the preprocessing step. With OpenGL ES 2.0 this only works with texture reads in the vertex shader, as far as I know [1]. I can confirm that my shaders work in a different program where the input is a "real" image, not a rendered texture that is attached to an FBO. Hence I think it is not possible to read texture data in a vertex shader if it comes from an FBO. Can anyone confirm this assumption or am I missing something? I'm using a Nexus 10 for my experiments.
[1]: It basically works by reading each pixel value from the texture in the vertex shader, then calculating the histogram bin from it and "adding" it in the fragment shader by using alpha blending.
Texture reads within a vertex shader are not a required element in OpenGL ES 2.0, so you'll find some manufacturers supporting them and some not. In fact, there was a weird situation where iOS supported it on some devices for one version of iOS, but not the next (and it's now officially supported in iOS 7). That might be the source of the inconsistency you see here.
To work around this, I implemented a histogram calculation by instead feeding the colors from the FBO (or its attached texture) in as vertices and using a scattering operation similar to what you describe. This doesn't require a texture read of any kind in the vertex shader, but it does involve a round-trip from and to the GPU and potentially a lot of vertices. It works on all OpenGL ES 2.0 hardware, but it can be costly.
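For illustration, a rough sketch of that scattering approach (the attribute name, the 256x1 bin target, and the host-side setup are assumptions of mine, not code from the answer): every input pixel becomes one GL_POINTS vertex, the vertex shader places the point at its bin's x position, and additive blending accumulates the counts.
/* Vertex shader: one vertex per input pixel; a_luminance in [0,1] selects the bin.
   Assumes the render target is a 256x1 FBO. */
static const char *histogramVS =
    "attribute float a_luminance;\n"
    "void main() {\n"
    "    float bin = floor(a_luminance * 255.0) + 0.5;            // bin center, 0..255\n"
    "    gl_Position  = vec4(bin / 128.0 - 1.0, 0.0, 0.0, 1.0);\n"
    "    gl_PointSize = 1.0;\n"
    "}\n";
/* Fragment shader: every point adds a small constant to its bin. */
static const char *histogramFS =
    "precision mediump float;\n"
    "void main() { gl_FragColor = vec4(1.0 / 255.0); }\n";
/* Host side: accumulate with additive blending, one point per input pixel. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDrawArrays(GL_POINTS, 0, pixelCount);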

How to draw point sprites of different sizes in OpenGL?

I'm making a small OpenGL Mac app that uses point sprites. I'm using a vertex array to draw them, and I want to use a similar "array" function to give them all different sizes.
In OpenGL ES, there is a client state called GL_POINT_SIZE_ARRAY_OES, and a corresponding function glPointSizePointerOES() which do exactly what I want, but I can't seem to find an equivalent in standard OpenGL.
Does OpenGL support this in any way?
To expand a little on Fen's answer, the fixed function OpenGL pipeline can't do exactly what you want. It can do 'perspective' points which get smaller as the Z distance increases, but that's all.
For arbitrary point size at each vertex you need a custom vertex shader to set the size for each. Pass the point sizes either as an attribute array (re-use surface normals or tex coords, or use your own attribute index) or in a texture map, say a 1D texture with width equal to size of points array. The shader code example referred to by Fen uses the texture map technique.
OpenGL does not support this OES extension, but you can achieve the same result in other ways.
For the fixed pipeline (OpenGL 1.4 and above), you need to set up the point parameters:
float attenuation[3] = {0.0f, 1.0f, 0.0f};   // constant, linear, quadratic coefficients
glPointParameterfvEXT(GL_POINT_DISTANCE_ATTENUATION, attenuation);
glPointParameterfEXT(GL_POINT_SIZE_MIN, 1.0f);
glPointParameterfEXT(GL_POINT_SIZE_MAX, 128.0f);
glEnable(GL_POINT_SPRITE);
OpenGL will then attenuate the point size with distance for you: the derived size is roughly size * sqrt(1 / (a + b*d + c*d*d)), clamped to the min/max set above.
Shaders
Here is some info for rendering using shaders:
http://en.wikibooks.org/wiki/OpenGL_Programming/Scientific_OpenGL_Tutorial_01
If by "Does OpenGL support this", you mean "Can I do something like that in OpenGL", absolutely.
Use shaders. Pass a 1-dimensional generic vertex attribute that represents your point size, and in your vertex shader write that value to the gl_PointSize output. It's really quite simple.
If you meant, "Does fixed-function OpenGL support this," no.
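A minimal sketch of such a vertex shader (the attribute and uniform names are my own, not from the answer); note that on desktop OpenGL you also have to enable program-controlled point size:
static const char *pointSpriteVS =
    "attribute vec4  a_position;\n"
    "attribute float a_pointSize;      // per-vertex size, fed via glVertexAttribPointer\n"
    "uniform   mat4  u_mvp;\n"
    "void main() {\n"
    "    gl_Position  = u_mvp * a_position;\n"
    "    gl_PointSize = a_pointSize;\n"
    "}\n";
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);   // desktop GL only; ES 2.0 always honors gl_PointSize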

OpenGL ES 1.1: change pixel data on the FBO

Is there a way to get the FBO's pixel data, greyscale it quickly, and then put the image back into that FBO?
If you're using the fixed-function pipeline (ES 1.1), you can use glReadPixels to pull pixel data off the GPU so you can process it directly. Then you'd need to create a texture from that result, and render a quad mapped to the new texture. But this is a fairly inefficient way of accomplishing the result.
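A rough outline of that CPU round trip (width, height, greyTexture and the luminance weights are assumptions for illustration):
// Read back RGBA pixels from the currently bound FBO.
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Greyscale on the CPU using the usual luminance weights.
for (int i = 0; i < width * height; ++i) {
    GLubyte *p = pixels + i * 4;
    GLubyte grey = (GLubyte)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
    p[0] = p[1] = p[2] = grey;
}
// Upload the result into a texture and render a quad with it into the FBO.
glBindTexture(GL_TEXTURE_2D, greyTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
free(pixels);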
If you're using shaders (ES 2.0), you can do this on the GPU directly, which is faster. That means doing the greyscaling in a fragment shader in one of a few ways:
If your rendering is simple to begin with, you can add the greyscale math in your normal fragment shader, and perhaps toggle it with a boolean uniform variable.
If you don't want to mess with greyscale in your normal pipeline, you can render normally to an offscreen FBO (texture), and then render the contents of that texture to the screen's FBO using a special greyscale texturing shader that does the math on sampled texels.
Here's the greyscale math if you need it: https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx Essentially, plug the RGB values into that formula, and use the resulting luminance value in all your channels.
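For the shader route, the core of such a greyscale texturing shader might look roughly like this (a sketch; the varying and uniform names are invented for illustration):
static const char *greyscaleFS =
    "precision mediump float;\n"
    "varying vec2 v_texCoord;\n"
    "uniform sampler2D u_texture;\n"
    "void main() {\n"
    "    vec4 color = texture2D(u_texture, v_texCoord);\n"
    "    float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));   // weights from the linked formula\n"
    "    gl_FragColor = vec4(vec3(luma), color.a);\n"
    "}\n";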

Returning values from an OpenGL ES 2.0 shader

Is it possible to get any values out of an OpenGL ES 2.0 shader? I'd like to use the GPU to do some processing (not 3D). The only thing I could think of is to render to the canvas and then use readPixels to get the colors (preferably in a large 2D array).
Yes, that's called GPGPU. The only way is to draw to a framebuffer or a texture. Here is a tutorial that explains it; just stick to the GLSL version.

Array of texture identifiers to OpenGL DrawElements/DrawArrays?

An OpenGL ES sequence like this can be used to render multiple objects in one pass:
glVertexPointer(...params..., vertex_Array );
glTexCoordPointer(...params..., texture_Coordinates_Array );
glBindTexture(...params..., one_single_texture_ID );
glDrawArrays( GL_TRIANGLES, 0, vertex_Count );
Here, the vertex array and texture coordinates array can describe any number of primitives in a single call to OpenGL.
But do all these primitives' texture coordinates have to reference the one, single texture in the glBindTexture command?
It would be nice to pass in an array of texture identifiers:
glBindTexture(...params..., texture_identifier_array[] );
Here, there would be a texture ID in the array for every primitive shape described in the preceding calls. So, each shape's texture coordinates would pertain to the texture identified in "texture_identifier_array[]".
I can see one option is to place all textures of interest on one large texture that can be referenced as a single entity in the drawing calls. On my platform, this creates an intermediate step with a large bitmap that might cause memory issues.
It would be best for me to be able to pass an array of texture identifiers to the OpenGL ES drawing calls. Can this be done?
No, that's not possible. You could perhaps emulate it by using a texture array and giving your vertices a texture index. Then in the fragment shader you could look up the right texture with that index, but I doubt that ES supports texture arrays, and even then I don't know if this would really work, or whether a texture atlas solution would be much more efficient anyway.
If you want to render multiple versions of the same geometry (which I doubt), you're looking for instanced rendering, which also isn't supported on ES devices, I think.
So the way to go at the moment will be a texture atlas (multiple textures in one) or just calling glDrawArrays multiple times.
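To illustrate the last option (the batch arrays here are hypothetical), each batch binds its own texture and issues its own draw call:
// One draw call per texture: bind it, then draw the range of vertices that use it.
for (int i = 0; i < batchCount; ++i) {
    glBindTexture(GL_TEXTURE_2D, batchTexture[i]);
    glDrawArrays(GL_TRIANGLES, batchFirstVertex[i], batchVertexCount[i]);
}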
