What's the difference between glClient*** and gl***? - opengl-es

I'm learning GLES. There are many pair functions like glClientActiveTexture/glActiveTexture.
What's the difference between them? (Especially the case of glClientActiveTexture)

From the OpenGL documentation:
glActiveTexture selects which texture unit subsequent texture state calls will affect.
glClientActiveTexture selects the vertex array client state parameters to be modified by glTexCoordPointer.
On one hand, glClientActiveTexture controls subsequent glTexCoordPointer calls (with vertex arrays). On the other hand, glActiveTexture affects subsequent glTexCoord calls (used by display lists or immediate mode, which doesn't exist in OpenGL ES, AFAIK).

Adding to Kenji's answer, here's a bit more detail (and bearing in mind that not everything in OpenGL is available in ES).
Terminology
But first, some quick terminology to try to prevent confusion between all the texturish things (oversimplified for brevity).
Texture Unit: A set of texture targets (e.g. GL_TEXTURE_2D). It is activated via glActiveTexture.
Texture Target: A binding point for a texture. It specifies a type of texture (e.g. 2D). It is set via glBindTexture.
Texture: A thing containing texels (image data, e.g. pixels) and parameters (e.g. wrap and filter mode) that comprise a texture image. It is identified by a key/name/ID, a new one of which you get via glGenTextures, and it is actually instantiated via (first call to) glBindTexture with its name.
Summary: A texture unit has texture targets that are bound to textures.
Texture Unit Activation
In order to have multitexturing with 2D textures (for example), wherein more than one 2D texture must be bound simultaneously for the draw, you use different texture units - bind one texture on one unit's 2D target, and bind the next on another unit's 2D target. glActiveTexture is used to specify which texture unit the subsequent call to glBindTexture will apply to, and glBindTexture is used to specify which texture is bound to which target on the active texture unit.
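As a minimal sketch of the above (the texture names tex0 and tex1 are assumed to have been created with glGenTextures beforehand), binding two 2D textures on two different units looks like:

```c
/* Bind tex0 to the 2D target of texture unit 0. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex0);

/* Bind tex1 to the 2D target of texture unit 1. */
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex1);
```

Both textures are now bound simultaneously, each on its own unit, ready for a multitextured draw.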
Since you have multiple textures, you're likely to also have multiple texcoords. Therefore you need some way to communicate to OpenGL/GLSL that one set of texcoords is for one texture/unit, and another set of texcoords is for another. Usually you can do this by just having multiple texcoord attributes in your vertex and corresponding shader. If you're using older rendering techniques that rely on pre-defined vertex attributes in the shader, then you have to do something different, which is where glClientActiveTexture comes in. The confusion arises because it takes a texture-unit argument, but unlike glActiveTexture it does not activate a texture unit in the state machine; rather, it designates which pre-defined gl_MultiTexCoord attribute (in the shader) will source the texcoords described by the subsequent call to glTexCoordPointer.
In a nutshell, glActiveTexture is for texture changes, and glClientActiveTexture is for texcoord changes. Which API calls you use depends on which rendering technique you use:
glActiveTexture sets which texture unit will be affected by subsequent context state calls, such as glBindTexture. It does not affect glMultiTexCoord, glTexCoord, or glTexCoordPointer, because those have to do with texcoords, not textures.
glClientActiveTexture sets which texture unit subsequent vertex array state calls apply to, such as glTexCoordPointer. It does not affect glBindTexture, because that has to do with textures, not texcoords.
It also does not affect glMultiTexCoord, which selects a texture unit directly via its target parameter, nor does it affect glTexCoord, which always targets texture unit 0 (GL_TEXTURE0). That pretty much leaves glTexCoordPointer as the only thing it affects, and glTexCoordPointer is deprecated in favor of glVertexAttribPointer, which doesn't need glClientActiveTexture. As you can see, glClientActiveTexture isn't the most useful call, and is thus deprecated along with the related glWhateverPointer calls.
You'll use glActiveTexture + glBindTexture with all techniques to bind each texture on a specific texture unit. You'll additionally use glClientActiveTexture + glTexCoordPointer when you're using DrawArrays/DrawElements without glVertexAttribPointer.
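A minimal sketch of that pairing (the client-side arrays texcoords0 and texcoords1 are assumed to hold 2D coordinates per vertex):

```c
/* Route texcoords0 to the gl_MultiTexCoord0 attribute... */
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords0);

/* ...and texcoords1 to gl_MultiTexCoord1. */
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords1);
```

Note how glClientActiveTexture only changes which client-side texcoord array the next glTexCoordPointer call describes; it does not touch the bound textures.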
Multitexturing Examples
As OpenGL has evolved over the years, there are many ways in which to implement multitextured draws. Here are some of the most common:
Use immediate mode (glBegin/glEnd pairs) with GLSL shaders targeted for version 130 or older, and use glMultiTexCoord calls to specify the texcoords.
Use glBindVertexArray and glBindBuffer + glDrawArrays/glDrawElements to specify vertex data, and calls to glVertexPointer, glNormalPointer, glTexCoordPointer, etc., to specify vertex attributes during the draw routine.
Use glBindVertexArray + glDrawArrays/glDrawElements to specify vertex data, with calls to glVertexAttribPointer to specify vertex attributes during the init routine.
Here are some examples of the API flow for multitexturing with two 2D textures using each of these techniques (trimmed for brevity):
Using immediate mode:
init
draw
shutdown
vertex shader
fragment shader
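The original code was trimmed, but the draw portion of the immediate-mode flow might look like this sketch (program, vertex positions, and texcoord values are placeholders; textures are assumed already bound to units 0 and 1 during init):

```c
/* draw: per-vertex texcoords supplied via glMultiTexCoord */
glUseProgram(program);
glBegin(GL_TRIANGLES);
    glMultiTexCoord2f(GL_TEXTURE0, s0, t0);  /* feeds gl_MultiTexCoord0 */
    glMultiTexCoord2f(GL_TEXTURE1, s1, t1);  /* feeds gl_MultiTexCoord1 */
    glVertex3f(x, y, z);
    /* ... remaining vertices ... */
glEnd();
```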
Using DrawElements but not VertexAttribPointer:
init
draw
shutdown
vertex shader (same as immediate mode)
fragment shader (same as immediate mode)
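Again the original code was trimmed; the draw portion for this technique might be sketched as follows (vertices, texcoords0/texcoords1, indices, and indexCount are assumed client-side data):

```c
/* draw: texcoord arrays routed to gl_MultiTexCoord0/1 per unit */
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords0);

glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords1);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);
```

This is the one flow where glClientActiveTexture is actually required, since the shaders read the pre-defined gl_MultiTexCoord attributes.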
Using DrawElements and VertexAttribPointer:
init
draw
shutdown
vertex shader
fragment shader
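With the trimmed code in mind, the init portion for this technique might be sketched as follows (vao, vbo, the attribute locations, and the byte offsets tc0Offset/tc1Offset are assumed; note that glClientActiveTexture never appears):

```c
/* init: generic attributes replace the pre-defined gl_MultiTexCoord* */
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(locPosition,  3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glVertexAttribPointer(locTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, (void*)tc0Offset);
glVertexAttribPointer(locTexCoord1, 2, GL_FLOAT, GL_FALSE, 0, (void*)tc1Offset);
glEnableVertexAttribArray(locPosition);
glEnableVertexAttribArray(locTexCoord0);
glEnableVertexAttribArray(locTexCoord1);

/* draw: just bind and issue the call */
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void*)0);
```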
Misc. Notes
glMultiTexCoord populates unused coords as 0,0,0,1.
e.g. glMultiTexCoord2f(target, s, t) yields (s, t, 0, 1)
Always use GL_TEXTURE0 + i, not just i.
When using GL_TEXTURE0 + i, i must be in the range 0 to GL_MAX_TEXTURE_COORDS - 1.
GL_MAX_TEXTURE_COORDS is implementation-dependent, but must be at least two, and is at least 80 as of OpenGL 4.0.
glMultiTexCoord is supported on OpenGL 1.3+, or with ARB_multitexture
Do not call glActiveTexture or glBindTexture between glBegin and glEnd. Rather, bind all needed textures to all needed texture units before the draw call.
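For example, binding N textures to N units before the draw call might look like this sketch (the texture names in textures[] are assumed to exist):

```c
/* Bind each texture to its own unit, outside any glBegin/glEnd pair. */
for (int i = 0; i < textureCount; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);  /* GL_TEXTURE0 + i, not just i */
    glBindTexture(GL_TEXTURE_2D, textures[i]);
}
/* ...then issue the draw call... */
```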

Related

GLES fragment shader, get 0-1 range when using TextureRegion - libGDX

I have a fragment shader in which I use v_texCoords as a base for some effects. This works fine if I use a single Texture, as v_texCoords always ranges from 0 to 1, so the center point is always (0.5, 0.5), for example. If I am drawing from part of a TextureRegion though, my shader messes up because v_texCoords no longer ranges from 0 to 1. Are there any methods or variables I can use to get a consistent 0-1 range in my fragment shader? I want to avoid setting uniforms, as this would mean I need to flush the batch for every draw.
Thanks!
Nothing like this exists at the shader level - TextureRegions are entirely a libgdx construct that doesn't exist at all at the OpenGL ES API level.
Honestly, for what you're trying to do, I'd simply suggest not overloading the texture coordinate for two orthogonal purposes; just add a separate vertex attribute which provides the 0-to-1 number.

GLES2 : glTexImage2D with GL_LUMINANCE give me black screen/texture

I'm trying to render a video from a bayer buffer.
So I create a texture using GL_LUMINANCE/GL_UNSIGNED_BYTE. I apply some shaders onto this texture to generate a RGBA output.
The following call works fine on my PC but does NOT work on the target board (iMX6/GLES2):
glTexImage2D(GL_TEXTURE_2D, 0, textureFormat, m_texture_size.width(), m_texture_size.height(), 0, bufferFormat, GL_UNSIGNED_BYTE, imageData);
On the target board, I have a black texture.
bufferFormat is GL_LUMINANCE.
textureFormat is GL_LUMINANCE.
GLES2 implements a smaller subset of the OpenGL API:
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexImage2D.xml
bufferFormat should be equal to textureFormat. If I try other formats, it works on the PC; on the target board, I get a black screen and some errors reported by glGetError().
Failing other tests
If I try GL_ALPHA, it seems the texture is filled by (0,0,0,1).
If I try GL_RGBA/GL_RGBA (this makes no sense for the application but it checks the HW/API capabilities), I get a non-black texture on the board. Obviously, the image is not what I should expect.
Why does GL_LUMINANCE give me a black texture? How can I make this work?
Guesses:
the texture is not a power of two in dimensions and you have not set a compatible wrapping mode;
you have not set an appropriate mip mapping mode and the shader is therefore sampling a level other than the one you uploaded.
Does setting GL_CLAMP_TO_EDGE* and GL_LINEAR or GL_NEAREST rather than GL_LINEAR_MIPMAP_... resolve the problem?
Per section 3.8.2 of the ES 2 spec (warning: PDF):
Calling a sampler from a fragment shader will return (R, G, B, A) = (0,0,0,1) if any of the following conditions are true:
• A two-dimensional sampler is called, the minification filter is one that requires a mipmap (neither NEAREST nor LINEAR), and the sampler’s associated texture object is not complete, as defined in sections 3.7.1 and 3.7.10.
• A two-dimensional sampler is called, the minification filter is not one that requires a mipmap (either NEAREST or LINEAR), and either dimension of the level zero array of the associated texture object is not positive.
• A two-dimensional sampler is called, the corresponding texture image is a non-power-of-two image (as described in the Mipmapping discussion of section 3.7.7), and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, any of the corresponding texture images are non-power-of-two images, and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, and either the corresponding cube map texture image is not cube complete, or TEXTURE_MIN_FILTER is one that requires a mipmap and the texture is not mipmap cube complete.
... so my guesses are to check the first and third bullet points.
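A sketch of the suggested parameter fix, applied right after binding the texture (the texture name tex is assumed):

```c
glBindTexture(GL_TEXTURE_2D, tex);
/* NPOT-safe wrap modes: ES 2.0 requires CLAMP_TO_EDGE for
   non-power-of-two textures (third bullet above). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
/* Non-mipmapped filters, so an incomplete mipmap chain can't
   force the sampler to return (0,0,0,1) (first bullet above). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```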

How to store and access per fragment attributes in WebGL

I am doing a particle system in WebGL using Three.js, and I want to do all the computation of the particles in the shaders. To achieve that, the positions (for example) of the particles are stored in a texture which is sampled by the vertex shader of each particle (POINT primitive).
The position texture is in fact two render targets which are swapped each frame after being updated off screen. Each pixel of this texture represent a particle.
To update a position, I read one of the render targets (texture2D), do some computation, and write to the other render target (fragment output).
To perform the "do some computation" step, I need some per particle attributes, like its velocity (and a lot of others). Since this step is done in the fragment shader, I can't use the vertex attributes buffers, so I have to store these properties in separate textures and sample each of them in the fragment shader.
It works, but sampling textures is slow as far as I know, and I wonder if there are better ways to do this, like having one vertex per particle, each rendering a single fragment of the position texture.
I know that OpenGL 4 has some alternative ways to deal with this, like UBOs or SSBOs, but I'm not sure about WebGL.

Replacing a VBO in an existing VAO

I have a VAO with VBOs for various vertex attributes: vertex positions, vertex normals, and the element array VBO (all STATIC_DRAW), such that rendering an instance simply requires:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, <count>, <type>, 0);
However, I want to draw multiple instances of an object (I'm restricted to the OS X GL 3.2 core profile BTW) with different vertex texture (s,t) coordinates for each instance. The texcoord VBOs use the STREAM_DRAW hint (although I might get away with DYNAMIC_DRAW).
Is it more efficient to bind the VAO, bind the current texcoord VBO, and set the attribute pointer via glVertexAttribPointer, finalize the VAO with glBindVertexArray(0) and draw the new instance with different texture coordinates? Or does the cost of updating a VAO make this a poor approach? What about updating the texcoord VBO with glBufferSubData in a bound VAO?
I'd really appreciate some feedback before benchmarking separate approaches, since the wrong choice will result in significant refactoring.
Simply create multiple VAOs. Vertex array objects are lightweight, and they are used to set up vertex arrays all at once.
A VBO can be bound to multiple VAOs, making your life easier and your code faster.
If you want another attribute configuration at some point, throw away the old VAO and create a new one.
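A sketch of that approach for your case: the static position/normal/index VBOs are shared across VAOs, and only the texcoord VBO differs per instance (buffer names, vao[], instanceCount, and the attribute locations 0/1/2 are assumptions):

```c
/* Shared STATIC_DRAW buffers: positionVbo, normalVbo, indexVbo.
   Per-instance STREAM_DRAW texcoord buffers: texcoordVbo[i]. */
for (int i = 0; i < instanceCount; ++i) {
    glGenVertexArrays(1, &vao[i]);
    glBindVertexArray(vao[i]);

    glBindBuffer(GL_ARRAY_BUFFER, positionVbo);       /* shared */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(0);

    glBindBuffer(GL_ARRAY_BUFFER, normalVbo);         /* shared */
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(1);

    glBindBuffer(GL_ARRAY_BUFFER, texcoordVbo[i]);    /* per instance */
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(2);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo);  /* shared */
}

/* Drawing instance i then stays as cheap as before: */
glBindVertexArray(vao[i]);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void*)0);
```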

Array of texture identifiers to OpenGL DrawElements/DrawArrays?

An OpenGL ES sequence like this can be used to render multiple objects in one pass:
glVertexPointer(...params..., vertex_Array );
glTexCoordPointer(...params..., texture_Coordinates_Array );
glBindTexture(...params..., one_single_texture_ID );
glDrawArrays( GL_TRIANGLES, 0, number_Vertices );
Here, the vertex array and texture coordinates array can refer to innumerable primitives that can be described in one step to OpenGL.
But do all these primitives' texture coordinates have to reference the one, single texture in the glBindTexture command?
It would be nice to pass in an array of texture identifiers:
glBindTexture(...params..., texture_identifier_array[] );
Here, there would be a texture ID in the array for every primitive shape described in the preceding calls. So, each shape's texture coordinates would pertain to the texture identified in "texture_identifier_array[]".
I can see one option is to place all textures of interest on one large texture that can be referenced as a single entity in the drawing calls. On my platform, this creates an intermediate step with a large bitmap that might cause memory issues.
It would be best for me to be able to pass an array of texture identifiers to the OpenGL ES drawing calls. Can this be done?
No, that's not possible. You could perhaps emulate it by using a texture array and giving your vertices a texture index; in the fragment shader you could then look up the right texture with that index. But I doubt that ES supports texture arrays, and even then I don't know if this would really work, or if a texture atlas solution would be much more efficient.
If you want to render multiple versions of the same geometry (which I doubt), you're looking for instanced rendering, which also isn't supported on ES devices, I think.
So the way to go at the moment will be a texture atlas (multiple textures in one) or just calling glDrawArrays multiple times.
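The multiple-draw-call fallback can be sketched with the question's own identifiers (firstVertex[] and vertexCount[] are assumed to partition the arrays by shape):

```c
/* Describe all vertex data once, then issue one draw per texture. */
glVertexPointer(3, GL_FLOAT, 0, vertex_Array);
glTexCoordPointer(2, GL_FLOAT, 0, texture_Coordinates_Array);

for (int i = 0; i < shapeCount; ++i) {
    glBindTexture(GL_TEXTURE_2D, texture_identifier_array[i]);
    glDrawArrays(GL_TRIANGLES, firstVertex[i], vertexCount[i]);
}
```

The per-call overhead is usually acceptable unless the shape count is very large, at which point the atlas becomes worth the memory cost.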
