OpenGL ES 2.0 drawing more than one texture

My question is quite trivial, I believe. I'm using OpenGL ES 2.0 to draw a simple 2D scene.
I have a background texture that stretches across the whole screen and another texture of a flower (or shall I say sprite?) that is drawn at a specific location on screen.
The obvious way I can think of doing this is to call glDrawArrays twice: once with the vertices of the background quad and once with the vertices of the flower quad.
Is that the right way? If so, does that mean that for 10 flowers I'll need to call glDrawArrays 10 times?
And what about blending? If I want to blend the flower with the background, I need both the background and flower pixel colors, and that could be a problem with two draw calls, no?
Or is it possible to do it in one draw? If so, how can I write a shader that knows whether it is currently processing a background vertex or a flower vertex?
The problem with a single draw call is that the shader needs to know whether the current vertex belongs to the background (and should use the background texture color) or to the flower (and should use the flower texture color), and I don't know how to do that.
Here is how I currently set things up for one draw call, with the background image stretched across the whole screen and the flower at half size, centered:
- (void)renderOnce {
    //... set program, clear color..
    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, backgroundTexture);
    glUniform1i(backgroundTextureUniform, 2);

    glActiveTexture(GL_TEXTURE3);
    glBindTexture(GL_TEXTURE_2D, flowerTexture);
    glUniform1i(flowerTextureUniform, 3);

    static const GLfloat allVertices[] = {
        -1.0f, -1.0f, // background quad vertices,
         1.0f, -1.0f, // covering the whole screen
        -1.0f,  1.0f,
         1.0f,  1.0f,
        -0.5f, -0.5f, // flower quad vertices,
         0.5f, -0.5f, // half screen size
        -0.5f,  0.5f, // and centered
         0.5f,  0.5f,
    };
    // both background and flower texture coords use the whole texture
    static const GLfloat backgroundTextureCoordinates[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };
    static const GLfloat flowerTextureCoordinates[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };

    glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, allVertices);
    glVertexAttribPointer(backgroundTextureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, backgroundTextureCoordinates);
    glVertexAttribPointer(flowerTextureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, flowerTextureCoordinates);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

You have two choices:
Call glDrawArrays once for every sprite you want to draw. This gets slow once you have more than 10-20 sprites, though you can speed it up by putting the vertex data in a VBO.
Batch the vertex data (positions, texture coords, colors) of all the sprites you want to draw into one array, use a texture atlas (a single texture that contains all of the pictures you want to draw), and draw everything with one glDrawArrays call.
The second way is clearly the better and the right one. To get an idea of how to do it, look at my answer here.
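As a rough illustration of the second approach, here is a minimal sketch (not production code) that assumes both images have been packed side by side into one atlas texture; atlasTexture, textureUniform, positionAttribute and texCoordAttribute are placeholder handles:
static const GLfloat spriteVertices[] = {
    /* background quad, two triangles covering the whole screen */
    -1.0f, -1.0f,   1.0f, -1.0f,  -1.0f,  1.0f,
     1.0f, -1.0f,   1.0f,  1.0f,  -1.0f,  1.0f,
    /* flower quad, half size, centered */
    -0.5f, -0.5f,   0.5f, -0.5f,  -0.5f,  0.5f,
     0.5f, -0.5f,   0.5f,  0.5f,  -0.5f,  0.5f,
};
static const GLfloat spriteTexCoords[] = {
    /* background: left half of the atlas (assumed layout) */
    0.0f, 0.0f,   0.5f, 0.0f,   0.0f, 1.0f,
    0.5f, 0.0f,   0.5f, 1.0f,   0.0f, 1.0f,
    /* flower: right half of the atlas (assumed layout) */
    0.5f, 0.0f,   1.0f, 0.0f,   0.5f, 1.0f,
    1.0f, 0.0f,   1.0f, 1.0f,   0.5f, 1.0f,
};

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, atlasTexture);   /* one atlas shared by all sprites */
glUniform1i(textureUniform, 0);

glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, spriteVertices);
glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE, 0, spriteTexCoords);
glEnableVertexAttribArray(positionAttribute);
glEnableVertexAttribArray(texCoordAttribute);

glDrawArrays(GL_TRIANGLES, 0, 12);            /* 2 sprites * 6 vertices, one call */
With GL_TRIANGLES and six vertices per sprite, any number of quads can go into the same arrays and still be drawn with a single call, and the shader never needs to know which sprite a vertex belongs to.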

Related

OpenGL - ES drawing and blending

I'm developing the program described below.
I draw two triangles at different depths.
In the example below, I'd like to split the green triangle into a visible part and a hidden part. Then, using blending, the hidden part of the green triangle should be drawn transparent, while the visible part keeps its original color.
I'm writing the code in OpenGL ES (via JNI), and I have two questions.
First:
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glUseProgram(gProgram);
glGetUniformLocation(gProgram, "vColor");

const GLfloat gTriangleVertices1[] =
{
    -0.5f, -0.5f, -0.5f,
     0.0f,  0.5f, -0.5f,
     0.5f, -0.5f, -0.5f,
};
float color1[] = {1.0f, 0.0f, 0.0f};

const GLfloat gTriangleVertices2[] =
{
    -0.7f, 0.0f, 0.3f,
     0.5f, 0.3f, 0.3f,
     0.5f, 0.0f, 0.3f,
};
float color2[] = {0.0f, 1.0f, 0.0f};

int mColorHandle1;
int mColorHandle2;

glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.0f);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], color1[3]);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);

glDepthFunc(GL_GREATER);
//glDepthFunc(GL_LESS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], color2[3]);
glDrawArrays(GL_TRIANGLES, 3, 3);

glDisableVertexAttribArray(gvPositionHandle);
With this code, if I change glDepthFunc(GL_GREATER) to glDepthFunc(GL_LESS), the result shows the visible and hidden parts correctly.
However, I do not understand why it gives the correct result, because I only pass gTriangleVertices1 to glVertexAttribPointer and never pass gTriangleVertices2.
Even though I never hand over the vertices of triangle 2, it still gives me the correct result. Why?
Second question: I think my use of blending is correct (I checked that it works with GLUT/freeglut), so why doesn't it work on OpenGL ES?
///////////////////////// visible part /////////////////////////
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.0f);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], color1[3]);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);

glDepthFunc(GL_LESS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], color2[3]);
glDrawArrays(GL_TRIANGLES, 3, 3);

glDisableVertexAttribArray(gvPositionHandle);
glDisable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS); // same as the initial depth func
///////////////////////// visible part /////////////////////////

///////////////////////// hidden part /////////////////////////
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.0f);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], color1[3]);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);

glDepthFunc(GL_GREATER);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], 0.5f);
glDrawArrays(GL_TRIANGLES, 3, 3);

glDisableVertexAttribArray(gvPositionHandle);
///////////////////////// hidden part /////////////////////////
I only added the blending calls. If I use the visible or hidden part alone, it gives the correct result. However, when I use blending, it gives a strange result: a transparent hidden green triangle.
What's wrong?
The first question:
You introduced a major bug with glDrawArrays(GL_TRIANGLES, 3, 3);, because it reads past the end of the array your attribute pointer refers to. The result of that is undefined, but in your case the compiler happens to have placed the two arrays you defined tightly packed in memory:
const GLfloat gTriangleVertices1[] =
{
    -0.5f, -0.5f, -0.5f,
     0.0f,  0.5f, -0.5f,
     0.5f, -0.5f, -0.5f,
};
const GLfloat gTriangleVertices2[] =
{
    -0.7f, 0.0f, 0.3f,
     0.5f, 0.3f, 0.3f,
     0.5f, 0.0f, 0.3f,
};
so they behave as if you had defined one combined array:
const GLfloat gTriangleVertices[] =
{
    -0.5f, -0.5f, -0.5f,
     0.0f,  0.5f, -0.5f,
     0.5f, -0.5f, -0.5f,
    -0.7f,  0.0f,  0.3f,
     0.5f,  0.3f,  0.3f,
     0.5f,  0.0f,  0.3f,
};
The out-of-bounds read therefore happens to land in the memory where the second triangle's data lives. Don't mistake this for correct code just because it appears to work: it is not correct, and it can break with a different compiler version, platform, or device. Fix it.
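Two straightforward ways to fix it, sketched with the question's variable names (gTriangleVertices in the second option refers to the combined array defined above):
/* Option 1: point the attribute at the second array and start at vertex 0,
   instead of reading past the end of the first array. */
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices2);
glDrawArrays(GL_TRIANGLES, 0, 3);

/* Option 2: actually define the combined array, bind it once,
   and keep the offsets 0 and 3. */
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
glDrawArrays(GL_TRIANGLES, 0, 3);  /* first triangle  */
glDrawArrays(GL_TRIANGLES, 3, 3);  /* second triangle */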
The second question:
Blending and the depth buffer do not play well together: with depth writes enabled, the blended result depends on the order in which the triangles are drawn, so you generally cannot rely on this combination to produce what you want. Avoid it; use one or the other.
If you search the web for how to tackle this, you will not find an answer that exactly matches your situation, because it is quite unusual. What I suggest for your case is to add a stencil buffer.
The first draw call should also write to the stencil buffer, which is then used again by the last draw call. That last call should have the depth test disabled but the stencil test enabled. If you go that route, you can most likely drop the depth buffer entirely and use the stencil buffer alone.
I am sure there are other possible solutions using the alpha channel as well, but in any case remember that blending combined with depth testing is order dependent and will not reliably give the result you are after; the exact outcome can even differ between GPUs and drivers.
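A rough sketch of that suggestion, reusing the question's handles (colorHandle stands in for the color uniform location) and assuming the EGL surface was created with a stencil buffer and that glClear also clears GL_STENCIL_BUFFER_BIT:
/* Pass 1: the red (occluding) triangle writes depth and marks its pixels
   in the stencil buffer. */
glEnable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glUniform4f(colorHandle, 1.0f, 0.0f, 0.0f, 1.0f);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);

/* Pass 2: the visible part of the green triangle, depth-tested against
   the red one; the stencil buffer is left untouched. */
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDepthFunc(GL_LESS);
glUniform4f(colorHandle, 0.0f, 1.0f, 0.0f, 1.0f);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices2);
glDrawArrays(GL_TRIANGLES, 0, 3);

/* Pass 3: the hidden part, drawn only where the red triangle covered the
   screen, blended as a translucent overlay; no depth test needed. */
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glUniform4f(colorHandle, 0.0f, 1.0f, 0.0f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(gvPositionHandle);
The key point is that the last pass no longer needs the depth buffer at all; the stencil mask alone decides where the translucent green is drawn.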

When are the vertex indices used for gl_VertexID determined?

I am trying to understand the behavior of gl_VertexID in vertex shaders. To do that, I render two squares with two glDrawArrays calls, one after another, and I want to apply a red color to only one square using gl_VertexID in the vertex shader:
out vec4 color;
in vec4 tdk_Vertex;

void main(void)
{
    if (gl_VertexID < 4)
    {
        color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
    }
    else
    {
        color = vec4(1.0f, 1.0f, 1.0f, 1.0f);
    }
    gl_Position = tdk_Vertex;
}
I pass color on to the fragment shader. The square coordinates are:
static GLfloat vertices[] =
{
    -0.75f, 0.25f, 0.0f, 1.0f,
    -0.75f, 0.5f,  0.0f, 1.0f,
    -0.25f, 0.5f,  0.0f, 1.0f,
    -0.25f, 0.25f, 0.0f, 1.0f,

     0.25f, 0.25f, 0.0f, 1.0f,
     0.25f, 0.5f,  0.0f, 1.0f,
     0.75f, 0.5f,  0.0f, 1.0f,
     0.75f, 0.25f, 0.0f, 1.0f
};
I make the draw calls as:
for (int i = 0; i < 8; i += 4)
{
    glDrawArrays(GL_TRIANGLE_FAN, i, 4);
}
On an Nvidia card, the two glDrawArrays calls display the result I expected, i.e. one square is rendered red and the other white.
So I want to know: is this the correct behaviour, or should the gl_VertexID indices be generated per glDrawArrays call, so that both squares end up red?
Since I use two glDrawArrays calls, my reading of the specification is that both squares should be red:
http://www.opengl.org/sdk/docs/manglsl/xhtml/gl_VertexID.xml
I want to test this for GLSL ES 300.
In the case of glDrawArrays, gl_VertexID is intended to be the index of the vertex within the vertex arrays, i.e. first + i for the i-th vertex of the call. Your first draw call renders indices in the range [0, 4), so those are the values gl_VertexID takes there. Your second draw call renders indices in the range [4, 8), and those are the values gl_VertexID takes for it. So the result you see on your Nvidia card is the specified behaviour.
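In other words, for the two calls from the question the shader sees the following (a short sketch of the behaviour described above):
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);  /* gl_VertexID = 0, 1, 2, 3 -> red   */
glDrawArrays(GL_TRIANGLE_FAN, 4, 4);  /* gl_VertexID = 4, 5, 6, 7 -> white */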

OpenGL ES render to texture appears upside down

Using a framebuffer object, the rendered texture appears upside down. Here are my vertices and texture coordinates.
By the way, rendering without creating a framebuffer renders the texture correctly.
private final float[] mVerticesData =
{
    -1f,  1f, 0.0f, // Position 0
     0.0f, 0.0f,    // TexCoord 0
    -1f, -1f, 0.0f, // Position 1
     0.0f, 1.0f,    // TexCoord 1
     1f, -1f, 0.0f, // Position 2
     1.0f, 1.0f,    // TexCoord 2
     1f,  1f, 0.0f, // Position 3
     1.0f, 0.0f     // TexCoord 3
};
Any help please ...
thanks
When you upload 2D texture images, OpenGL expects the data to be specified from bottom to top, even though images are usually stored in memory from top to bottom. You seem to have inverted your texture coordinates to work around this problem.
You should instead flip the texture data before uploading it to OpenGL and keep your texture coordinates intact. If you do that, the same texture coordinates work for both image and FBO textures.
So the solution is to flip the bitmap before calling GLUtils.texImage2D and to write your vertices as
private final float[] mVerticesData =
{
    -1f,  1f, 0.0f, // Position 0
     0.0f, 1.0f,    // TexCoord 0
    -1f, -1f, 0.0f, // Position 1
     0.0f, 0.0f,    // TexCoord 1
     1f, -1f, 0.0f, // Position 2
     1.0f, 0.0f,    // TexCoord 2
     1f,  1f, 0.0f, // Position 3
     1.0f, 1.0f     // TexCoord 3
};
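If the pixel data is available as a raw buffer, the flip itself is straightforward; a minimal C sketch (assuming tightly packed RGBA rows, not part of the original answer) is shown below. On Android, a common alternative is to draw the Bitmap through a vertically flipping Matrix before handing it to GLUtils.texImage2D.
#include <stdlib.h>
#include <string.h>

/* Flip raw RGBA pixel rows in place, so the first row handed to
   glTexImage2D is the bottom row of the picture. */
static void flip_rows_vertically(unsigned char *pixels, int width, int height)
{
    size_t stride = (size_t)width * 4;          /* bytes per RGBA row */
    unsigned char *tmp = malloc(stride);
    for (int y = 0; y < height / 2; ++y) {
        unsigned char *top    = pixels + (size_t)y * stride;
        unsigned char *bottom = pixels + (size_t)(height - 1 - y) * stride;
        memcpy(tmp, top, stride);
        memcpy(top, bottom, stride);
        memcpy(bottom, tmp, stride);
    }
    free(tmp);
}

/* Usage before the upload:
   flip_rows_vertically(pixels, width, height);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels); */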
"By the way, rendering without creating a framebuffer renders the texture correctly"
I think it actually doesn't. With all transformations set to identity and texture coordinates matching vertex coordinates, i.e. S=X, T=Y, OpenGL assumes the origin of texture data to be in the lower left (with the notable exception of cube maps, which are different beasts). Framebuffer color attachments, in your case your texture, follow that convention.
Your texture T coordinates are antiparallel to the Y vertex coordinates, which means that with an all-identity transformation setup you flip the image "upside down".
However, most image file formats assume the origin in the upper left, and if you upload such data as-is to an OpenGL texture this adds another flip, which cancels out your texture coordinate flip.
So it is very likely that your regular texture code path is in fact the "flipped" one.

How to Crop and Scale a Texture in OpenGL

I have an input texture that is 852x640 and an output texture that is 612x612. I am passing the input through a shader and want the output to be scaled and cropped properly. I'm having trouble getting the squareCoordinates, textureCoordinates and viewPorts to work properly together.
I do not want to just crop; I want to scale it as well, to keep as much of the image as possible. If I were using Photoshop I'd do this in two steps (in OpenGL I'm trying to do it in one step):
Scale the image to 612x814
Crop off the excess 101px at each side
I'm using standard square vertices and texture vertices:
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};

static const GLfloat squareTextureVertices[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};
I don't exactly know what the viewPort should be.
The viewport would be 612x612 pixels.
To scale and crop, the easiest way is to keep the quad covering the full 612x612 viewport (so your squareVertices stay unchanged) and set the texture coordinates so that the left and right sides of the source image are cropped out:
static const GLfloat squareTextureVertices[] = {
    (852.0f - 640.0f) / 852.0f * 0.5f,        0.0f,
    1.0f - (852.0f - 640.0f) / 852.0f * 0.5f, 0.0f,
    (852.0f - 640.0f) / 852.0f * 0.5f,        1.0f,
    1.0f - (852.0f - 640.0f) / 852.0f * 0.5f, 1.0f
};
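More generally (a sketch, not part of the original answer), the per-side crop fraction for a source that is wider than it is tall and a square destination can be computed instead of hard-coded; with sw = 852 and sh = 640 this reproduces the constants above:
const float sw = 852.0f, sh = 640.0f;
const float crop = (sw - sh) / sw * 0.5f;   /* fraction cropped from each side */

const GLfloat squareTextureVertices[] = {
    crop,        0.0f,
    1.0f - crop, 0.0f,
    crop,        1.0f,
    1.0f - crop, 1.0f,
};

glViewport(0, 0, 612, 612);   /* destination is the 612x612 output texture */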

Orthographic projection in OpenGL

I'm trying to set an orthographic projection using gl.glOrthof...
However, no matter which values I pass to the function, the width and height seem to stay at constant values that don't match my glOrthof arguments.
My surfaceChanged code:
gl.glViewport(0, 0, w, h);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0.0f, 10.0f, 10.0f, 0.0f, 0.0f, 1.0f);
My draw code:
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glBindTexture(GL10.GL_TEXTURE_2D, texture);
((GL11Ext) gl).glDrawTexfOES(positionX, positionY, 0.0f, 1.0f, 1.0f);
Any ideas? Tell me if you need any more information.
The width and height parameters of glDrawTexfOES are in pixels, so instead of
((GL11Ext) gl).glDrawTexfOES(positionX, positionY, 0.0f, 1.0f, 1.0f);
you should use
((GL11Ext) gl).glDrawTexfOES(positionX, positionY, 0.0f, texture_width, texture_height);
The projection and modelview matrices only influence the x,y positioning, not the scaling of the drawn texture. Selecting which part of the texture is used is done with the texture crop rectangle.
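For completeness, a sketch of setting the crop rectangle in the underlying C API (the Java bindings expose the same call through GL11/GL11Ext); texture_width and texture_height are placeholders for the actual texture size:
/* Crop rect is given in texels as {u, v, width, height}; here we use the
   whole texture and draw it at its native size. */
GLint crop[4] = { 0, 0, texture_width, texture_height };
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
glDrawTexfOES(positionX, positionY, 0.0f, texture_width, texture_height);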
