Does anyone know how to enable blending in OpenGL ES (Android) on an HTC Desire? I am trying to draw colored triangles, using the alpha value of the color buffer to blend them with the background (or with another triangle).
It works both on the emulator (2.1) and on an HTC Hero running 2.1, but not on my Desire running 2.2. Is there some hardware difference between a Hero and a Desire that causes this?
The relevant parts of the code are (not in order):
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
private final static float[] colors = {
1f, 0f, 0f, 0.5f, // point 0 red
1f, 0f, 0f, 0.5f, // point 1 red
1f, 0f, 0f, 0.5f, // point 2 red
1f, 0f, 0f, 0.5f, // point 3 red
1f, 0f, 0f, 0.5f, // point 4 red
1f, 0f, 0f, 0.5f, // point 5 red
1f, 0f, 0f, 0.5f, // point 6 red
1f, 0f, 0f, 0.5f, // point 7 red
};
P.S. I can provide more code if someone needs it...
Jonas, your comment about lighting seems right on, so now I think we have an answer. The OpenGL ES 1.1.12 specification states: "The value of A produced by lighting is the alpha value associated with d_cm", where d_cm is the material diffuse color.
If you have enabled COLOR_MATERIAL, then the material diffuse color and the material ambient color are both taken from the current vertex color. This would imply the Desire is incorrect and the emulator is correct.
If you have disabled COLOR_MATERIAL (the default state), then the diffuse material color is whatever was set with glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, ptrTo4Floats). This would imply that the Desire is correct and the emulator is incorrect.
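If that diagnosis is right, there are two ways to make the vertex alpha reach the blend stage. Here is a minimal sketch of both (plain C against OpenGL ES 1.1; my own illustration, not code from the question):
#include <GLES/gl.h>

/* Option 1: let the material track the per-vertex color. With
 * GL_COLOR_MATERIAL enabled, the ambient and diffuse material colors
 * (including alpha) follow the color array, so the 0.5 alpha above
 * survives lighting and reaches the blend stage. */
void useVertexColorForMaterial(void)
{
    glEnable(GL_COLOR_MATERIAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}

/* Option 2: keep GL_COLOR_MATERIAL disabled (the default) and set the
 * diffuse alpha explicitly through the material state instead. */
void useExplicitMaterial(void)
{
    const GLfloat semiTransparentRed[] = { 1.0f, 0.0f, 0.0f, 0.5f };
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, semiTransparentRed);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}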
I have started to use the Windows Composition API in UWP applications to animate elements of the UI.
Visual elements expose RotationAngleInDegrees and RotationAngle properties as well as a RotationAxis property.
When I animate a rectangular object's RotationAngleInDegrees value around the Y axis, the rectangle rotates as I would expect, but in a 2D application window it does not appear to be displayed with a 2.5D projection.
Is there a way to get the 2.5D projection effect on rotations with the Composition API?
It depends on the effect you want to have. There is a Fluent Design app sample on GitHub, and here is the link. You can download the demo from the Store, and you can get some ideas from the depth samples. For example, "flip to reveal" shows a way to rotate an image card; you can find the source code here. For more details, please check the sample and the demo.
In general, the animation rotates around the X axis:
rectanglevisual.RotationAxis = new System.Numerics.Vector3(1f, 0f, 0f);
Then use a rotation animation driven by RotationAngleInDegrees.
It is also possible to do this directly on the XAML platform by using a PlaneProjection on the Image control.
As the sample that @BarryWang pointed me to demonstrates, it is necessary to apply a TransformMatrix to the page (or a parent container) before executing the animation to get the 2.5D effect with rotation or other spatial-transformation animations with the Composition API.
private void UpdatePerspective()
{
Visual visual = ElementCompositionPreview.GetElementVisual(MainPanel);
// Get the size of the area we are enabling perspective for
Vector2 sizeList = new Vector2((float)MainPanel.ActualWidth, (float)MainPanel.ActualHeight);
// Setup the perspective transform.
Matrix4x4 perspective = new Matrix4x4(
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, -1.0f / sizeList.X,
0.0f, 0.0f, 0.0f, 1.0f);
// Set the parent transform to apply perspective to all children
visual.TransformMatrix =
Matrix4x4.CreateTranslation(-sizeList.X / 2, -sizeList.Y / 2, 0f) * // Translate to origin
perspective * // Apply perspective at origin
Matrix4x4.CreateTranslation(sizeList.X / 2, sizeList.Y / 2, 0f); // Translate back to original position
}
My question is quite trivial, I believe. I'm using OpenGL ES 2.0 to draw a simple 2D scene.
I have a background texture that stretches over the whole screen, and another texture of a flower (or shall I say a sprite?) that is drawn at a specific location on the screen.
So the trivial way I can think of doing it is to call glDrawArrays twice: once with the vertices of the background texture, and once with the vertices of the flower texture.
Is that the right way? If so, does that mean that for 10 flowers I'll need to call glDrawArrays 10 times?
And what about blending? If I want to blend the flower with the background, I need both the background and flower pixel colors, and that may be a problem with two draws, no?
Or is it possible to do it in one draw? If so, how can I create a shader that knows whether it is currently processing a background texture vertex or a flower texture vertex?
Or is it possible to do it in one draw?
The problem with one draw is that the shader needs to know whether the current vertex is a background vertex (then use the background texture color) or a flower vertex (then use the flower texture color), and I don't know how to do that.
Here is how I use one draw call to draw the background image stretched over the whole screen and the flower at half size, centered.
- (void)renderOnce {
//... set program, clear color..
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, backgroundTexture);
glUniform1i(backgroundTextureUniform, 2);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, flowerTexture);
glUniform1i(flowerTextureUniform, 3);
static const GLfloat allVertices[] = {
-1.0f, -1.0f, // background texture coordinates
1.0f, -1.0f, // to draw in whole screen
-1.0f, 1.0f, //
1.0f, 1.0f,
-0.5f, -0.5f, // flower texture coordinates
0.5f, -0.5f, // to draw half screen size
-0.5f, 0.5f, // and centered
0.5f, 0.5f, //
};
// both background and flower texture coords use the whole texture
static const GLfloat backgroundTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat flowerTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, 0, 0, allVertices);
glVertexAttribPointer(backgroundTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, backgroundTextureCoordinates);
glVertexAttribPointer(flowerTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, flowerTextureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
You have two choices:
Call glDrawArrays for every texture you want to draw. This will be slow if you have more than 10-20 textures; to speed it up, though, you can use hardware VBOs.
Batch the vertex data (positions, texture coordinates, colors) of all the sprites you want to draw into one array, use a texture atlas (a texture that contains all of the pictures you want to draw), and draw it all with one glDrawArrays call.
The second way is obviously the better and the right one. To get an idea of how to do it, look at my answer here.
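As a rough illustration of the batching idea, here is a minimal sketch in C against OpenGL ES 2.0 (the attribute parameters, the atlas layout with the background in the left half and the flower in the right half, and the two-sprite setup are all my own assumptions, not the poster's code):
#include <OpenGLES/ES2/gl.h>

/* Two sprites batched into one interleaved array: x, y, s, t per vertex.
 * Each quad is expressed as two triangles (6 vertices) so the whole
 * batch can be submitted with a single glDrawArrays call. The s/t
 * values select different sub-rectangles of one texture atlas. */
static const GLfloat batch[] = {
    /* sprite 0: background quad, atlas left half (s in [0.0, 0.5]) */
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 0.5f, 0.0f,
     1.0f,  1.0f, 0.5f, 1.0f,
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f,  1.0f, 0.5f, 1.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
    /* sprite 1: flower quad, atlas right half (s in [0.5, 1.0]) */
    -0.5f, -0.5f, 0.5f, 0.0f,
     0.5f, -0.5f, 1.0f, 0.0f,
     0.5f,  0.5f, 1.0f, 1.0f,
    -0.5f, -0.5f, 0.5f, 0.0f,
     0.5f,  0.5f, 1.0f, 1.0f,
    -0.5f,  0.5f, 0.5f, 1.0f,
};

void drawBatch(GLuint positionAttribute, GLuint texCoordAttribute,
               GLuint atlasTexture)
{
    const GLsizei stride = 4 * sizeof(GLfloat);
    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE,
                          stride, batch);
    glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE,
                          stride, batch + 2);
    glEnableVertexAttribArray(positionAttribute);
    glEnableVertexAttribArray(texCoordAttribute);
    glDrawArrays(GL_TRIANGLES, 0, 12); /* both sprites, one draw call */
}
Because both quads go out in one draw call and primitives are still rasterized in submission order, the flower triangles are blended over the already-written background pixels, so the blending concern from the question is handled by the ordinary glEnable(GL_BLEND) state.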
I am using OpenGL ES for my iPhone game. To scale and rotate my object, I do this:
glScalef( scaleX , scaleY ,1);
glRotatef(rotationZ, 0.0f, 0.0f, 1.0f);
I am using an orthographic projection set up with glOrthof(-1, 1, -1, 1, -1, 1). My problem is that when I rotate objects, the image gets skewed. I understand why that is happening: I am scaling with respect to the screen size, so rotating changes the image's proportions.
What can I do to prevent it from getting skewed?
glViewport(0,0, (GLint)screenWidth, (GLint)screenHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1,1,-1,1,-1,1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(positionX, positionY,0.0f);
glScalef(scaleX , scaleY ,1);
glRotatef(rotationZ, 0.0f, 0.0f, 1.0f);
Use an ortho projection that matches the aspect ratio of your screen rather than just passing a bunch of ones. Unless you have a square screen, your left/right range shouldn't be the same as your top/bottom range, or you will see skew.
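For example, a minimal sketch (plain C against OpenGL ES 1.1, reusing the screenWidth/screenHeight values from the question; my own illustration, not the answerer's code):
#include <OpenGLES/ES1/gl.h>

/* Widen the X extent of the ortho volume by the screen's aspect ratio
 * so that one unit spans the same number of pixels in X and Y; a pure
 * rotation then no longer stretches the object. */
void setupProjection(int screenWidth, int screenHeight)
{
    GLfloat aspect = (GLfloat)screenWidth / (GLfloat)screenHeight;
    glViewport(0, 0, (GLint)screenWidth, (GLint)screenHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}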
Using a framebuffer:
When rendering, the texture appears upside down; here are the vertices and texture coordinates.
By the way, rendering without creating a framebuffer renders the texture correctly.
private final float[] mVerticesData =
{
-1f, 1f, 0.0f, // Position 0
0.0f, 0.0f, // TexCoord 0
-1f, -1f, 0.0f, // Position 1
0.0f, 1.0f, // TexCoord 1
1f, -1f, 0.0f, // Position 2
1.0f, 1.0f, // TexCoord 2
1f, 1f, 0.0f, // Position 3
1.0f, 0.0f // TexCoord 3
};
Any help please...
Thanks
When uploading 2D texture images to OpenGL, it expects the data to be specified from bottom to top, even though usually images are in memory from top to bottom. You seem to have inverted your texture coordinates to work around this problem.
You should instead flip the texture data before uploading it to OpenGL and keep your texture coordinates intact. If you do that, the same texture coordinates work for both image and FBO textures.
So the solution is to flip the bitmap before calling GLUtils.texImage2D, and to write your vertices as:
private final float[] mVerticesData =
{
-1f, 1f, 0.0f, // Position 0
0.0f, 1.0f, // TexCoord 0
-1f, -1f, 0.0f, // Position 1
0.0f, 0.0f, // TexCoord 1
1f, -1f, 0.0f, // Position 2
1.0f, 0.0f, // TexCoord 2
1f, 1f, 0.0f, // Position 3
1.0f, 1.0f // TexCoord 3
};
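For reference, here is what the flip itself looks like in plain C (on Android you would flip the Bitmap object before GLUtils.texImage2D instead; the tightly packed RGBA layout here is an assumption of this sketch):
#include <GLES2/gl2.h>
#include <stdlib.h>
#include <string.h>

/* Flip a tightly packed RGBA image vertically in place, so that row 0
 * becomes the bottom row as OpenGL expects, then upload it. */
void uploadFlipped(unsigned char *pixels, int width, int height)
{
    const int rowBytes = width * 4;
    unsigned char *tmp = malloc(rowBytes);
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top = pixels + y * rowBytes;
        unsigned char *bottom = pixels + (height - 1 - y) * rowBytes;
        memcpy(tmp, top, rowBytes);
        memcpy(top, bottom, rowBytes);
        memcpy(bottom, tmp, rowBytes);
    }
    free(tmp);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}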
By the way, rendering without creating a framebuffer renders the texture correctly.
I think it actually doesn't. With all transformations set to identity and texture coordinates matching vertex coordinates, i.e. S=X, T=Y, OpenGL assumes the origin of the texture data to be in the lower left (with the notable exception of cube maps, which are different beasts). Framebuffer color attachments, in your case your texture, follow that convention.
Your texture T coordinates run antiparallel to the Y vertex coordinates, which means that with an all-identity transformation setup you flip the image "upside down".
However, most image file formats assume the origin in the upper left, and if you upload such data as-is to an OpenGL texture, that adds another flip; together with your texture-coordinate flip, the two cancel out.
So it's very likely that, in fact, your regular texture code path is the one that is "flipped".
This is OpenGL ES on an iPhone 4.
I'm drawing a scene using lighting and materials. Here is a snippet of my code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1, 1, -1, 1, -1, 1);
CGFloat ambientLight[] = { 0.5f, 0.5f, 0.5f, 1.0f };
CGFloat diffuseLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };
CGFloat direction[] = { 0.0f, 0.0f, -20.0f, 0 };
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight);
glLightfv(GL_LIGHT0, GL_POSITION, direction);
glShadeModel(GL_FLAT);
glEnable(GL_LIGHTING);
glDisable(GL_COLOR_MATERIAL);
float blankColor[4] = {0,0,0,1};
float whiteColor[4] = {1,1,1,1};
float blueColor[4] = {0,0,1,1};
glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor);
glEnable(GL_CULL_FACE);
glVertexPointer(3, GL_FLOAT, 0, verts.pdata);
glEnableClientState(GL_VERTEX_ARRAY);
glNormalPointer(GL_FLOAT, 0, normals.pdata);
glEnableClientState(GL_NORMAL_ARRAY);
glDrawArrays (GL_TRIANGLES, 0, verts.size/3);
The problem is that instead of seeing the BLUE diffuse color, I see white. It fades as I rotate the model's side away, but I can't understand why it's not using my blue color.
BTW, if I change glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor) to glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, blueColor), then I do see the blue color. If I call glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor) and then glMaterialfv(GL_BACK, GL_DIFFUSE, blueColor), I see white again. So it looks like GL_FRONT_AND_BACK works, but the other combinations show white. Can anyone explain this to me?
This is because of the winding order (clockwise vs. counter-clockwise):
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling of either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE).
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension…
from here
You can also read more here.
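To tie that back to the question, here is a minimal sketch (my own illustration, not the poster's code) of making the winding, culling, and material state agree. One additional note: the OpenGL ES 1.1 specification restricts glMaterialfv to GL_FRONT_AND_BACK as its face parameter, so on ES the GL_FRONT and GL_BACK calls are likely rejected with an error, leaving the default grayish diffuse of (0.8, 0.8, 0.8, 1.0), which shows up as "white".
#include <OpenGLES/ES1/gl.h>

/* Make front-face winding explicit (counter-clockwise is the default),
 * cull only the back faces, and set the material for both faces, which
 * is the only face value OpenGL ES 1.1 accepts for glMaterialfv. */
void setupMaterialAndCulling(void)
{
    const GLfloat blueColor[] = { 0.0f, 0.0f, 1.0f, 1.0f };

    glFrontFace(GL_CCW);
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);

    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, blueColor);
}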