Imagine you're standing on the ground looking up at a cube in the sky. As you tilt your head, the cube moves. I'm trying to replicate this using OpenGL ES on the iPhone by manipulating the tilt of the camera while looking at a simple 3D cube drawn around the origin. I'm using the gluLookAt() function from Cocos2d, which is supposed to emulate the OpenGL version, and it seems that whenever I tinker with any of the values, my cube disappears.
My question is: can you provide a gluLookAt() usage here that will get me started manipulating the camera so I can figure out how this works? I'm really just interested in learning how to tilt the camera along the Y axis.
Here is my current code:
Viewport Configuration
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _viewFramebuffer);
glViewport(0, 0, _backingWidth, _backingHeight);
Projection Matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Maybe this should be a perspective projection?? If so,
// can you provide an example using gluPerspective()?
glOrthof(-_backingWidth, _backingWidth, -_backingHeight, _backingHeight, -1, 1);
ModelView Matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt() // What goes here?
Drawing Code
static const GLfloat cubeVertices[] = {
-1.0, -1.0, 1.0,
1.0, -1.0, 1.0,
-1.0, 1.0, 1.0,
1.0, 1.0, 1.0,
-1.0, -1.0, -1.0,
1.0, -1.0, -1.0,
-1.0, 1.0, -1.0,
1.0, 1.0, -1.0,
};
static const GLushort cubeIndices[] = {
0, 1, 2, 3, 7, 1, 5, 4, 7, 6, 2, 4, 0, 1
};
static const GLubyte cubeColors[] = {
255, 255, 0, 255,
0, 255, 255, 255,
0, 0, 0, 0,
255, 0, 255, 255,
255, 255, 0, 255,
0, 255, 255, 255,
0, 0, 0, 0,
255, 0, 255, 255
};
glVertexPointer(3, GL_FLOAT, 0, cubeVertices);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, cubeColors);
glEnableClientState(GL_COLOR_ARRAY);
glDrawElements(GL_TRIANGLE_STRIP, 14, GL_UNSIGNED_SHORT, cubeIndices);
I'm not completely sure what exactly you want, but here are some explanations:
gluLookAt expects 3 vectors (each as 3 doubles): first the position of the camera (the eye point), then the position you are looking at (the center point), and finally an up-vector that specifies the up direction (it need not be perfectly orthogonal to the viewing direction, as it is re-orthogonalized anyway).
So if you stand at (0,0,5) and look at your cube (that is at the center) and want the y-axis to be the up-direction, you would call gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0) to see your cube in full beauty.
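Dropped into your ModelView section, that would look like the sketch below (the eye distance of 5 is an arbitrary choice of mine that keeps a [-1,1] cube comfortably in view):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,   // eye: camera sits on the +z axis
          0.0, 0.0, 0.0,   // center: look at the cube at the origin
          0.0, 1.0, 0.0);  // up: keep the y-axis pointing up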
If you want to tilt your head to the side, you just rotate the up-vector a bit to the side. If you want to look up without tilting your head, keep the y-axis as the up-vector but change the center point to one above and in front of you (perhaps by rotating it about the eye position). This won't work if you want to look straight up; in that case you also need to change the up-vector to something orthogonal to the y-axis (in addition to setting the center point to a point straight above you, of course).
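For example, here is a sketch of a sideways head-tilt: rotate the up-vector in the xy-plane by an angle (the tilt variable is my own, in radians; eye and center stay as above):
float tilt = 0.3f;  // example tilt angle in radians
// Rotating up from (0,1,0) toward (1,0,0) tilts the head sideways
// while the eye and center points stay fixed.
gluLookAt(0.0, 0.0, 5.0,
          0.0, 0.0, 0.0,
          sin(tilt), cos(tilt), 0.0);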
But I think you want a perspective projection. Your current ortho is at least quite inappropriate for your coordinates, as it specifies a coordinate system in which coordinates are the size of pixels, so your [-1,1] cube is about the size of a pixel on the screen. Try gluPerspective(60.0, ((double)_backingWidth)/_backingHeight, 0.1, 100.0). If you really want an orthographic projection without any realistic perspective distortion, you can use glOrtho, but in that case you should keep the size proportions of the glOrtho parameters and your model's coordinates roughly in sync (that is, don't specify a screen-sized ortho volume while using coordinates in the [-1,1] range).
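In the question's Projection Matrix section, that would look roughly like this (a sketch; the 60-degree field of view and the clip planes are just sensible defaults, not values from a working project):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// 60° vertical field of view, window aspect ratio, near/far planes
// chosen so a camera at z=5 sees the [-1,1] cube comfortably.
gluPerspective(60.0, (double)_backingWidth / _backingHeight, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);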
Related
I'm trying to build a simple 2D game using DirectX9, and I want to be able to use sprite dimensions and coordinates with no scaling applied.
The book that I'm following ("Introduction to 3D Game Programming with DirectX 9.0c" by Frank Luna) shows a trick using Direct3D's sprite functions to render graphics in 2D, but the book's code still sets up a camera using D3DXMatrixLookAtLH and D3DXMatrixPerspectiveFovLH, and the sprite images get scaled in perspective. How do I set up the view and projection so that sprites are rendered at their original dimensions and X-Y coordinates can be addressed as actual pixel locations within the window?
UPDATE
Although this might not be the ideal solution, I did come up with a workaround. I realized that if I set the projection matrix with a 90-degree field of view and the near plane at z=0, then all I have to do is look at the origin (0, 0, 0) with D3DXMatrixLookAtLH and step back by half of the screen width (the height of an isosceles right triangle is half of its base).
So for my client area being 400 x 400, the following settings worked for me:
// get client rect
RECT R;
GetClientRect(hWnd, &R);
float width = (float)R.right;
float height = (float)R.bottom;
// step back by 400/2=200 and look at the origin
D3DXMATRIX V;
D3DXVECTOR3 pos(0.0f, 0.0f, (-width*0.5f) / (width/height)); // see "UPDATE 2" below
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&V, &pos, &target, &up);
d3dDevice->SetTransform(D3DTS_VIEW, &V);
// PI x 0.5 -> 90 degrees, set the near plane to z=0
D3DXMATRIX P;
D3DXMatrixPerspectiveFovLH(&P, D3DX_PI * 0.5f, width/height, 0.0f, 5000.0f);
d3dDevice->SetTransform(D3DTS_PROJECTION, &P);
Turning off all the texturing filters (or setting to D3DTEXF_POINT) seems to get the best pixel-accurate feel.
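For reference, a minimal sketch of what that looks like in Direct3D 9 (sampler stage 0 assumed):
// Point sampling: no interpolation between texels, best for 1:1 sprites.
d3dDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
d3dDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);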
Another important thing to note was that CreateWindowEx() with a requested size of 400 x 400 returned a client area of something like 387 x 362, so I had to check with GetClientRect(), compute the difference, and readjust the window size using SetWindowPos() after the initial creation.
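A sketch of that readjustment (assuming the desired 400 x 400 client area; calling AdjustWindowRect before creation is the usual alternative):
// Measure the actual client area and grow the outer window by the
// difference so the client area ends up exactly 400 x 400.
RECT rc, wr;
GetClientRect(hWnd, &rc);
GetWindowRect(hWnd, &wr);
int dx = 400 - rc.right;
int dy = 400 - rc.bottom;
SetWindowPos(hWnd, NULL, 0, 0,
             (wr.right - wr.left) + dx, (wr.bottom - wr.top) + dy,
             SWP_NOMOVE | SWP_NOZORDER);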
The screenshot below shows the result of taking the steps mentioned above. The original bitmap (right) is rendered with no scaling/stretching applied in the app (left)... finally!
UPDATE 2
I hadn't tested the above method with an aspect ratio other than 1:1. I've adjusted the code: the amount you step back for your camera position should be window_width * 0.5 / aspect_ratio (where aspect_ratio is width/height).
The DirectX Tool Kit SpriteBatch class is designed to do exactly what you describe. When drawing with Direct3D, clip-space coordinates run from (-1,-1) to (1,1), with (-1,-1) in the lower-left corner.
This sets up a matrix that lets you specify screen coordinates with (0,0) in the upper-left corner:
// Compute the matrix.
float xScale = (mViewPort.Width > 0) ? 2.0f / mViewPort.Width : 0.0f;
float yScale = (mViewPort.Height > 0) ? 2.0f / mViewPort.Height : 0.0f;
switch( rotation )
{
case DXGI_MODE_ROTATION_ROTATE90:
return XMMATRIX
(
0, -yScale, 0, 0,
-xScale, 0, 0, 0,
0, 0, 1, 0,
1, 1, 0, 1
);
case DXGI_MODE_ROTATION_ROTATE270:
return XMMATRIX
(
0, yScale, 0, 0,
xScale, 0, 0, 0,
0, 0, 1, 0,
-1, -1, 0, 1
);
case DXGI_MODE_ROTATION_ROTATE180:
return XMMATRIX
(
-xScale, 0, 0, 0,
0, yScale, 0, 0,
0, 0, 1, 0,
1, -1, 0, 1
);
default:
return XMMATRIX
(
xScale, 0, 0, 0,
0, -yScale, 0, 0,
0, 0, 1, 0,
-1, 1, 0, 1
);
}
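You don't normally build this matrix yourself; SpriteBatch applies it internally. A minimal usage sketch with the DirectX Tool Kit for Direct3D 11 (context and textureSRV are assumed to already exist in your app):
#include <SpriteBatch.h>

// Draw takes plain pixel coordinates with (0,0) at the upper-left;
// no scaling is applied to the sprite.
DirectX::SpriteBatch spriteBatch(context);
spriteBatch.Begin();
spriteBatch.Draw(textureSRV, DirectX::XMFLOAT2(100.f, 50.f));
spriteBatch.End();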
In Direct3D 9 the pixel centers were defined a little differently than in Direct3D 10/11/12, so the typical solution in the legacy API was to add a (0.5, 0.5) half-pixel offset to all the positions. You don't need to do this with Direct3D 10/11/12.
I have some problems with OpenGL and luminosity. Let me explain my problem:
I drew this "sprite" (it's only a plane here) with code like this:
sprite.set_active
left, right, top, bottom = 0.0, 1.0, 1.0, 0.0
glPushMatrix
glTranslate(@position.x - 16, @position.y, @position.z)
glRotate(-90 - @window.camera.horizontal_angle, 0, 1, 0)
glScale(chara.width, chara.height, 32.0)
begin
glEnable(GL_BLEND)
glBegin(GL_QUADS)
glColor4f(1.0, 1.0, 1.0, 1.0)
glTexCoord2d(left, top); glVertex3f(0, 1, 0.5)
glTexCoord2d(right, top); glVertex3f(1, 1, 0.5)
glTexCoord2d(right, bottom); glVertex3f(1, 0, 0.5)
glTexCoord2d(left, bottom); glVertex3f(0, 0, 0.5)
glEnd
glDisable(GL_BLEND)
rescue
end
glPopMatrix
My problem is with that line :
glColor4f(1.0, 1.0, 1.0, 1.0)
Well, I can put a number less than 1.0 to get a darker sprite, but I can't do the opposite. How can I do that? How can I make the sprite totally white, for example?
To get full control over your fragment processing, the best approach is using the programmable pipeline, where you can implement exactly what you want with GLSL code.
But there are some options that could work for this case in the fixed pipeline. The simplest one is using a different GL_TEXTURE_ENV_MODE. The default value is GL_MODULATE, which means that the color you specified with glColor4f() is multiplied with the color from the texture. As you found, that allows you to make the texture darker, but not brighter.
You could try using GL_ADD instead. As the name suggests, this will produce the final output as the sum of the texture color and the color from glColor4f(). For example:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);
glColor4f(0.2f, 0.2f, 0.2f, 0.0f);
would add 0.2 to each color component read from the texture.
There is more complex functionality in the fixed pipeline that gives you more control over how texture values are used to generate colors. You can find it by looking for "texture combiners". But in my personal opinion, you're much better off moving to the programmable pipeline if you need something complex enough to require texture combiners.
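For illustration only, here is a rough combiner sketch (untested; GL 1.3+/ES 1.1 enum spellings) that interpolates between the texture color and constant white, so glColor becomes a "whiteness" slider, which directly answers the fully-white case:
// GL_INTERPOLATE computes Arg0*Arg2 + Arg1*(1 - Arg2), so here:
// result = white*primary + texture*(1 - primary).
// Alpha is left at its default (texture alpha * primary alpha).
static const GLfloat white[4] = {1.0f, 1.0f, 1.0f, 1.0f};
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, white);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,  GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB,     GL_CONSTANT);      // Arg0: white
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB,     GL_TEXTURE);       // Arg1: texture
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB,     GL_PRIMARY_COLOR); // Arg2: glColor
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_COLOR);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // blend factor 1.0 -> fully white sprite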
I've been looking into OpenGL and diving into the 3D world. This question relates to OpenGL in general, but I've been using WebGL (which is based on OpenGL ES, I believe). I've had trouble understanding how cubes are drawn. I know that to draw a quad, you create two triangles in (default) CCW order like
1 0
|‾ ‾ ‾ ‾|
| | Indices = 0,1,2, 0,2,3 (2 triangles one face)
| |
|_ _ _ _|
2 3
However, I'm having a bit of trouble understanding the best way to draw a cube. Is there a specific way I should be drawing the cube? For example, is drawing the faces in front->right->bottom->left->top->back the best order or is it some other way. Do I draw the cube faces counter-clockwise too or something? I just need to understand the different ways I could represent a model/cube and why.
It can be done with 2 triangle strips, but strips are usually replaced with indexed triangles for more complex geometry, since indexing becomes more efficient where you would otherwise need multiple strips. I think it can also be done with a single strip if you use a degenerate triangle to connect the two, but indexed drawing is usually better for larger geometry. Quads just get converted into triangles anyway.
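For example, here is a minimal sketch of the indexed approach (my own vertex numbering; it assumes a position attribute bound to slot VERTEX_ARRAY, as in the tutorial below). Eight shared vertices and 36 indices are enough as long as you don't need per-face colors or normals:
static const GLfloat verts[8][3] = {
    {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1},  // 0-3: back  (z=-1)
    {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1}   // 4-7: front (z=+1)
};
static const GLushort idx[36] = {
    4,5,6, 4,6,7,  // front
    1,0,3, 1,3,2,  // back
    5,1,2, 5,2,6,  // right
    0,4,7, 0,7,3,  // left
    7,6,2, 7,2,3,  // top
    0,1,5, 0,5,4   // bottom (all faces wound CCW seen from outside)
};
glEnableVertexAttribArray(VERTEX_ARRAY);
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, 0, verts);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, idx);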
The winding direction matters only when backface culling is enabled. Things also get more complicated once you add texture coordinates and surface normals for lighting: the normals can be used to make the cube look faceted or smooth shaded, though smooth shading is really meant for curved models, like spheres. Here is a tutorial I wrote years ago for OpenGL ES 2.0:
/******************************************************************************
Function DrawCubeSmooth
Return None
Description Draw a cube using Vertex and NormalsPerVertex Arrays and
glDrawArrays with two triangle strips. Because normals are
supplied per vertex, all the triangles will be smooth shaded.
Triangle strips are used instead of an index array. The first
strip is texture mapped but the second strip is not.
******************************************************************************/
void Cube2::DrawCubeSmooth(void)
{
static GLfloat Vertices[16][3] =
{ // x y z
{-1.0, -1.0, 1.0}, // 1 left First Strip
{-1.0, 1.0, 1.0}, // 3
{-1.0, -1.0, -1.0}, // 0
{-1.0, 1.0, -1.0}, // 2
{ 1.0, -1.0, -1.0}, // 4 back
{ 1.0, 1.0, -1.0}, // 6
{ 1.0, -1.0, 1.0}, // 5 right
{ 1.0, 1.0, 1.0}, // 7
{ 1.0, 1.0, -1.0}, // 6 top Second Strip
{-1.0, 1.0, -1.0}, // 2
{ 1.0, 1.0, 1.0}, // 7
{-1.0, 1.0, 1.0}, // 3
{ 1.0, -1.0, 1.0}, // 5 front
{-1.0, -1.0, 1.0}, // 1
{ 1.0, -1.0, -1.0}, // 4 bottom
{-1.0, -1.0, -1.0} // 0
};
static GLfloat NormalsPerVertex[16][3] = // One normal per vertex (not unit length; normalize before lighting).
{ // x y z
{-0.5, -0.5, 0.5}, // 1 left First Strip
{-0.5, 0.5, 0.5}, // 3
{-0.5, -0.5, -0.5}, // 0
{-0.5, 0.5, -0.5}, // 2
{ 0.5, -0.5, -0.5}, // 4 back
{ 0.5, 0.5, -0.5}, // 6
{ 0.5, -0.5, 0.5}, // 5 right
{ 0.5, 0.5, 0.5}, // 7
{ 0.5, 0.5, -0.5}, // 6 top Second Strip
{-0.5, 0.5, -0.5}, // 2
{ 0.5, 0.5, 0.5}, // 7
{-0.5, 0.5, 0.5}, // 3
{ 0.5, -0.5, 0.5}, // 5 front
{-0.5, -0.5, 0.5}, // 1
{ 0.5, -0.5, -0.5}, // 4 bottom
{-0.5, -0.5, -0.5} // 0
};
static GLfloat TexCoords[8][2] =
{ // x y
{0.0, 1.0}, // 1 left First Strip
{1.0, 1.0}, // 3
{0.0, 0.0}, // 0
{1.0, 0.0}, // 2
{0.0, 1.0}, // 4 back
{1.0, 1.0}, // 6
{0.0, 0.0}, // 5 right
{1.0, 0.0} // 7
};
glEnableVertexAttribArray(VERTEX_ARRAY);
glEnableVertexAttribArray(NORMAL_ARRAY);
glEnableVertexAttribArray(TEXCOORD_ARRAY);
// Set pointers to the arrays
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, 0, Vertices);
glVertexAttribPointer(NORMAL_ARRAY, 3, GL_FLOAT, GL_FALSE, 0, NormalsPerVertex);
glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, 0, TexCoords);
// Draw first triangle strip with texture map
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);
// Draw second triangle strip without texture map
glDisableVertexAttribArray(TEXCOORD_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 8, 8);
glDisableVertexAttribArray(VERTEX_ARRAY);
glDisableVertexAttribArray(NORMAL_ARRAY);
}
I hope this helps.
As the OpenGL ES spec states, all transformations are ignored by glDrawTex (the OES_draw_texture extension) by design. But is there an easy way to draw a texture the way glDrawTex does, while transforming the pixels with a matrix first?
Can't you simply modify the x/y/z arguments of the glDrawTex function to move your texture to the position you want?
But if you want to rotate the texture, then simply draw a textured quad using two triangles. It's very simple, assuming you have OpenGL ES 1.1:
// Interleaved layout: x, y, u, v for each of the 4 corners
// (stride = 4 floats = 16 bytes).
const float v[] = {
    0,   0,    0, 0,
    0,   128,  0, 1,
    128, 0,    1, 0,
    128, 128,  1, 1,
};
glBindTexture(GL_TEXTURE_2D, texId);
glEnable(GL_TEXTURE_2D);
glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(float), &v[2]); // u,v start at index 2
glVertexPointer(2, GL_FLOAT, 4 * sizeof(float), &v[0]);   // x,y start at index 0
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
(I'm assuming you're drawing with an orthographic projection, and that 128 is the size of the texture.)
This way the texture's position can be modified using the modelview matrix. The texture matrix can also be used to modify how the texture is applied to the triangles.
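For example, a sketch of rotating that quad with the modelview matrix (the angle and offsets are arbitrary; 64 is half the 128-pixel quad size):
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(160.0f, 240.0f, 0.0f);  // move the quad's center on screen
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);  // rotate around the z axis
glTranslatef(-64.0f, -64.0f, 0.0f);  // so rotation is about the quad's center
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glPopMatrix();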
I am coding against OpenGL ES 2.0 (WebGL). I am using VBOs to draw primitives. I have a vertex array, a color array, and an array of indices. I have looked at sample code, books, and tutorials, but one thing I don't get: if color is defined per vertex, how does it affect the polygonal surfaces adjacent to those vertices? (I am a newbie to OpenGL (ES).)
I will explain with an example. I have a cube to draw. From what I read in the OpenGL ES book, color is defined as a vertex attribute. In that case, if I want to draw the 6 faces of the cube with 6 different colors, how should I define the colors? The source of my confusion: each vertex is common to 3 faces, so how does defining a color per vertex help? (Or should the color be defined per index?) The fact that we need to subdivide these faces into triangles makes it harder for me to understand how this relationship works. The same confusion goes for edges. Instead of drawing triangles, let's say I want to draw the edges using LINES primitives, each edge with a different color. How am I supposed to define color attributes in that case?
I have seen few working examples. Specifically this tutorial: http://learningwebgl.com/blog/?p=370
I see how the color array is defined in the above example to draw a cube with 6 differently colored faces, but I don't understand why it's defined that way. (Why is each color copied 4 times into unpackedColors, for instance?)
Can someone explain how color attributes work in VBO?
[The link above seems inaccessible, so I will post the relevant code here]
cubeVertexPositionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexPositionBuffer);
vertices = [
// Front face
-1.0, -1.0, 1.0,
1.0, -1.0, 1.0,
1.0, 1.0, 1.0,
-1.0, 1.0, 1.0,
// Back face
-1.0, -1.0, -1.0,
-1.0, 1.0, -1.0,
1.0, 1.0, -1.0,
1.0, -1.0, -1.0,
// Top face
-1.0, 1.0, -1.0,
-1.0, 1.0, 1.0,
1.0, 1.0, 1.0,
1.0, 1.0, -1.0,
// Bottom face
-1.0, -1.0, -1.0,
1.0, -1.0, -1.0,
1.0, -1.0, 1.0,
-1.0, -1.0, 1.0,
// Right face
1.0, -1.0, -1.0,
1.0, 1.0, -1.0,
1.0, 1.0, 1.0,
1.0, -1.0, 1.0,
// Left face
-1.0, -1.0, -1.0,
-1.0, -1.0, 1.0,
-1.0, 1.0, 1.0,
-1.0, 1.0, -1.0,
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
cubeVertexPositionBuffer.itemSize = 3;
cubeVertexPositionBuffer.numItems = 24;
cubeVertexColorBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexColorBuffer);
var colors = [
[1.0, 0.0, 0.0, 1.0], // Front face
[1.0, 1.0, 0.0, 1.0], // Back face
[0.0, 1.0, 0.0, 1.0], // Top face
[1.0, 0.5, 0.5, 1.0], // Bottom face
[1.0, 0.0, 1.0, 1.0], // Right face
[0.0, 0.0, 1.0, 1.0], // Left face
];
var unpackedColors = []
for (var i in colors) {
var color = colors[i];
for (var j=0; j < 4; j++) {
unpackedColors = unpackedColors.concat(color);
}
}
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(unpackedColors), gl.STATIC_DRAW);
cubeVertexColorBuffer.itemSize = 4;
cubeVertexColorBuffer.numItems = 24;
cubeVertexIndexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer);
var cubeVertexIndices = [
0, 1, 2, 0, 2, 3, // Front face
4, 5, 6, 4, 6, 7, // Back face
8, 9, 10, 8, 10, 11, // Top face
12, 13, 14, 12, 14, 15, // Bottom face
16, 17, 18, 16, 18, 19, // Right face
20, 21, 22, 20, 22, 23 // Left face
]
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(cubeVertexIndices), gl.STATIC_DRAW);
cubeVertexIndexBuffer.itemSize = 1;
cubeVertexIndexBuffer.numItems = 36;
The way I like to look at it is that each vertex is not a point in space but rather a bundle of attributes. These generally (but not always) include its location and may include its colour, texture coordinates, etc., etc., etc. A triangle (or line, or other primitive) is defined by specifying a set of vertices, then generating values for each attribute at each pixel by linearly interpolating the per-vertex values.
As Liam says, and as you've realised in your comment, this means that if you want to have a point in space that is used by a vertex for multiple primitives -- for example, the corner of a cube -- with other non-location attributes varying on a per-primitive basis, you need a separate vertex for each combination of attributes.
This is wasteful of memory to some degree -- but the complexity involved in doing it any other way would make things worse, and would require the graphics hardware to do a lot more work unpacking and repacking data. To me, it feels like the waste is comparable to the waste we get by using 32-bit RGBA values for each pixel in our video memory instead of keeping a "palette" lookup table of every colour we want to use and then just storing an index into that per-pixel (which is, of course, what we used to do when RAM was more expensive).
If the color of a set of polygons is the same then a vertex shared by all of the polygons, along with its color, can be defined once and shared by the polygons (using an index).
If the color of the polygons is different then even though the position of a vertex may be common, the color is not, and therefore the vertex as a whole cannot be shared. You will need to define the vertex for each polygon.
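To make the edge case from the question concrete, here is a small C-style OpenGL ES 2.0 sketch (posLoc and colorLoc are assumed attribute locations from your linked shader program; the same array layout carries over to WebGL buffers drawn with gl.LINES):
/* Two cube edges meeting at corner (1,-1,-1): the corner position is
   duplicated because the two edges have different colors. */
static const GLfloat edgeVerts[] = {
    -1.f, -1.f, -1.f,    1.f, -1.f, -1.f,   /* edge A */
     1.f, -1.f, -1.f,    1.f,  1.f, -1.f,   /* edge B (shares a corner) */
};
static const GLfloat edgeColors[] = {
    1.f, 0.f, 0.f, 1.f,   1.f, 0.f, 0.f, 1.f,  /* edge A: red   */
    0.f, 1.f, 0.f, 1.f,   0.f, 1.f, 0.f, 1.f,  /* edge B: green */
};
glVertexAttribPointer(posLoc,   3, GL_FLOAT, GL_FALSE, 0, edgeVerts);
glVertexAttribPointer(colorLoc, 4, GL_FLOAT, GL_FALSE, 0, edgeColors);
glEnableVertexAttribArray(posLoc);
glEnableVertexAttribArray(colorLoc);
glDrawArrays(GL_LINES, 0, 4);  /* 4 vertices -> 2 independently colored lines */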