How to set the OpenGL projection matrix to get the projection described below? - opengl-es

I just want to mimic a perspective projection without using the z coordinate (i.e. using only a 2D environment).
The x axis must shrink as the y axis increases towards the top.
I have the following code ready:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, size.width, 0, size.height, -1024 * CC_CONTENT_SCALE_FACTOR(), 1024 * CC_CONTENT_SCALE_FACTOR());
GLfloat proj[16] = { ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,? };
glMultMatrixf(proj);
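One way to fill in the question marks (a hedged sketch, not from the original thread; the strength k and the centering translation are my assumptions): write k*y into the output w, so the perspective divide shrinks x as y grows.
// Column-major: element [7] adds k*y to the vertex's w, so after the
// perspective divide x (and y) are scaled by 1 / (1 + k*y).
const GLfloat k = 0.5f / size.height; // foreshortening strength (assumed)
GLfloat proj[16] = {
    1, 0, 0, 0,
    0, 1, 0, k, // w += k * y
    0, 0, 1, 0,
    0, 0, 0, 1
};
// Wrapping the multiply in a translate pair makes the shrinking converge
// on the screen's vertical center line instead of on x = 0:
glTranslatef(size.width * 0.5f, 0, 0);
glMultMatrixf(proj);
glTranslatef(-size.width * 0.5f, 0, 0);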

Related

Example of OpenGL game coordinates system - done right?

It is no surprise that the default OpenGL screen coordinate system is quite hard to work with: x-axis from -1.0 to 1.0, y-axis from -1.0 to 1.0, and (0.0, 0.0) at the center of the screen.
So I decided to write a wrapper for local game coordinates, with the following main ideas:
Screen coordinates will be 0..100.0 (x-axis) and 0..100.0 (y-axis), with (0.0, 0.0) in the bottom-left corner of the screen.
There are different screens, with different aspect ratios.
If we draw a quad, it must stay a quad, not become a squashed rectangle.
By a quad I mean:
quad_vert[0].x = -0.5f;
quad_vert[0].y = -0.5f;
quad_vert[0].z = 0.0f;
quad_vert[1].x = 0.5f;
quad_vert[1].y = -0.5f;
quad_vert[1].z = 0.0f;
quad_vert[2].x = -0.5f;
quad_vert[2].y = 0.5f;
quad_vert[2].z = 0.0f;
quad_vert[3].x = 0.5f;
quad_vert[3].y = 0.5f;
quad_vert[3].z = 0.0f;
I will use glm::ortho and glm::mat4 to achieve this:
#define LOC_SCR_SIZE 100.0f
typedef struct coords_manager
{
    float SCREEN_ASPECT;
    mat4 ORTHO_MATRIX; // glm 4x4 matrix
} coords_manager;
glViewport(0, 0, screen_width, screen_height);
coords_manager CM;
CM.SCREEN_ASPECT = (float) screen_width / screen_height;
For example, our aspect ratio will be 1.7.
CM.ORTHO_MATRIX = ortho(0.0f, LOC_SCR_SIZE, 0.0f, LOC_SCR_SIZE);
Now the bottom left is (0, 0) and the top right is (100.0, 100.0).
And it works, well, mostly: now we can translate our quad to (25.0, 25.0), scale it to (50.0, 50.0), and it will sit in the bottom-left corner at 50% of the screen size.
But the problem is that it is no longer a quad; it looks like a rectangle, because our screen width does not equal our screen height.
So we use our screen aspect:
CM.ORTHO_MATRIX = ortho(0.0f, LOC_SCR_SIZE * CM.SCREEN_ASPECT, 0.0f, LOC_SCR_SIZE);
Now we get the right shape, but there is another problem: if we position the quad at (50, 25), it ends up left of the screen center, because our local system's x-axis is no longer 0..100; it is now 0..170 (we multiplied by our aspect ratio of 1.7). So we use the following function before setting our quad's translation:
void loc_pos_to_gl_pos(vec2* pos)
{
    pos->x = pos->x * CM.SCREEN_ASPECT;
}
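For example (my own usage sketch, assuming the 1.7 aspect ratio from above):
vec2 pos(50.0f, 25.0f);  // logical position: horizontal center
loc_pos_to_gl_pos(&pos); // pos.x becomes 85.0 when SCREEN_ASPECT is 1.7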
And voilà, we get the right shape at the right place.
But the question is: am I doing this right?
The OpenGL screen coordinate system is quite hard to work with: x-axis from -1.0 to 1.0, y-axis from -1.0 to 1.0, and (0.0, 0.0) at the center of the screen.
Yes, but you will never use it directly. There is almost always a projection matrix that transforms your coordinates into the right space.
it ends up left of the screen center, because our local system's x-axis is no longer 0..100
That is why OpenGL maps the NDC origin (0, 0, 0) to the screen center: if you draw a quad with coordinates placed symmetrically around the origin, it stays in the center.
But the question is: am I doing this right?
Depends on what you want to achieve.
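For instance, one alternative (a sketch of my own, not from the original answer) is to widen the ortho volume symmetrically instead of rescaling positions, so the logical space stays 0..100 with (50, 50) at the screen center and quads stay square without per-object correction:
// Let the x range overflow 0..100 equally on both sides.
float extra = LOC_SCR_SIZE * (CM.SCREEN_ASPECT - 1.0f) * 0.5f;
CM.ORTHO_MATRIX = ortho(-extra, LOC_SCR_SIZE + extra, 0.0f, LOC_SCR_SIZE);
With this mapping, loc_pos_to_gl_pos() is no longer needed: a quad translated to (50, 25) really is horizontally centered.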

How to position a textured quad in screen coordinates?

I am experimenting with different matrices, studying their effect on a textured quad. So far I have implemented scaling, rotation, and translation matrices fairly easily by applying the following method to my position vectors:
for (int a = 0; a < noOfVertices; a++)
{
    myVectorPositions[a] = SlimDX.Vector3.TransformCoordinate(myVectorPositions[a], myPerspectiveMatrix);
}
However, what I want to do is position my vectors using world-space coordinates, not object-space ones.
At the moment my position vectors are declared as follows:
myVectorPositions[0] = new Vector3(-0.1f, 0.1f, 0.5f);
myVectorPositions[1] = new Vector3(0.1f, 0.1f, 0.5f);
myVectorPositions[2] = new Vector3(-0.1f, -0.1f, 0.5f);
myVectorPositions[3] = new Vector3(0.1f, -0.1f, 0.5f);
On the other hand (and as part of learning about matrices) I have read that I need to apply a matrix to get to screen coordinates. I've been looking through the SlimDX API docs and can't seem to pin down the one I should be using.
In any case, hopefully the above makes sense and what I am trying to achieve is clear. I am aiming for a simple 1024 x 768 window as my application area, and I want to position my textured quad at (10, 10). How do I go about this? I am most confused right now.
I am not familiar with SlimDX, but in native DirectX, if you want to draw a quad in screen coordinates, you should define the vertex format as pre-transformed (D3DFVF_XYZRHW); that is, you specify the screen coordinates directly instead of using the D3D transform engine to transform your vertices. The vertex format is defined as follows:
#define SCREEN_SPACE_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
and you can define your vertices like this:
ScreenVertex Vertices[] =
{
    // Triangle 1
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
    { 350.0f, 150.0f, 0, 1.0f, 0xff00ff00, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
    // Triangle 2
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
    { 150.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
};
By default, screen space in 3D systems goes from -1 to 1 (where (-1, -1) is the bottom-left corner and (1, 1) the top-right).
To position by pixels, you need to convert the pixel values into this space. For example, pixel (10, 30) on a 1024x768 screen is:
position.x = 10.0f * (1.0f / 1024.0f); // maps to 0/1
position.x *= 2.0f; //maps to 0/2
position.x -= 1.0f; // Maps to -1/1
Now for y you do
position.y = 30.0f * (1.0f / 768.0f); // maps to 0/1
position.y = 1.0f - position.y; //Inverts y
position.y *= 2.0f; //maps to 0/2
position.y -= 1.0f; // Maps to -1/1
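Folded into one helper, the same mapping looks like this (a sketch; the function name is my own):
// Convert a pixel coordinate (origin top-left, y down) to NDC (-1..1, y up).
void pixelToNDC(float px, float py, float screenW, float screenH,
                float* ndcX, float* ndcY)
{
    *ndcX = px / screenW * 2.0f - 1.0f;          // 0..W  ->  -1..1
    *ndcY = (1.0f - py / screenH) * 2.0f - 1.0f; // 0..H  ->   1..-1 (inverted)
}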
Also, if you want to apply transforms to your quads, it is better to send the transformation to the shader (and do the vector transformation in the vertex shader) rather than doing the multiplications on the vertices on the CPU, since then you will not need to update your vertex buffer every time.

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0), and the box is rendered correctly.
When the position of the box (not the camera) changes, it is still rendered correctly; for example, the box at (1, 0, 0) with the camera still at (0, 0, -2) looks fine.
However, as soon as the camera moves left or right, the box should shift in the opposite direction on screen, but it looks twisted instead; for example, with the camera at (1, 0, -2) looking at (1, 0, 100) and the box still at (0, 0, 0).
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix; // A matrix to store the rotation information
D3DXMATRIX scalingMatrix; // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Make the scene being centered on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;
// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());
// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));
D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
&cameraPosition,
&lookAtPosition,
&upDirection);
RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);
// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
                           (FLOAT)(D3DXToRadian(45)), // Vertical field of view (fovy)
                           width / height,            // Aspect ratio
                           1.0f,                      // Near view-plane
                           100.0f);                   // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
    VOut output;
    output.color = color;
    output.texcoord = texcoord;
    output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
    return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are uploading finalMatrix as a row-major matrix, but HLSL expects column-major by default. The solution is to transpose the matrix with D3DXMatrixTranspose before updating the constants, or to declare it row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
    row_major float4x4 finalMatrix;
}
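For reference, the transpose route would look something like this (a sketch reusing the calls from the question):
D3DXMATRIX transposed;
D3DXMatrixTranspose(&transposed, &finalMatrix); // row-major -> column-major
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &transposed, 0, 0);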

OpenGL: align texture coordinates to world, not to face

What I currently have: generation of texture coordinates given the width and height of a rectangle, along with a scale factor for the texture. This is working fine:
vertices = new float[] {
    // x, y, u, v
    0, 0, 0, this.height / (this.texture.height * this.texScaleHeight),
    this.width, 0, this.width / (this.texture.width * this.texScaleWidth), this.height / (this.texture.height * this.texScaleHeight),
    this.width, this.height, this.width / (this.texture.width * this.texScaleWidth), 0,
    0, this.height, 0, 0 };
What I want now is for rectangles at different positions (i.e. next to each other) to have a seamless continuation of the texture. I tried the following, but with no good result:
vertices = new float[] {
0, 0, -this.getPosition().x * this.texScaleWidth, this.height / (this.texture.height * this.texScaleHeight) -this.getPosition().y * this.texScaleHeight,
this.width, 0, this.width / (this.texture.width * this.texScaleWidth) -this.getPosition().x * this.texScaleWidth, height / (this.texture.height * this.texScaleHeight)-this.getPosition().y * this.texScaleHeight,
this.width, this.height, this.width / (this.texture.width * this.texScaleWidth) -this.getPosition().x * this.texScaleWidth, -this.getPosition().y * this.texScaleHeight,
0, this.height, -this.getPosition().x * this.texScaleWidth, -this.getPosition().y * this.texScaleHeight };
How can I achieve textures that are "aligned to the world" rather than to the rectangle they sit on?
This technique is potentially way too general, but here it goes: You could project the texture coordinates onto the geometry.
The overall workflow is just the same as handling the geometry itself, but here every world point is assigned a texture coordinate. Push every point of the geometry through another MVP matrix (representing the projector) and take the resulting (x, y) as the texture coordinate of that very point (assuming an orthographic projection); a perspective projector requires a division by w afterwards.
I hope you get the idea. Searching for it, I failed to find a decent description or tutorial. Shadow mapping uses the same technique, but its goal is to determine whether a point is shadowed or not; here, the goal is to find texture coordinates for a general projector.
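A minimal sketch of the orthographic case (my own illustration; the glm-based helper and its names are assumptions, not from the original answer):
#include <glm/glm.hpp>

// Project a world-space point through an orthographic "projector"
// view-projection matrix, then remap NDC (-1..1) to UV space (0..1).
glm::vec2 projectToUV(const glm::vec3& worldPos, const glm::mat4& projectorVP)
{
    glm::vec4 p = projectorVP * glm::vec4(worldPos, 1.0f);
    // Orthographic: p.w stays 1; a perspective projector would need p /= p.w here.
    return glm::vec2(p.x, p.y) * 0.5f + 0.5f;
}
Because every rectangle feeds the same world-space positions into the same projector, adjacent rectangles sample the texture continuously.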

Opengl-es 1.x add shadow?

Is there an easy way to add shadows in OpenGL ES 1.x, or is that only possible in 2.0?
For projecting a shadow onto a plane there is a simple way (not very efficient, but simple).
This function is not mine; I forget where I found it. What it does is create a projection matrix that maps everything you draw onto a single plane.
static inline void glShadowProjection(float *l, float *e, float *n)
{
    float d, c;
    float mat[16];

    // d = dot(n, l) and c = dot(e, n) - d (the "c" and "d" of the original tutorial)
    d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    c = e[0]*n[0] + e[1]*n[1] + e[2]*n[2] - d;

    // Build the matrix. OpenGL uses column-major ordering.
    mat[0]  = l[0]*n[0] + c;
    mat[4]  = n[1]*l[0];
    mat[8]  = n[2]*l[0];
    mat[12] = -l[0]*c - l[0]*d;

    mat[1]  = n[0]*l[1];
    mat[5]  = l[1]*n[1] + c;
    mat[9]  = n[2]*l[1];
    mat[13] = -l[1]*c - l[1]*d;

    mat[2]  = n[0]*l[2];
    mat[6]  = n[1]*l[2];
    mat[10] = l[2]*n[2] + c;
    mat[14] = -l[2]*c - l[2]*d;

    mat[3]  = n[0];
    mat[7]  = n[1];
    mat[11] = n[2];
    mat[15] = -d;

    // Finally multiply it onto the current matrix stack.
    glMultMatrixf(mat);
}
Use it like this:
Draw your object.
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts); // Machado
Supply it with a light-source position, a plane for the shadow to be projected onto (given by a point on it), and the plane's normal:
float lightPosition[] = {383.0, 461.0, 500.0, 0.0};
float n[] = { 0.0, 0.0, -1.0 };         // Normal vector of the plane
float e[] = { 0.0, 0.0, beltOrigin+1 }; // A point on the plane
glShadowProjection(lightPosition,e,n);
OK, the shadow matrix is now applied.
Change the drawing color to something that fits.
glColor4f(0.3, 0.3, 0.3, 0.9);
Draw your object again.
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts); // Machado
This is why the technique is not efficient: the more complex the object, the more triangles you spend just on its shadow.
Also remember that every transformation you applied to the unshadowed object needs to be applied again after the shadow matrix is in place.
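Putting the steps together (a sketch assembled from the snippets above; the glPushMatrix/glPopMatrix pair is my addition so the shadow matrix does not leak into later drawing):
// 1. Draw the object normally.
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts);
// 2. Apply the shadow matrix and draw the same object again, flattened onto the plane.
glPushMatrix();
glShadowProjection(lightPosition, e, n);
glColor4f(0.3f, 0.3f, 0.3f, 0.9f);
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts);
glPopMatrix();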
For more complex requirements the subject is a bit broad, and depends a lot on your scene and its complexity.
Projective texture-mapped shadows, as they were done with OpenGL 1.2 without shaders, are also possible; look for older shadow-mapping tutorials written between 1999 and 2002.
