DirectX 9 - drawing a 2D sprite in its exact dimensions - Windows

I'm trying to build a simple 2D game using DirectX 9, and I want to be able to use sprite dimensions and coordinates with no scaling applied.
The book I'm following ("Introduction to 3D Game Programming with DirectX 9.0c" by Frank Luna) shows a trick that uses Direct3D's sprite functions to render graphics in 2D, but the book's code still sets up a camera with D3DXMatrixLookAtLH and D3DXMatrixPerspectiveFovLH, so the sprite images get scaled in perspective. How do I set up the view and projection matrices so that sprites are rendered at their original dimensions and X-Y coordinates address actual pixel locations within the window?
UPDATE
Although this might not be the ideal solution, I did come up with a workaround. I realized that if I set up the projection matrix with a 90-degree field of view and the near plane at z=0, then all I have to do is look at the origin (0, 0, 0) with D3DXMatrixLookAtLH and step the camera back by half of the screen width (the height of an isosceles right triangle is half of its base).
So for my client area being 400 x 400, the following settings worked for me:
// get client rect
RECT R;
GetClientRect(hWnd, &R);
float width = (float)R.right;
float height = (float)R.bottom;
// step back by width/2 = 200, corrected for aspect ratio (see "UPDATE 2" below), and look at the origin
D3DXMATRIX V;
D3DXVECTOR3 pos(0.0f, 0.0f, (-width * 0.5f) / (width / height));
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&V, &pos, &target, &up);
d3dDevice->SetTransform(D3DTS_VIEW, &V);
// PI x 0.5 -> 90 degrees, set the near plane to z=0
D3DXMATRIX P;
D3DXMatrixPerspectiveFovLH(&P, D3DX_PI * 0.5f, width/height, 0.0f, 5000.0f);
d3dDevice->SetTransform(D3DTS_PROJECTION, &P);
Turning off all the texture filters (or setting them to D3DTEXF_POINT) gives the most pixel-accurate result.
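For reference, a minimal sketch of disabling the filters on sampler stage 0 (assuming the sprites draw through stage 0):
// Point sampling: no blending between texels when scaling.
d3dDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
d3dDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
d3dDevice->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE); // no mipmapping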
Another important thing to note: CreateWindowEx() with a requested size of 400 x 400 returned a client area of something like 387 x 362, so I had to check with GetClientRect(), calculate the difference, and readjust the window size with SetWindowPos() after the initial creation.
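A sketch of that readjustment for a 400 x 400 target client area (AdjustWindowRectEx() before creation is the more common alternative):
RECT client, window;
GetClientRect(hWnd, &client);
GetWindowRect(hWnd, &window);
// Grow the outer window by however much the client area fell short.
int dx = 400 - (client.right - client.left);
int dy = 400 - (client.bottom - client.top);
SetWindowPos(hWnd, NULL, 0, 0,
    (window.right - window.left) + dx,
    (window.bottom - window.top) + dy,
    SWP_NOMOVE | SWP_NOZORDER);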
Taking the steps above gives the result I wanted: the original bitmap is rendered in the app with no scaling/stretching applied... finally!
UPDATE 2
I hadn't tested the above method with an aspect ratio other than 1:1, so I adjusted the code: the amount you step back for your camera position should be window_width * 0.5 / aspect_ratio (where aspect_ratio is width/height).

The DirectX Tool Kit SpriteBatch class is designed to do exactly what you describe. When drawing with Direct3D, clip-space coordinates run from (-1,-1) to (1,1), with (-1,-1) in the lower-left corner.
The following sets up a matrix that lets you specify positions in screen coordinates, with (0,0) in the upper-left corner:
// Compute the matrix.
float xScale = (mViewPort.Width > 0) ? 2.0f / mViewPort.Width : 0.0f;
float yScale = (mViewPort.Height > 0) ? 2.0f / mViewPort.Height : 0.0f;

switch (rotation)
{
case DXGI_MODE_ROTATION_ROTATE90:
    return XMMATRIX
    (
        0, -yScale, 0, 0,
        -xScale, 0, 0, 0,
        0, 0, 1, 0,
        1, 1, 0, 1
    );

case DXGI_MODE_ROTATION_ROTATE270:
    return XMMATRIX
    (
        0, yScale, 0, 0,
        xScale, 0, 0, 0,
        0, 0, 1, 0,
        -1, -1, 0, 1
    );

case DXGI_MODE_ROTATION_ROTATE180:
    return XMMATRIX
    (
        -xScale, 0, 0, 0,
        0, yScale, 0, 0,
        0, 0, 1, 0,
        1, -1, 0, 1
    );

default:
    return XMMATRIX
    (
        xScale, 0, 0, 0,
        0, -yScale, 0, 0,
        0, 0, 1, 0,
        -1, 1, 0, 1
    );
}
In Direct3D 9 pixel centers were defined a little differently than in Direct3D 10/11/12, so the typical solution in the legacy API was to apply a half-pixel (0.5, 0.5) offset to all the positions. You don't need to do this with Direct3D 10/11/12.
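As an illustration only (not from the original answer): in a D3D9 setup like the one above, where world units map 1:1 to pixels, the offset can be folded in as a world translation:
// D3D9 only: nudge everything by half a pixel so texel centers
// line up with pixel centers; skip this on D3D10/11/12.
D3DXMATRIX halfPixel;
D3DXMatrixTranslation(&halfPixel, -0.5f, -0.5f, 0.0f);
d3dDevice->SetTransform(D3DTS_WORLD, &halfPixel);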

Related

OpenSceneGraph osg::Quat: shape not rotating

I have a small function to create a new instance of a WorldObject.
I want to use osg::ref_ptr<osg::PositionAttitudeTransform> for translation and rotation, but there is a problem I can't figure out.
Setting the position with a Vec3 works very well, but the Quat built with makeRotate() just does nothing.
Here is the code:
osg::ref_ptr<osg::PositionAttitudeTransform> getWorldObjectClone(const char* name, osg::Vec3 position = osg::Vec3(0, 0, 0), osg::Vec3 rotation = osg::Vec3(0, 0, 0))
{
    osg::ref_ptr<osg::PositionAttitudeTransform> tmp = new osg::PositionAttitudeTransform;
    osg::Quat q(0, osg::Vec3(0, 0, 0));
    tmp = dynamic_cast<osg::PositionAttitudeTransform*>(WorldObjects[name]->clone(osg::CopyOp::DEEP_COPY_ALL));
    tmp->setPosition(position);
    q.makeRotate(rotation.x(), 1, 0, 0);
    q.makeRotate(rotation.y(), 0, 1, 0);
    q.makeRotate(rotation.z(), 0, 0, 1);
    tmp->setAttitude(q);
    return tmp;
}
I tried rotation = {90,0,0} (degrees) and rotation = {1,0,0} (radians), but both have no effect. Is there a mistake in how the code uses the Quat?
The rotation method you are using works with radians.
If you want to rotate 90 degrees around the X axis, you need to call:
q.makeRotate(osg::PI_2, 1, 0, 0 );
// or the equivalent
q.makeRotate(osg::PI_2, osg::X_AXIS);
Keep in mind that every call to makeRotate will reset the full quaternion to the given rotation. If you're trying to concatenate several rotations, you have to multiply the corresponding quaternions.
For instance:
osg::Quat xRot, yRot;
// rotate 90 degrees around x
xRot.makeRotate(osg::PI_2, osg::X_AXIS);
// rotate 90 degrees around y
yRot.makeRotate(osg::PI_2, osg::Y_AXIS);
// concatenate the 2 into a resulting quat
osg::Quat fullRot = xRot * yRot;
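Applied to the helper function from the question, a minimal sketch of the fix (angles assumed to be in radians) builds one quaternion per axis and concatenates them instead of calling makeRotate() three times:
osg::Quat xRot(rotation.x(), osg::X_AXIS); // rotation about X
osg::Quat yRot(rotation.y(), osg::Y_AXIS); // rotation about Y
osg::Quat zRot(rotation.z(), osg::Z_AXIS); // rotation about Z
tmp->setAttitude(xRot * yRot * zRot);      // combined attitude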

Transform matrices with Matrix.CreatePerspectiveOffCenter in XNA: vanishing point to center

I'm trying to get a perspective view where the vanishing point sits at the center of the screen.
In essence I'm doing a 2D game with some 3D graphics, so I switched from Matrix.CreateOrthographicOffCenter to Matrix.CreatePerspectiveOffCenter.
I have drawn a primitive, and by decreasing its z-index it goes further away, but it always vanishes toward (0, 0) (the top-left), while the vanishing point should be the center.
My transform settings now look like this ((640, 360) is the center of the screen):
basicEffect.Projection = Matrix.CreatePerspectiveOffCenter(0, graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height, 0, 1, 10);
basicEffect.View = Matrix.Identity * Matrix.CreateLookAt(new Vector3(640, 360, 1), new Vector3(640, 360, 0), new Vector3(0, 1, 0));
basicEffect.World = Matrix.CreateTranslation(0, 0, 0);
I can't get the vanishing point to the center of the screen. I managed to (sort of) do it with CreatePerspective view but I want to keep using CreatePerspectiveOffCenter because I can translate normal pixel positions easily to the 3D space. What am I missing?
In the end I used the following. If you're looking for a solution that creates a 3D view with a '2D feel', this might come in handy. With these transforms, a z-index of 0 exactly matches the screen's width and height, and the vanishing point is at the center of the screen.
basicEffect.Projection = Matrix.CreatePerspectiveFieldOfView((float)Math.PI / 2f, 1, 1f / 1000, 1000f);
basicEffect.View = Matrix.CreateLookAt(new Vector3(0, 0, 1f), new Vector3(0, 0, 0), new Vector3(0, 1, 0));
basicEffect.World = Matrix.CreateTranslation(0, 0, 0);

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0), and the box is rendered correctly.
When the position of the box (not the camera) changes, it is still rendered correctly; for example, the box at (1, 0, 0) with the camera still at (0, 0, -2) looks right.
However, as soon as the camera moves left or right, the box should move in the opposite direction on screen, but it looks twisted instead; for example, when the camera is at (1, 0, -2) looking at (1, 0, 100) with the box still at (0, 0, 0).
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix;    // A matrix to store the rotation information
D3DXMATRIX scalingMatrix;     // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Center the scene on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;

// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());

// Compute the lookAt position.
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));
D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);

D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
                   &cameraPosition,
                   &lookAtPosition,
                   &upDirection);

RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);

// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
                           (FLOAT)(D3DXToRadian(45)), // Vertical field of view
                           width / height,            // Aspect ratio
                           1.0f,                      // Near view-plane
                           100.0f);                   // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
    VOut output;
    output.color = color;
    output.texcoord = texcoord;
    output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
    return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are filling finalMatrix as a row-major matrix, but HLSL by default expects column-major packing. The solution is to transpose the matrix with D3DXMatrixTranspose before updating the constants, or to declare the matrix row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
    row_major float4x4 finalMatrix;
}
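Or, taking the transpose route, a minimal sketch reusing the names from the question:
// Transpose on the CPU so HLSL's default column_major packing
// reads the row-major D3DX matrix correctly.
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
D3DXMATRIX transposed;
D3DXMatrixTranspose(&transposed, &finalMatrix);
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &transposed, 0, 0);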

Vertex Coordinates in OpenGL

How come vertex coordinates in OpenGL range from -1 to 1? Is there a way to specify vertex coordinates using the same coordinates as the screen?
So instead of:
float triangleCoords[] = {
    // X, Y, Z
    -0.5f, -0.25f, 0,
    0.5f, -0.25f, 0,
    0.0f, 0.559016994f, 0
};
I could have
float triangleCoords[] = {
    // X, Y, Z
    80, 60, 0,
    240, 60, 0,
    0, 375, 0
};
Just seems a bit much that I need to get out a calculator just so I can hard code in some vertex coordinates. It's not like I'm gonna be trying to lay things out and think "yeah, that should go right around (0, 0.559016994), that'll look perfect..."
This is what projection matrices are for: they project from your coordinate frame to the normalized coordinates.
You're free to set up vertices in pixels if you want, but then you have to pair them with a proper projection matrix that tells OpenGL how to transform those coordinates to the screen.
Given your example:
float triangleCoords[] = {
    // X, Y, Z
    80, 60, 0,
    240, 60, 0,
    0, 375, 0
};
If you pair this with an orthographic projection matrix (similar to one generated by glOrtho(0, width, 0, height, -1, 1)), then your triangle will draw at the pixel coordinates described.
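For example, a minimal fixed-function sketch (the 320 x 480 window size here is an assumption; substitute your own):
// Map pixel coordinates straight to the screen, origin at the bottom-left.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 320.0, 0.0, 480.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();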

OpenGL quads smaller than expected

Running on iPad.
Mapping a texture that is 256x256 onto a quad. I'm trying to render it exactly the same size as the actual image. The quad looks correct (shape is right, texture mapped correctly), but it is only ~75% of the size of the actual .png.
Not sure why.
The code is characterized as follows (excerpts below):
Screen is 768x1024. Window is 768x1024 as well.
glViewport(0, 0, 768, 1024); // aspect ratio 1:1.333
glOrthof(-0.5f, 0.5f, -0.666f, 0.666f, -1.0f, 1.0f); // matching aspect ratio with 0,0 centered
// Sets up an array of values to use as the sprite vertices.
// 0.25 of 1024 is 256 pixels, so the quad (centered on 0,0) spans
// -0.125, -0.125 to 0.125, 0.125 (bottom-left corner to upper-right corner).
GLfloat spriteVertices[] = {
    -0.125f, -0.125f,
    0.125f, -0.125f,
    -0.125f, 0.125f,
    0.125f, 0.125f,
};
// Sets up an array of values for the texture coordinates.
const GLshort spriteTexcoords[] = {
    0, 0,
    1, 0,
    0, 1,
    1, 1,
};
followed by the appropriate calls to:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
glTexCoordPointer(2, GL_SHORT, 0, spriteTexcoords);
then
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Why is my sprite smaller than 256x256 when rendered?
Your output is approximately 192x192 because your quad is the wrong size. It's 0.25x0.25, and the "unit length" direction is X, which spans 768 pixels, so the quad renders at 0.25 * 768 = 192 pixels. If you changed your glOrthof() call so that top/bottom were -0.5 and +0.5 (with the appropriate correction to X to keep the aspect ratio), it would work.
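A sketch of that fix, keeping the quad at ±0.125 and matching the 768x1024 viewport's 0.75:1 aspect ratio (my numbers, not from the original answer):
// 0.25 units now cover 0.25 / 0.75 * 768 = 256 pixels horizontally
// and 0.25 / 1.0 * 1024 = 256 pixels vertically.
glOrthof(-0.375f, 0.375f, -0.5f, 0.5f, -1.0f, 1.0f);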
