Objects look weird with first-person camera in DirectX - matrix

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0), and it is rendered correctly:
When the position of the box (not the camera) changes, it is still rendered correctly. For example, here the box is at (1, 0, 0) and the camera is still at (0, 0, -2):
However, as soon as the camera moves left or right, the box should move in the opposite direction, but it looks twisted instead. Here the camera is at (1, 0, -2) looking at (1, 0, 100), with the box still at (0, 0, 0):
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix; // A matrix to store the rotation information
D3DXMATRIX scalingMatrix; // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Center the scene on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;
// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());
// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));
D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
                   &cameraPosition,
                   &lookAtPosition,
                   &upDirection);
RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);
// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
                           (FLOAT)(D3DXToRadian(45)), // Vertical field of view (fovy)
                           width / height,            // Aspect ratio
                           1.0f,                      // Near view-plane
                           100.0f);                   // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
    VOut output;
    output.color = color;
    output.texcoord = texcoord;
    output.position = mul(position, finalMatrix); // Transform the vertex to clip space
    return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!

The problem is that you are uploading finalMatrix as a row-major matrix, but HLSL constant buffers default to column-major layout. The solution is either to transpose the matrix with D3DXMatrixTranspose before updating the constant buffer, or to declare the matrix as row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
    row_major float4x4 finalMatrix;
}
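For reference, a minimal sketch of the transpose route, reusing the names from the question (m_worldTransformationMatrix, mp_deviceContext, mp_constantBuffer):
// Combine the matrices as before.
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;

// Transpose so the row-major D3DX matrix matches the cbuffer's default
// column-major layout, then upload it.
D3DXMATRIX transposedMatrix;
D3DXMatrixTranspose(&transposedMatrix, &finalMatrix);
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, nullptr, &transposedMatrix, 0, 0);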

Related

Let PShapes in an array rotate on its own axis in Processing

I have code that reads each pixel of an image and redraws it with different shapes. All shapes fade in using a sin() wave.
Now I want to rotate every "pixel shape" around its own center (shapeMode(CENTER)) while it fades in, but getting the translate() call right inside this nested loop is giving me a headache.
Here is the code so far:
// (declarations of img, shapes, and shapeCount elided)
void setup() {
  size(1080, 1350);
  shapeMode(CENTER);
  img = loadImage("loremipsum.png");
  …
}

void draw() {
  background(123);
  for (int gridX = 0; gridX < img.width; gridX++) {
    for (int gridY = 0; gridY < img.height; gridY++) {
      // grid position + tile size
      float tileWidth = width / (float)img.width;
      float tileHeight = height / (float)img.height;
      float posX = tileWidth*gridX;
      float posY = tileHeight*gridY;
      // get current color
      color c = img.pixels[gridY*img.width+gridX];
      // greyscale conversion
      int greyscale = round(red(c)*0.222+green(c)*0.707+blue(c)*0.071);
      int gradientToIndex = round(map(greyscale, 0, 255, 0, shapeCount-1));
      // FADE IN
      float wave = map(sin(radians(frameCount*4)), -1, 1, 0, 2);
      //translate(HEADACHE);
      rotate(radians(wave));
      shape(shapes[gradientToIndex], posX, posY, tileWidth * wave, tileHeight * wave);
    }
  }
}
I have tried many calculations, but they just make my sketch explode.
One that worked in another sketch, where I tried basically the same thing but in a single loop, was (written equivalently):
translate(posX + tileWidth/2, posY + tileHeight/2);
I think I just don't get the matrix right. How can I translate each shape to its intended place?
Thank you very much @Rabbid76. At first I just pasted in your idea and it went crazy; then I added pushMatrix() and popMatrix(), and it turned out your translate() code was in fact right!
Then I had to change the x and y location where every shape is drawn to (0, 0), and that was it! Now it works!
See the code here:
float wave = map(sin(radians(frameCount*4)), -1, 1, 0, 2);

pushMatrix();
translate(posX + tileWidth/2, posY + tileHeight/2);
rotate(radians(wave*180));
shape(shapes[gradientToIndex], 0, 0, tileWidth*wave, tileHeight*wave);
popMatrix();
PERFECT! Thank you so much!
rotate() defines a rotation matrix and multiplies the current matrix by it, so the rotation is always around the origin (0, 0).
You have to center the shape at (0, 0), rotate it, and then move the rotated shape to the desired position with translate().
Since translate() and rotate() multiply the current matrix by a new matrix, you must save and restore the matrix with pushMatrix() and popMatrix() respectively.
The center of a tile is (posX + tileWidth/2, posY + tileHeight/2):
pushMatrix();
translate(posX + tileWidth/2, posY + tileHeight/2);
rotate(radians(wave));
shape(shapes[gradientToIndex],
      -tileWidth*wave/2, -tileHeight*wave/2,
      tileWidth * wave, tileHeight * wave);
popMatrix();

DirectX 9 - drawing a 2D sprite in its exact dimensions

I'm trying to build a simple 2D game using DirectX9, and I want to be able to use sprite dimensions and coordinates with no scaling applied.
The book that I'm following ("Introduction to 3D Game Programming with DirectX 9.0c" by Frank Luna) shows a trick using Direct3D's sprite functions to render graphics in 2D, but the book code still sets up a camera using D3DXMatrixLookAtLH and D3DXMatrixPerspectiveFovLH, and the sprite images get scaled in perspective. How do I set up the view and projection so that sprites are rendered at their original dimensions and X-Y coordinates address actual pixel locations within the window?
UPDATE
Although this might not be the ideal solution, I did come up with a workaround. I realized that if I set the projection matrix with a 90-degree field of view and the near plane at z = 0, then all I have to do is look at the origin (0, 0, 0) with D3DXMatrixLookAtLH and step back by half of the screen width (the height of an isosceles right triangle is half of the base).
So for my client area being 400 x 400, the following settings worked for me:
// get client rect
RECT R;
GetClientRect(hWnd, &R);
float width = (float)R.right;
float height = (float)R.bottom;
// step back by 400/2=200 and look at the origin
D3DXMATRIX V;
D3DXVECTOR3 pos(0.0f, 0.0f, (-width*0.5f) / (width/height)); // see "UPDATE 2" below
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&V, &pos, &target, &up);
d3dDevice->SetTransform(D3DTS_VIEW, &V);
// PI x 0.5 -> 90 degrees, set the near plane to z=0
D3DXMATRIX P;
D3DXMatrixPerspectiveFovLH(&P, D3DX_PI * 0.5f, width/height, 0.0f, 5000.0f);
d3dDevice->SetTransform(D3DTS_PROJECTION, &P);
Turning off all the texture filters (or setting them to D3DTEXF_POINT) seems to give the most pixel-accurate results.
Another important thing to note: CreateWindowEx() with a requested 400 x 400 size returned a client area of something like 387 x 362, so I had to check with GetClientRect(), calculate the difference, and readjust the window size using SetWindowPos() after initial creation.
The screenshot below shows the result of taking the steps mentioned above. The original bitmap (right) is rendered with no scaling/stretching applied in the app (left)... finally!
UPDATE 2
I didn't test the above method for when the aspect ratio isn't 1:1. I adjusted the code: the amount you step back for your camera position should be window_width * 0.5 / aspect_ratio (where aspect_ratio = width / height).
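For example, a quick sketch of that adjustment, reusing width and height from the snippet above:
// With a 90-degree vertical FOV, the visible half-height at distance d
// equals d, so the camera steps back by half the client height;
// width * 0.5 / aspect is the same value expressed via the width.
float aspect = width / height;
D3DXVECTOR3 pos(0.0f, 0.0f, -(width * 0.5f) / aspect);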
The DirectX Tool Kit SpriteBatch class is designed to do exactly what you describe. When drawing with Direct3D, clip-space coordinates run from (-1, -1) to (1, 1), with (-1, -1) in the lower-left corner.
This sets up the matrix that lets you specify screen coordinates with (0, 0) in the upper-left corner.
// Compute the matrix.
float xScale = (mViewPort.Width > 0) ? 2.0f / mViewPort.Width : 0.0f;
float yScale = (mViewPort.Height > 0) ? 2.0f / mViewPort.Height : 0.0f;

switch (rotation)
{
case DXGI_MODE_ROTATION_ROTATE90:
    return XMMATRIX
    (
        0, -yScale, 0, 0,
        -xScale, 0, 0, 0,
        0, 0, 1, 0,
        1, 1, 0, 1
    );

case DXGI_MODE_ROTATION_ROTATE270:
    return XMMATRIX
    (
        0, yScale, 0, 0,
        xScale, 0, 0, 0,
        0, 0, 1, 0,
        -1, -1, 0, 1
    );

case DXGI_MODE_ROTATION_ROTATE180:
    return XMMATRIX
    (
        -xScale, 0, 0, 0,
        0, yScale, 0, 0,
        0, 0, 1, 0,
        1, -1, 0, 1
    );

default:
    return XMMATRIX
    (
        xScale, 0, 0, 0,
        0, -yScale, 0, 0,
        0, 0, 1, 0,
        -1, 1, 0, 1
    );
}
In Direct3D 9 the pixel centers were defined a little differently than in Direct3D 10/11/12, so the typical solution in the legacy API was to apply a half-pixel (0.5, 0.5) offset to all the positions. You don't need to do this with Direct3D 10/11/12.
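As a hand-worked check (not DirectX Tool Kit code), the default case of the matrix above maps pixel (0, 0) on an assumed 1024 x 768 viewport to clip-space (-1, 1), the upper-left corner:
#include <DirectXMath.h>
using namespace DirectX;

// Default (unrotated) screen-space matrix for a 1024x768 viewport:
// scales pixels into [-1, 1], flips y, and shifts the origin.
float xScale = 2.0f / 1024.0f;
float yScale = 2.0f / 768.0f;
XMMATRIX screenMatrix(
     xScale,  0.0f,    0.0f, 0.0f,
     0.0f,   -yScale,  0.0f, 0.0f,
     0.0f,    0.0f,    1.0f, 0.0f,
    -1.0f,    1.0f,    0.0f, 1.0f);

// Pixel (0, 0) lands at clip-space (-1, 1); pixel (1024, 768) at (1, -1).
XMVECTOR corner = XMVector3Transform(XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f), screenMatrix);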

OpenSceneGraph osg::Quat: shape not rotating

I have a small function to create a new instance of a WorldObject.
I want to use osg::ref_ptr<osg::PositionAttitudeTransform> for translation and rotation, but there is a problem I can't figure out.
setPosition() with a Vec3 works very well, but the Quat set via makeRotate() just does nothing.
Here is the code:
osg::ref_ptr<osg::PositionAttitudeTransform> getWorldObjectClone(const char* name,
    osg::Vec3 position = osg::Vec3(0, 0, 0), osg::Vec3 rotation = osg::Vec3(0, 0, 0))
{
    osg::ref_ptr<osg::PositionAttitudeTransform> tmp = new osg::PositionAttitudeTransform;
    osg::Quat q(0, osg::Vec3(0, 0, 0));

    tmp = dynamic_cast<osg::PositionAttitudeTransform*>(WorldObjects[name]->clone(osg::CopyOp::DEEP_COPY_ALL));
    tmp->setPosition(position);

    q.makeRotate(rotation.x(), 1, 0, 0);
    q.makeRotate(rotation.y(), 0, 1, 0);
    q.makeRotate(rotation.z(), 0, 0, 1);
    tmp->setAttitude(q);

    return tmp;
}
I tried rotation = {90, 0, 0} (degrees) and rotation = {1, 0, 0} (radians), but both have no effect. Is there a mistake in how the code is using the Quat?
The rotation method you are using works with radians.
If you want to rotate 90 degrees around the X axis, you need to call:
q.makeRotate(osg::PI_2, 1, 0, 0 );
// or the equivalent
q.makeRotate(osg::PI_2, osg::X_AXIS);
Keep in mind that every call to makeRotate will reset the full quaternion to the given rotation. If you're trying to concatenate several rotations, you have to multiply the corresponding quaternions.
For instance:
osg::Quat xRot, yRot;
// rotate 90 degrees around x
xRot.makeRotate(osg::PI_2, osg::X_AXIS);
// rotate 90 degrees around y
yRot.makeRotate(osg::PI_2, osg::Y_AXIS);
// concatenate the 2 into a resulting quat
osg::Quat fullRot = xRot * yRot;
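Applied to the question's getWorldObjectClone, a minimal sketch (assuming rotation holds degrees per axis) might look like this:
// Build one quaternion per axis, then concatenate them into a single
// attitude. osg::DegreesToRadians comes from <osg/Math>, and
// osg::X_AXIS/Y_AXIS/Z_AXIS are the unit axis vectors.
osg::Quat xRot, yRot, zRot;
xRot.makeRotate(osg::DegreesToRadians(rotation.x()), osg::X_AXIS);
yRot.makeRotate(osg::DegreesToRadians(rotation.y()), osg::Y_AXIS);
zRot.makeRotate(osg::DegreesToRadians(rotation.z()), osg::Z_AXIS);
tmp->setAttitude(xRot * yRot * zRot);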

Why is this basic "rotate around the origin" failing to work?

I've done this a hundred times, but this is my first time with a manually constructed cube made of "sticks", which are 3D lines. It's built around the origin, extending 5 units from the origin in each of the X, Y, and Z directions.
When I rotate it, I'm still "inside it" and it rotates around me (the camera). I'm applying a translation and a rotation, so I'm stymied as to what I'm doing wrong.
Here's the basic code to rotate the box, by which I mean generate its world matrix:
float rotateX = 0.0f, rotateY = 0.0f, rotateZ = 0.0f;
XMFLOAT4 positionBox = XMFLOAT4(0, 0, -50, 1); // Camera at origin looking at this
XMMATRIX matrixCubeWorld;

void CALLBACK OnFrameMove(double fTime, float fElapsedTime, void* pUserContext)
{
    auto pCamera = g_GameServices.GetService<CWorldCamera>();
    XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
    XMMATRIX rotation = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
    matrixCubeWorld = rotation * translation;

    if (GetKeyState('X') < 0)
        rotateX = RotateAround(rotateX, fElapsedTime);
    if (GetKeyState('Y') < 0)
        rotateY = RotateAround(rotateY, fElapsedTime);
}
And when I set up to draw, I use that matrix:
D3D11_MAPPED_SUBRESOURCE MappedResource;
V(pd3dImmediateContext->Map(_pVertexShaderVariables, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource));
auto pCB = reinterpret_cast<VSCB3DLineChangesEveryFrame *>(MappedResource.pData);
pCB->_gWorldViewProj = matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix();
pd3dImmediateContext->Unmap(_pVertexShaderVariables, 0);
return hr;
...and the shader is as simple as can be:
VertexShaderOutput Line3DVertexShaderFunction(float3 position : POSITION, float4 color : COLOR, float2 tex : TEXCOORD0)
{
    VertexShaderOutput output;
    output.position = mul(float4(position, 1), _gWorldViewProj);
    output.color = color;
    output.tex = tex;
    return output;
}
So do I have a bug or a misunderstanding? I've tried with the inverse of the translation, thinking that would 'bring it back to the origin before rotating' but didn't improve it.
The transformations look good, IMHO.
Maybe it's due to the fact that XMMatrixTranslationFromVector takes only a 3D vector, as the documentation (MSDN) says.
Also make sure that the RotateAround function and the camera's view/projection matrices give correct results.
Best regards.
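For reference, a minimal standalone sketch of the intended composition under DirectXMath's row-vector convention (the helper name is hypothetical; the box position comes from the question, and the final transpose is only needed if the HLSL cbuffer keeps its default column_major layout, as discussed in the first question above):
#include <DirectXMath.h>
using namespace DirectX;

// Rotation must be applied before translation so the cube spins about
// its own center rather than orbiting the camera at the origin.
XMMATRIX BuildCubeWorldViewProj(float rotateX, float rotateY, float rotateZ,
                                const XMMATRIX& view, const XMMATRIX& proj)
{
    XMMATRIX rotation    = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
    XMMATRIX translation = XMMatrixTranslation(0.0f, 0.0f, -50.0f); // box position
    XMMATRIX world       = rotation * translation; // row-vector order: R first, then T

    // Transpose before uploading if the cbuffer is column_major.
    return XMMatrixTranspose(world * view * proj);
}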

How to position a textured quad in screen coordinates?

I am experimenting with different matrices, studying their effect on a textured quad. So far I have implemented scaling, rotation, and translation matrices fairly easily by applying the following method to my position vectors:
for (int a = 0; a < noOfVertices; a++)
{
    myVectorPositions[a] = SlimDX.Vector3.TransformCoordinate(myVectorPositions[a], myPerspectiveMatrix);
}
However, what I want to do is position my vectors using world-space coordinates, not object-space ones.
At the moment my position vectors are declared thusly:
myVectorPositions[0] = new Vector3(-0.1f, 0.1f, 0.5f);
myVectorPositions[1] = new Vector3(0.1f, 0.1f, 0.5f);
myVectorPositions[2] = new Vector3(-0.1f, -0.1f, 0.5f);
myVectorPositions[3] = new Vector3(0.1f, -0.1f, 0.5f);
On the other hand (and as part of learning about matrices), I have read that I need to apply a matrix to get to screen coordinates. I've been looking through the SlimDX API docs and can't seem to pin down which one I should be using.
In any case, hopefully the above makes sense and what I am trying to achieve is clear. I'm aiming for a simple 1024 x 768 window as my application area, and I want to position my textured quad at (10, 10). How do I go about this? I'm most confused right now.
I am not familiar with SlimDX, but in native DirectX, if you want to draw a quad in screen coordinates, you should define the vertex format as pre-transformed; that is, you specify the screen coordinates directly instead of letting the D3D transform engine transform your vertices. The vertex format is defined as follows:
#define SCREEN_SPACE_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
and you can define your vertices like this:
ScreenVertex Vertices[] =
{
    // Triangle 1
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
    { 350.0f, 150.0f, 0, 1.0f, 0xff00ff00, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },

    // Triangle 2
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
    { 150.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
};
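As a usage sketch (assuming a ScreenVertex struct matching the FVF layout and a valid device pointer pDevice, which are not in the original answer), those pre-transformed vertices can be drawn directly:
// Position is already in screen space (x, y, z, rhw), so Direct3D
// skips the transform pipeline for these vertices.
struct ScreenVertex { float x, y, z, rhw; DWORD color; };

pDevice->SetFVF(SCREEN_SPACE_FVF);
pDevice->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 2, Vertices, sizeof(ScreenVertex));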
By default, screen space in 3D systems runs from -1 to 1 (where (-1, -1) is the bottom-left corner and (1, 1) the top-right).
To address pixels, you need to convert pixel values into this space. For example, pixel (10, 30) on a 1024 x 768 screen is:
position.x = 10.0f * (1.0f / 1024.0f); // maps to 0/1
position.x *= 2.0f; //maps to 0/2
position.x -= 1.0f; // Maps to -1/1
Now for y you do
position.y = 30.0f * (1.0f / 768.0f); // maps to 0/1
position.y = 1.0f - position.y; //Inverts y
position.y *= 2.0f; //maps to 0/2
position.y -= 1.0f; // Maps to -1/1
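A small helper that folds both conversions together (a sketch; the function and parameter names are hypothetical):
// Maps a pixel coordinate (px, py) on a windowWidth x windowHeight
// surface into Direct3D's [-1, 1] range, inverting y so that pixel
// (0, 0) maps to the top-left corner (-1, 1).
void PixelToClipSpace(float px, float py, float windowWidth, float windowHeight,
                      float& outX, float& outY)
{
    outX = px / windowWidth * 2.0f - 1.0f;
    outY = (1.0f - py / windowHeight) * 2.0f - 1.0f;
}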
Also, if you want to apply transforms to your quads, it is better to send the transformation to the shader (and do the vector transformation in the vertex shader) rather than doing the multiplications on the vertices, since you will not need to update your vertex buffer every time.
