How to position a textured quad in screen coordinates? (SlimDX)

I am experimenting with different matrices and studying their effect on a textured quad. So far I have implemented scaling, rotation, and translation matrices fairly easily, by applying the following method to my position vectors:
for (int a = 0; a < noOfVertices; a++)
{
    myVectorPositions[a] = SlimDX.Vector3.TransformCoordinate(myVectorPositions[a], myPerspectiveMatrix);
}
However, what I want to do is be able to position my vectors using world-space coordinates, not object-space coordinates.
At the moment my position vectors are declared as follows:
myVectorPositions[0] = new Vector3(-0.1f, 0.1f, 0.5f);
myVectorPositions[1] = new Vector3(0.1f, 0.1f, 0.5f);
myVectorPositions[2] = new Vector3(-0.1f, -0.1f, 0.5f);
myVectorPositions[3] = new Vector3(0.1f, -0.1f, 0.5f);
On the other hand (and as part of learning about matrices) I have read that I need to apply a matrix to get to screen coordinates. I've been looking through the SlimDX API docs and can't seem to pin down the one I should be using.
In any case, hopefully the above makes sense and what I am trying to achieve is clear. I'm aiming for a simple 1024 x 768 window as my application area, and I want to position my textured quad at (10, 10). How do I go about this? Most confused right now.

I am not familiar with SlimDX, but in native DirectX, if you want to draw a quad in screen coordinates, you should define the vertex format as pre-transformed (RHW); that is, you specify the screen coordinates directly instead of letting the D3D transform engine transform your vertices. The vertex format definition is below:
#define SCREEN_SPACE_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
and you can define your vertices like this:
ScreenVertex Vertices[] =
{
    // Triangle 1
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
    { 350.0f, 150.0f, 0, 1.0f, 0xff00ff00, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
    // Triangle 2
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
    { 150.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
};
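The ScreenVertex struct itself isn't shown above; a minimal sketch of a layout matching that FVF (the field order has to follow D3DFVF_XYZRHW | D3DFVF_DIFFUSE) would be:

    struct ScreenVertex
    {
        float x, y, z, rhw;  // pre-transformed screen-space position; rhw is the reciprocal homogeneous w
        DWORD color;         // diffuse color as ARGB, e.g. 0xffff0000
    };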

By default, screen space (normalized device coordinates) in 3D systems runs from -1 to 1, where (-1, -1) is the bottom-left corner and (1, 1) the top-right.
To position something using pixel values, you need to map those pixel values into this space. So, for example, pixel (10, 30) on a 1024 x 768 screen is:
position.x = 10.0f * (1.0f / 1024.0f); // maps to 0..1
position.x *= 2.0f;                    // maps to 0..2
position.x -= 1.0f;                    // maps to -1..1
Now for y you do:
position.y = 30.0f * (1.0f / 768.0f);  // maps to 0..1
position.y = 1.0f - position.y;        // inverts y
position.y *= 2.0f;                    // maps to 0..2
position.y -= 1.0f;                    // maps to -1..1
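Wrapped up as a helper (a minimal C++ sketch; PixelToNdc and the parameter names are just illustrative), the same conversion looks like this:

    // Convert a pixel coordinate (origin at the top-left) to normalized device
    // coordinates in the -1..1 range expected by the pipeline.
    struct Ndc { float x, y; };

    Ndc PixelToNdc(float pixelX, float pixelY, float screenWidth, float screenHeight)
    {
        Ndc out;
        out.x = (pixelX / screenWidth) * 2.0f - 1.0f;          // 0..width  -> -1..1
        out.y = (1.0f - pixelY / screenHeight) * 2.0f - 1.0f;  // 0..height -> -1..1, y flipped
        return out;
    }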
Also, if you want to apply transforms to your quads, it is better to send the transformation to the shader (and do the vector transformation in the vertex shader) rather than doing the multiplications on the vertices yourself, since then you will not need to update your vertex buffer every time.

Related

glm::ortho in Vulkan

I have a simple triangle with these vertices:
{ { 0.0f, -0.1f } },
{ { 0.1f, 0.1f } },
{ { -0.1f, 0.1f } }
Matrix:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float)swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
This code works fine and I see the triangle. But if I try to use an orthographic projection:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::ortho(0.0F, swapChainExtent.width, swapChainExtent.height, 0.0F);
ubo.proj[1][1] *= -1;
I do not see anything. :(
I tried to google this problem and found no solution. What's my mistake?
Update:
Solved:
rasterizer.frontFace = VK_FRONT_FACE_CLOCKWISE;
...
ubo.proj = glm::ortho(0.0F, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.0F, 0.1F, 1000.0F);
First, I don't know what Z range this overload of glm::ortho produces; maybe your vertices don't fit in that range. There is an overload of this function which lets you provide a Z/depth range explicitly. Try providing a range that covers your Z/depth values, try moving the vertices further from or closer to the camera, or just provide a generous range like -1000 to +1000.
And there is another problem: how big is your swapchain? If it is, for example, 800 x 600 pixels, then you are specifying a rendering area from [0, 0] to [800, 600] (in pixels), but you provide vertices that lie in an area smaller than a single pixel, from [-0.1, -0.1] to [0.1, 0.1] (still in pixels). It's no surprise you don't see anything: your whole triangle is smaller than a single pixel.
These two problems together are probably why you see nothing. When you fix only the depth, you still see nothing because the triangle is too small; when you change only the size of the triangle (without changing depth), the object is view-frustum culled. Change the size of your triangle and then adjust the depth values of the vertices.
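As a minimal sketch (reusing the swapChainExtent from the question and assuming an 800 x 600 swapchain for the example numbers), an orthographic projection with an explicit depth range plus a triangle that is actually pixel-sized could look like this:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Explicit near/far so vertices at z = 0 are not clipped away.
    glm::mat4 proj = glm::ortho(0.0f,
                                static_cast<float>(swapChainExtent.width),   // left..right in pixels
                                static_cast<float>(swapChainExtent.height),  // bottom..top flipped (top-left origin)
                                0.0f,
                                -1000.0f, 1000.0f);

    // A triangle a few hundred pixels across, roughly centred in an 800 x 600 swapchain:
    //   { { 400.0f, 100.0f } },
    //   { { 700.0f, 500.0f } },
    //   { { 100.0f, 500.0f } }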

OpenGL Orthographic Projection and Translate

The code below draws a rectangle in 2D screen space using OpenGL ES2. How do I move the drawing of the rectangle by 1 pixel to the right without modifying its vertices?
Specifically, what I am trying to do is move the coordinates 0.5 pixels to the right. I had to do this previously with GLES1.x and the reason for this is that I had problems drawing lines in the correct place unless I did a glTranslate() with 0.5f.
I'm confused about the use of glm::translate() in the code below.
If I attempt a translate of 0.5f, the whole rectangle moves from the left of the screen to the middle - a jump of about 200 pixels.
I get the same result whether I do a glm::translate on the Model or the View matrix.
Is the order of the matrix multiplication wrong and what should it be?
short g_RectFromTriIndices[] =
{
0, 1, 2,
0, 2, 3
}; // The order of vertex rendering.
GLfloat g_AspectRatio = 1.0f;
//--------------------------------------------------------------------------------------------
// LoadTwoTriangleVerticesForRect()
//--------------------------------------------------------------------------------------------
void LoadTwoTriangleVerticesForRect( GLfloat *pfRectVerts, float fLeft, float fTop, float fWidth, float fHeight )
{
pfRectVerts[ 0 ] = fLeft;
pfRectVerts[ 1 ] = fTop;
pfRectVerts[ 2 ] = 0.0;
pfRectVerts[ 3 ] = fLeft + fWidth;
pfRectVerts[ 4 ] = fTop;
pfRectVerts[ 5 ] = 0.0;
pfRectVerts[ 6 ] = fLeft + fWidth;
pfRectVerts[ 7 ] = fTop + fHeight;
pfRectVerts[ 8 ] = 0.0;
pfRectVerts[ 9 ] = fLeft;
pfRectVerts[ 10 ] = fTop + fHeight;
pfRectVerts[ 11 ] = 0.0;
}
//--------------------------------------------------------------------------------------------
// Draw()
//--------------------------------------------------------------------------------------------
void Draw( void )
{
GLfloat afRectVerts[ 12 ];
//LoadTwoTriangleVerticesForRect( afRectVerts, 0, 0, g_ScreenWidth, g_ScreenHeight );
LoadTwoTriangleVerticesForRect( afRectVerts, 50, 50, 100, 100 );
// Correct for aspect ratio so squares ARE squares and not rectangular stretchings..
g_AspectRatio = (GLfloat) g_ScreenWidth / (GLfloat) g_ScreenHeight;
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLuint hPosition = glGetAttribLocation( g_SolidProgram, "vPosition" );
// PROJECTION
glm::mat4 Projection = glm::mat4(1.0);
// Projection = glm::perspective( 45.0f, g_AspectRatio, 0.1f, 100.0f );
// VIEW
glm::mat4 View = glm::mat4(1.0);
static GLfloat transValY = 0.5f;
static GLfloat transValX = 0.5f;
//View = glm::translate( View, glm::vec3( transValX, transValY, 0.0f ) );
// MODEL
glm::mat4 Model = glm::mat4(1.0);
// static GLfloat rot = 0.0f;
// rot += 0.001f;
// Model = glm::rotate( Model, rot, glm::vec3( 0.0f, 0.0f, 1.0f ) ); // where x, y, z is axis of rotation (e.g. 0 1 0)
glm::mat4 Ortho = glm::ortho( 0.0f, (GLfloat) g_ScreenWidth, (GLfloat) g_ScreenHeight, 0.0f, 0.0f, 1000.0f );
glm::mat4 MVP;
MVP = Projection * View * Model * Ortho;
GLuint hMVP;
hMVP = glGetUniformLocation( g_SolidProgram, "MVP" );
glUniformMatrix4fv( hMVP, 1, GL_FALSE, glm::value_ptr( MVP ) );
glEnableVertexAttribArray( hPosition );
// Prepare the triangle coordinate data
glVertexAttribPointer( hPosition, 3, GL_FLOAT, FALSE, 0, afRectVerts );
// Draw the rectangle using triangles
glDrawElements( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, g_RectFromTriIndices );
glDisableVertexAttribArray( hPosition );
}
Here is the vertex shader source:
attribute vec4 vPosition;
uniform mat4 MVP;
void main()
{
gl_Position = MVP * vPosition;
}
UPDATE: I'm finding the below matrix multiplication is giving me better results. I don't know if this is "correct" or not though:
MVP = Ortho * Model * View * Projection;
That MVP seems really weird to me; you shouldn't need four matrices in there to get your MVP. Your projection matrix should just be the orthographic one, so in this case:
MVP = Ortho * View * Model;
But I can also see that your perspective Projection matrix has been commented out, so it isn't doing much right now anyway.
By the sounds of it, since you want the model coordinates to stay the same while moving, you really want to move your camera. Your vertices appear to use a 1-unit-per-pixel coordinate range, so translating your View by 0.5f shifts things by half of whatever your projection space is, not by half a pixel. Instead, you want something like a camera class from which you build your View using the camera's X and Y positions.
Then you can get your View matrix from the camera's position, which can share the world unit system you're already using (1 unit per pixel).
glm::mat4 view;
view = glm::lookAt(glm::vec3(camX, camY, 0.0), glm::vec3(0.0, 0.0, 0.0),glm::vec3(0.0, 1.0, 0.0));
I ripped that line straight (minus changing camZ for camY) from a really good 3D camera tutorial, but the exact same concept can be applied to an orthographic camera instead.
I know it's a bit more overhead, but having a camera class that you can control this way is nicer practice than manually using glm::translate, rotate, and scale to control your viewport (and it lets you ensure that you're working with a more obvious coordinate system shared between your camera and your models' coordinates).
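As a minimal sketch (Camera2D and BuildMVP are illustrative names, not existing code), something like this keeps the camera in the same pixel units as your vertices and uses the usual projection * view * model order:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // A 2D camera whose position is expressed in pixels, matching the vertex units.
    struct Camera2D
    {
        float x = 0.0f, y = 0.0f;

        glm::mat4 View() const
        {
            // Moving the camera right by +x shifts the world left by -x.
            return glm::translate(glm::mat4(1.0f), glm::vec3(-x, -y, 0.0f));
        }
    };

    glm::mat4 BuildMVP(const Camera2D &cam, const glm::mat4 &model, float screenWidth, float screenHeight)
    {
        glm::mat4 projection = glm::ortho(0.0f, screenWidth, screenHeight, 0.0f, 0.0f, 1000.0f);
        return projection * cam.View() * model; // the usual P * V * M order
    }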

Why is this basic "rotate around the origin" failing to work?

I've done this a hundred times, but this is my first time with a manually constructed cube made of "sticks", which are 3D lines. It's constructed around the origin, out 5 from the origin in each of the X, Y, and Z directions.
When I rotate it, I'm still "inside it" and it rotates around me (the camera). I'm applying a translation and rotation, so I'm stymied as to what I'm doing wrong.
Here's the basic code to rotate the box, by which I mean generate its world matrix:
float rotateX = 0.0f, rotateY = 0.0f, rotateZ = 0.0f;
XMFLOAT4 positionBox = XMFLOAT4(0, 0, -50, 1); // Camera at origin looking at this
XMMATRIX matrixCubeWorld;
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
    auto pCamera = g_GameServices.GetService<CWorldCamera>();
    XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
    XMMATRIX rotation = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
    matrixCubeWorld = rotation * translation;
    if (GetKeyState('X') < 0)
        rotateX = RotateAround(rotateX, fElapsedTime);
    if (GetKeyState('Y') < 0)
        rotateY = RotateAround(rotateY, fElapsedTime);
}
And when I set up to draw, I use that matrix:
D3D11_MAPPED_SUBRESOURCE MappedResource;
V(pd3dImmediateContext->Map(_pVertexShaderVariables, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource));
auto pCB = reinterpret_cast<VSCB3DLineChangesEveryFrame *>(MappedResource.pData);
pCB->_gWorldViewProj = matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix();
pd3dImmediateContext->Unmap(_pVertexShaderVariables, 0);
return hr;
...and the shader is as simple as can be:
VertexShaderOutput Line3DVertexShaderFunction(float3 position : POSITION, float4 color : COLOR, float2 tex : TEXCOORD0)
{
VertexShaderOutput output;
output.position = mul(float4(position, 1), _gWorldViewProj);
output.color = color;
output.tex = tex;
return output;
}
So do I have a bug or a misunderstanding? I've tried using the inverse of the translation, thinking that would 'bring it back to the origin before rotating', but it didn't improve anything.
The transformations look good, IMHO.
Maybe it's due to the fact that XMMatrixTranslationFromVector only uses the x, y, and z components of the vector, as the documentation (MSDN) says.
Also make sure that the RotateAround function and the camera's view/projection matrices give correct results.
Best regards.
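For reference, here is a minimal sketch of how the multiplication order works in DirectXMath's row-vector convention, using the variables from the question (rotation * translation is indeed the "spin in place" order):

    // Row-vector convention: the left-most matrix is applied to the vertex first.
    XMMATRIX rotation    = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
    XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
    XMMATRIX spinInPlace = rotation * translation; // rotate the cube about its own origin, then move it to positionBox
    XMMATRIX orbit       = translation * rotation; // move first, then rotate: the cube orbits the world origin instead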

Convert Euler Angles from World Axes to Local

I am trying to create a VR application using the iPhone's motion manager object. By VR I mean an app that shows places overlaid on the camera view.
I can successfully visualize iPhone orientation using yaw, pitch and roll with Z -> X -> Y rotational order.
Here is a picture of what I have done so far:
So I can rotate the device, and it rotates correctly in the Windows app that I created for monitoring.
This is correct and I will show the code for it later. But this is not what I want to do.
What I want to do is actually the opposite of this. I don't want the device to move; I want the world around it to rotate. So if the user points the device's camera to the east, he should see the "east sign", and if he changes direction to the north, he should see the "north sign" on the screen. But accurately, not like other applications out there that simply drop the roll movement.
The problem is that when I move the device from lying on the table to portrait mode, rotating right and left results in incorrect rotation. And if I put it in landscape mode, then top-to-bottom rotation is incorrect. In other words, the rotations are based on world axes, and they only work correctly when the device is lying on the ground, because that is the reference frame, I think.
What I want to ask here is how to convert these angles so that I see the result I expect. There should be a way based on trigonometry.
These are the functions I use to calculate the rotation matrix:
private static Matrix4 CreateRotationMatrix(char axis, float radians, bool rightHanded = true)
{
    float c = (float)Math.Cos(radians);
    float s = (float)Math.Sin(radians) * (rightHanded ? 1 : -1);
    switch (axis)
    {
        case 'X':
            return new Matrix4(
                new Vector4(1.0f, 0.0f, 0.0f, 0.0f),
                new Vector4(0.0f, c, -s, 0.0f),
                new Vector4(0.0f, s, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Y':
            return new Matrix4(
                new Vector4(c, 0.0f, s, 0.0f),
                new Vector4(0.0f, 1.0f, 0.0f, 0.0f),
                new Vector4(-s, 0.0f, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Z':
            return new Matrix4(
                new Vector4(c, -s, 0.0f, 0.0f),
                new Vector4(s, c, 0.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 1.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        default:
            return Matrix4.Identity;
    }
}
public static Matrix4 MatrixFromEulerAngles(
    Vector3 euler,
    string order,
    bool isRightHanded = true,
    bool isIntrinsic = true)
{
    if (order.Length != 3) throw new ArgumentOutOfRangeException("order", "String must have exactly 3 characters");
    // X = Pitch
    // Y = Yaw
    // Z = Roll
    return isIntrinsic
        ? CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
        : CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded);
}
private static float GetEulerAngle(char angle, Vector3 euler)
{
    switch (angle)
    {
        case 'X':
            return euler.X;
        case 'Y':
            return euler.Y;
        case 'Z':
            return euler.Z;
        default:
            return 0f;
    }
}
And this is how I apply the matrix to OpenGL:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", true, true);
GL.LoadMatrix(ref projectionMatrix);
Ok, inverting was part of the answer. It helped a lot. Thanks #dari and #Spektre for the suggestion.
But the complete answer was: change the rotation direction (right-handed -> left-handed), change the rotation order (YXZ -> ZXY, in other words, intrinsic to extrinsic), and invert the resulting matrix.
Before asking, I had tried each of these three alone, but never thought of using them all together.
So:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", false, false);
projectionMatrix.Invert();
GL.LoadMatrix(ref projectionMatrix);

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0) and the box is rendered correctly. See this image below:
When the position of the box (not the camera) changes, it is rendered correctly. For example, the next image shows the box at (1, 0, 0) and the camera still at (0, 0, -2):
However, as soon as the camera moves left or right, the box should move in the opposite direction, but it looks skewed instead. Here is an example where the camera is at (1, 0, -2) and looking at (1, 0, 100), with the box still at (0, 0, 0):
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix; // A matrix to store the rotation information
D3DXMATRIX scalingMatrix; // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Make the scene being centered on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;
// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());
// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));
D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
&cameraPosition,
&lookAtPosition,
&upDirection);
RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);
// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
(FLOAT)(D3DXToRadian(45)), // Horizontal field of view
width / height, // Aspect ratio
1.0f, // Near view-plane
100.0f); // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
VOut output;
output.color = color;
output.texcoord = texcoord;
output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are setting finalMatrix with a row-major matrix, but HLSL expects a column-major matrix by default. The solution is to use D3DXMatrixTranspose before updating the constant buffer, or to declare the matrix as row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
row_major float4x4 finalMatrix;
}
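For the transpose route, a minimal sketch against the code above (keeping the question's variable names) would be:

    // Transpose on the CPU so HLSL's column-major default sees the intended layout.
    D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
    D3DXMATRIX transposed;
    D3DXMatrixTranspose(&transposed, &finalMatrix);
    mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &transposed, 0, 0);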
