Hoping someone can help me. I have a go kart on a terrain, from which I get the positions of the centre of the go kart, the left front wheel and the right front wheel. What I need is a matrix to rotate the kart so it sits flat on the terrain.
All are Vector3s; forward is the kart's direction vector.
So far I have this, but it's not working as expected:
D3DXVECTOR3 cross, cross1;
cross = kartposition - Lwheelposition;
cross1 = kartposition - Rwheelposition;
D3DXVec3Cross(&up, &cross, &cross1);
D3DXVec3Normalize(&up, &up);
D3DXVec3Cross(&right, &forward,&up);
D3DXVec3Normalize(&right, &right);
D3DXVec3Cross(&forward, &up,&right);
D3DXVec3Normalize(&forward, &forward);
D3DXMatrixLookAtLH(&localtransform[0], &D3DXVECTOR3(0, 0, 0) ,&(forward), &up);
Any advice would be great.
Solved, I believe. I have changed two lines to
cross = Lwheelposition - kartposition;
cross1 = Rwheelposition - kartposition;
and added to the end
D3DXMatrixInverse(&localtransform[0], NULL, &localtransform[0]);
to invert the action of the look-at function.
I have since revised and solved this by referring to (DirectX) Generating a rotation matrix to match a vector. The final code is as follows:
D3DXVECTOR3 cross, cross1;
// Vectors from the kart centre out to each front wheel
cross = Lwheelposition - kartposition;
cross1 = Rwheelposition - kartposition;
// Their cross product is the terrain normal, i.e. the kart's up axis
D3DXVec3Cross(&up, &cross, &cross1);
D3DXVec3Normalize(&up, &up);
// Rebuild the right and forward axes so the basis stays orthonormal
D3DXVec3Cross(&right, &up, &forward);
D3DXVec3Normalize(&right, &right);
D3DXVec3Cross(&kartforward, &right, &up);
// Basis vectors as rows, plus the translation, form the local transform
localtransform[0] = D3DXMATRIX(right.x, right.y, right.z, 0.0f,
                               up.x, up.y, up.z, 0.0f,
                               kartforward.x, kartforward.y, kartforward.z, 0.0f,
                               kartposition.x, kartposition.y, kartposition.z, 1.0f);
Hope this is helpful.
I have started to use the Windows Composition API in UWP applications to animate elements of the UI.
Visual elements expose RotationAngleInDegrees and RotationAngle properties as well as a RotationAxis property.
When I animate a rectangular object's RotationAngleInDegrees value around the Y axis, the rectangle rotates as I would expect, but in a 2D application window it does not appear to display with a 2.5D projection.
Is there a way to get the 2.5D projection effect on rotations with the Composition API?
It depends on the effect you want. There is a Fluent Design app sample on GitHub, and here is the link. You will be able to download the demo from the Store, and you can get some ideas from the depth samples. For example, "flip to reveal" shows a way to rotate an image card, and you can find the source code here. For more details, please check the sample and the demo.
In general, the animation rotates around the X axis:
rectanglevisual.RotationAxis = new System.Numerics.Vector3(1f, 0f, 0f);
Then use a rotation animation driven by RotationAngleInDegrees.
It is also possible to do this directly in XAML by using a PlaneProjection on an Image control.
As the sample that #BarryWang pointed me to demonstrates, it is necessary to apply a TransformMatrix to the page (or a parent container) before running the animation to get the 2.5D effect with rotation or other spatial-transform animations in the Composition API.
private void UpdatePerspective()
{
    Visual visual = ElementCompositionPreview.GetElementVisual(MainPanel);

    // Get the size of the area we are enabling perspective for
    Vector2 sizeList = new Vector2((float)MainPanel.ActualWidth, (float)MainPanel.ActualHeight);

    // Set up the perspective transform.
    Matrix4x4 perspective = new Matrix4x4(
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, -1.0f / sizeList.X,
        0.0f, 0.0f, 0.0f, 1.0f);

    // Set the parent transform to apply perspective to all children
    visual.TransformMatrix =
        Matrix4x4.CreateTranslation(-sizeList.X / 2, -sizeList.Y / 2, 0f) * // Translate to origin
        perspective *                                                       // Apply perspective at origin
        Matrix4x4.CreateTranslation(sizeList.X / 2, sizeList.Y / 2, 0f);    // Translate back to original position
}
I have a simple triangle with these vertices:
{ { 0.0f, -0.1f } },
{ { 0.1f, 0.1f } },
{ { -0.1f, 0.1f } }
Matrix:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float)swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
This code works fine and I see the triangle. But if I try an orthographic projection:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::ortho(0.0F, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.0F);
ubo.proj[1][1] *= -1;
I do not see anything. :(
I tried googling this problem and found no solution. What's my mistake?
Update:
Solved:
rasterizer.frontFace = VK_FRONT_FACE_CLOCKWISE;
...
ubo.proj = glm::ortho(0.0F, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.1F, 1000.0F);
First, I don't know what Z range this overload of glm::ortho produces; maybe your vertices don't fit in that range. There is an overload of this function which lets you provide a Z/depth range explicitly. Try providing a range that covers your Z/depth values, try moving the vertices further from or closer to the camera, or provide a generous range like -1000 to +1000.
And another problem: how big is your swapchain? If it is, for example, 800 x 600 pixels, then you are specifying a rendering area in the [0, 0] to [800, 600] range (in pixels). But you provide vertices that lie in an area smaller than a single pixel, in the [-0.1, -0.1] to [0.1, 0.1] range (still in pixels). It's no surprise you don't see anything, because your whole triangle is smaller than a single pixel.
These two problems together probably explain why you see nothing: when you fix the depth, the triangle is still too small to see, and when you enlarge the triangle without fixing the depth, it gets view-frustum culled. Change the size of your triangle and then try changing the depth values of the vertices.
How can I update the space in Chipmunk? My code is:
// left
shape1 = cpSegmentShapeNew(edge, cpvzero, cpv(0.0f, size.height), 0.0f);
shape1->u = 0.1f; // minimal friction on the ground
shape1->e = 0.7f;
cpSpaceAddStaticShape(_space, shape1); // a body can be represented by multiple shapes
// top
shape2 = cpSegmentShapeNew(edge, cpvzero, cpv(size.width, 0.0f), 0.0f);
shape2->u = 0.1f;
shape2->e = 0.7f;
cpSpaceAddStaticShape(_space, shape2);
// right
shape3 = cpSegmentShapeNew(edge, cpv(size.width, 0.0f), cpv(size.width, size.height), 0.0f);
shape3->u = 0.1f;
shape3->e = 0.7f;
cpSpaceAddStaticShape(_space, shape3);
// bottom
shape4 = cpSegmentShapeNew(edge, cpv(0.0f, size.height), cpv(size.width, size.height), 0.0f);
shape4->u = 0.1f;
shape4->e = 0.7f;
cpSpaceAddStaticShape(_space, shape4);
When the ball touches the bottom shape, the ball bounces up; what I want is for the bottom shape to be removed and a green line displayed instead, but I don't know how to remove a shape from the space. Any suggestions are welcome.
So three things.
1) The cpSpace[Add|Remove]StaticShape() functions are deprecated, and you should be using the cpSpace[Add|Remove]Shape() functions instead.
2) As the last answer stated, cpSpaceAddShape() will add a shape to a space. If you want to remove it you call cpSpaceRemoveShape(). There's really nothing more to it than that.
3) Chipmunk doesn't do any graphics, so if you want to draw a green line you need to use whatever functionality provided by your graphics or rendering library.
cpSpaceRemoveStaticShape(_space, shape4);
I am trying to draw a rotating cube and add a spot light at a fixed position in front of it. But because I set the wrong z value for the light position, the light didn't show up. After I tried different light positions, the cube finally displays as I wished, but I still don't know why this value works.
Here is the wrong code, although I think the value is reasonable:
Matrix.setIdentityM(mLightModelMatrix, 0);
Matrix.translateM(mLightModelMatrix, 0, 0.0f, 0.0f, 0.5f);
Matrix.multiplyMV(mLightPosInWorldSpace, 0, mLightModelMatrix, 0, mLightPosInModelSpace, 0);
Matrix.multiplyMV(mLightPosInEyeSpace, 0, mViewMatrix, 0, mLightPosInWorldSpace, 0);
GLES20.glUniform3f(muLightPosHandler,
mLightPosInEyeSpace[0],
mLightPosInEyeSpace[1],
mLightPosInEyeSpace[2]);
Here is the right code, but I don't know why it works:
Matrix.setIdentityM(mLightModelMatrix, 0);
Matrix.translateM(mLightModelMatrix, 0, 0.0f, 0.0f, 2.8f);
Matrix.multiplyMV(mLightPosInWorldSpace, 0, mLightModelMatrix, 0, mLightPosInModelSpace, 0);
Matrix.multiplyMV(mLightPosInEyeSpace, 0, mViewMatrix, 0, mLightPosInWorldSpace, 0);
GLES20.glUniform3f(muLightPosHandler,
mLightPosInEyeSpace[0],
mLightPosInEyeSpace[1],
mLightPosInEyeSpace[2]);
The only difference between these two snippets is the z coordinate of the light position. You can get all the source code from here.
The reason I think (0.0f, 0.0f, 0.5f) is a reasonable position is that the centre point of the cube's front face is (0.0f, 0.0f, 0.5f) before transformation, so it should give the cube the strongest light.
It's because you have a spot light, which is defined by a light cone. At any distance closer than 2.8, the cone isn't wide enough to light the entire cube, as demonstrated by this picture:
To get the whole face of the cube to be lit you either need to move the light further away (as you have found) or widen the light cone.
With only two triangles per face you won't get much lighting in the first case, as none of the vertices are lit, and it's the light at the vertices that determines the light across the face. If you subdivide the model into more faces, you'll see the effect.
I am trying to convert world space coordinates to screen space coordinates for a 2D game so I can work on collisions with the edges of the window.
The code I am using to convert world space coordinates to screen space is below
float centerX = boundingBox.m_center.x;
XMVECTORF32 centerVector = { centerX, 0.0f, 0.0f, 0.0f };
Windows::Foundation::Size screenSize = m_deviceResources->GetLogicalSize();
//the matrices are passed in transposed, so reverse it
DirectX::XMMATRIX projection = XMMatrixTranspose(XMLoadFloat4x4(&m_projectionMatrix));
DirectX::XMMATRIX view = XMMatrixTranspose(XMLoadFloat4x4(&m_viewMatrix));
worldMatrix = XMMatrixTranspose(worldMatrix);
XMVECTOR centerProjection = XMVector3Project(centerVector, 0.0f, 0.0f, screenSize.Width, screenSize.Height, 0.0f, 1.0f, projection, view, worldMatrix);
float centerScreenSpace = XMVectorGetX(centerProjection);
std::stringstream ss;
ss << "CENTER SCREEN SPACE X: " << centerScreenSpace << std::endl;
OutputDebugStringA(ss.str().c_str());
I have a window of width 1262. The object is positioned at (0.0, 0.0, 0.0), and the current screen-space X coordinate for this position is 631, which is correct. However, the problem occurs when I move the object towards the edges of the screen.
When I move the object to the left, the screen-space X coordinate becomes 0.107788 when realistically it should be well above 0, as the centre point is still on the screen and nowhere near the edge. The same happens when moving the object to the right.
The screen size in pixels is correct, but something behaves as if the screen has a premature cut-off, like in the image below, where the red dotted lines act as the edges of the window when they're not. I can fix this by adding an extra offset, but I don't believe that is the correct fix and would like to know where I'm going wrong.
Does anyone know why the coordinates are incorrect?
Edit
The world matrix is calculated for each object (the black rectangle) in the update step using:
XMMATRIX world = XMMatrixIdentity();
XMMATRIX scaleMatrix = XMMatrixScaling(m_scale.x, m_scale.y, m_scale.z);
XMMATRIX rotationMatrix = XMMatrixRotationY(0.0f);
XMMATRIX translationMatrix = XMMatrixTranslation(m_position.x, m_position.y, m_position.z);
world = XMMatrixTranspose(scaleMatrix * rotationMatrix * translationMatrix);
//update the object's world with new data
m_sprite->SetWorld(world);
I have a feeling this could be something to do with the projection matrix, as I'm currently using a perspective projection for 2D rendering. I know I should really be using orthographic, but surely that isn't the problem?