Chipmunk: How to update a shape?

How can I update a Space in Chipmunk? My code is:
// left
shape1 = cpSegmentShapeNew(edge, cpvzero, cpv(0.0f, size.height), 0.0f);
shape1->u = 0.1f; // minimal friction on the ground
shape1->e = 0.7f;
cpSpaceAddStaticShape(_space, shape1); // a body can be represented by multiple shapes
// top
shape2 = cpSegmentShapeNew(edge, cpvzero, cpv(size.width, 0.0f), 0.0f);
shape2->u = 0.1f;
shape2->e = 0.7f;
cpSpaceAddStaticShape(_space, shape2);
// right
shape3 = cpSegmentShapeNew(edge, cpv(size.width, 0.0f), cpv(size.width, size.height), 0.0f);
shape3->u = 0.1f;
shape3->e = 0.7f;
cpSpaceAddStaticShape(_space, shape3);
// bottom
shape4 = cpSegmentShapeNew(edge, cpv(0.0f, size.height), cpv(size.width, size.height), 0.0f);
shape4->u = 0.1f;
shape4->e = 0.7f;
cpSpaceAddStaticShape(_space, shape4);
When the ball touches the bottom shape, the ball should bounce up, the bottom shape should be removed, and a green line should be displayed in its place. That is what I want, but I don't know how to remove a shape from the body/space. Any suggestions are welcome.

So three things.
1) The cpSpace[Add|Remove]StaticShape() functions are deprecated, and you should be using the cpSpace[Add|Remove]Shape() functions instead.
2) As the last answer stated, cpSpaceAddShape() will add a shape to a space. If you want to remove it you call cpSpaceRemoveShape(). There's really nothing more to it than that.
3) Chipmunk doesn't do any graphics, so if you want to draw a green line you need to use whatever functionality is provided by your graphics or rendering library.
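For instance, applied to the bottom segment from the question (a sketch using the non-deprecated calls; freeing the shape afterwards is only needed if you won't re-add it later):

cpSpaceAddShape(_space, shape4);    // add the static segment to the space
// ... later, when the ball hits it:
cpSpaceRemoveShape(_space, shape4); // take it back out of the space
cpShapeFree(shape4);                // free it once it is no longer needed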

cpSpaceRemoveStaticShape(_space, shape4);
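One caveat: Chipmunk does not allow you to add or remove objects from the space inside a collision callback, which is exactly where "the ball touched the bottom" is detected. In that case, defer the removal with a post-step callback. A minimal sketch, assuming shape4 is the bottom segment and the handler fires on ball/bottom contact:

static void postStepRemoveShape(cpSpace *space, void *key, void *data)
{
    cpShape *shape = (cpShape *)key;
    cpSpaceRemoveShape(space, shape);
    cpShapeFree(shape);
}

// inside the ball-vs-bottom collision handler:
cpSpaceAddPostStepCallback(_space, postStepRemoveShape, shape4, NULL);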

Related

glm::ortho in Vulkan

I have a simple triangle with these vertices:
{ { 0.0f, -0.1f } },
{ { 0.1f, 0.1f } },
{ { -0.1f, 0.1f } }
Matrix:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float)swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
This code works fine; I see the triangle. But if I try to use an orthographic projection:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::ortho(0.0f, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.0f);
ubo.proj[1][1] *= -1;
I do not see anything. :(
I tried googling this problem and found no solution. What's my mistake?
Update:
Solved:
rasterizer.frontFace = VK_FRONT_FACE_CLOCKWISE;
...
ubo.proj = glm::ortho(0.0f, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.0f, 0.1f, 1000.0f);
First, I don't know what Z range this overload of glm::ortho produces; maybe your vertices don't fit in that range. There is an overload of this function which lets you provide a Z/depth range explicitly. Try providing a range that covers your Z/depth values, try moving the vertices further from or closer to the camera, or provide a generous range like -1000 to +1000.
And another problem: how big is your swapchain? If it is, for example, 800 x 600 pixels, then you are specifying a rendering area from [0, 0] to [800, 600] (in pixels), but you provide vertices that lie in an area smaller than a single pixel, from [-0.1, -0.1] to [0.1, 0.1] (still in pixels). It's no surprise you don't see anything: your whole triangle is smaller than a single pixel.
These two problems together probably explain why you don't see anything. When you fix the depth, you still see nothing because the triangle is too small; when you change the size of your triangle (without changing depth), it gets view-frustum culled. Change the size of your triangle and then try changing the depth values of the vertices.
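Putting both fixes together, here is a sketch of one consistent setup, assuming you want to work in pixel coordinates (the explicit -1 to 1 depth range and the roughly 200-pixel triangle are arbitrary choices for illustration):

// orthographic projection in pixels, with an explicit Z range covering z = 0
ubo.proj = glm::ortho(0.0f, (float)swapChainExtent.width,
                      (float)swapChainExtent.height, 0.0f,
                      -1.0f, 1.0f);
ubo.proj[1][1] *= -1; // flip Y for Vulkan clip space

// the vertices must now also be specified in pixels, e.g.:
// { { 400.0f, 200.0f } },
// { { 500.0f, 400.0f } },
// { { 300.0f, 400.0f } }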

calculate look at matrix on terrain

Hoping someone can help me. I have a go-kart on a terrain, from which I get the position of the centre of the kart, the left front wheel, and the right front wheel. What I need is a matrix to rotate the kart so it sits flat on the terrain.
All values are D3DXVECTOR3s; forward is the kart's direction vector.
So far I have this, but it's not working as expected:
D3DXVECTOR3 cross, cross1;
cross = kartposition - Lwheelposition;
cross1 = kartposition - Rwheelposition;
D3DXVec3Cross(&up, &cross, &cross1);
D3DXVec3Normalize(&up, &up);
D3DXVec3Cross(&right, &forward,&up);
D3DXVec3Normalize(&right, &right);
D3DXVec3Cross(&forward, &up,&right);
D3DXVec3Normalize(&forward, &forward);
D3DXMatrixLookAtLH(&localtransform[0], &D3DXVECTOR3(0, 0, 0) ,&(forward), &up);
Any advice would be great.
Solved, I believe.
I changed two lines to:
cross = Lwheelposition - kartposition;
cross1 = Rwheelposition - kartposition;
and added this at the end:
D3DXMatrixInverse(&localtransform[0], NULL, &localtransform[0]);
to invert the action of the look-at function (D3DXMatrixLookAtLH builds a world-to-view matrix, so its inverse is the world transform the kart actually needs).
I have since revised and solved this by referring to
(DirectX) Generating a rotation matrix to match a vector
The final code is as follows:
D3DXVECTOR3 cross, cross1;
cross = Lwheelposition - kartposition;    // centre to left front wheel
cross1 = Rwheelposition - kartposition;   // centre to right front wheel
D3DXVec3Cross(&up, &cross, &cross1);      // surface normal under the kart
D3DXVec3Normalize(&up, &up);
D3DXVec3Cross(&right, &up, &forward);     // right axis, perpendicular to up and forward
D3DXVec3Normalize(&right, &right);
D3DXVec3Cross(&kartforward, &right, &up); // re-orthogonalized forward; already unit length
// assemble the world matrix from the orthonormal basis plus the kart position
localtransform[0] = D3DXMATRIX(right.x, right.y, right.z, 0.0f,
                               up.x, up.y, up.z, 0.0f,
                               kartforward.x, kartforward.y, kartforward.z, 0.0f,
                               kartposition.x, kartposition.y, kartposition.z, 1.0f);
Hope this is helpful.

Point cloud rendered only partially

I only get a partial point cloud of the room; other parts of the room do not get rendered at all, and it only sees a region to the left. I am using the Point Cloud prefab in Unity. When I use one of the sample apps, such as Room Scanner or Explorer, I get the rest of the room. I intend to modify the prefab for my application, but so far I get that limited view. I am using Unity 5.3.3 on Windows 10, 64-bit.
Set the Unity camera aligned with the depth camera frame, i.e. compute the matrix dTuc as:
dTuc = imuTd.inverse * imuTdepth * depthTuc
double timestamp = 0.0;
TangoCoordinateFramePair pair;
TangoPoseData poseData = new TangoPoseData();
// Get the transformation of device frame with respect to IMU frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
Matrix4x4 imuTd = poseData.ToMatrix4x4();
// Get the transformation of IMU frame with respect to depth camera frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_CAMERA_DEPTH;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
Matrix4x4 imuTdepth = poseData.ToMatrix4x4();
// Get the transform of the Unity Camera frame with respect to the depth Camera frame.
Matrix4x4 depthTuc = new Matrix4x4();
depthTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
depthTuc.SetColumn(1, new Vector4(0.0f, -1.0f, 0.0f, 0.0f));
depthTuc.SetColumn(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
depthTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
m_dTuc = Matrix4x4.Inverse(imuTd) * imuTdepth * depthTuc;

How to position a textured quad in screen coordinates?

I am experimenting with different matrices, studying their effect on a textured quad. So far I have implemented scaling, rotation, and translation matrices fairly easily, by applying the following loop to my position vectors:
for (int a = 0; a < noOfVertices; a++)
{
    myVectorPositions[a] = SlimDX.Vector3.TransformCoordinate(myVectorPositions[a], myPerspectiveMatrix);
}
However, what I want is to be able to position my vectors using world-space coordinates, not object-space ones.
At the moment my position vectors are declared thusly:
myVectorPositions[0] = new Vector3(-0.1f, 0.1f, 0.5f);
myVectorPositions[1] = new Vector3(0.1f, 0.1f, 0.5f);
myVectorPositions[2] = new Vector3(-0.1f, -0.1f, 0.5f);
myVectorPositions[3] = new Vector3(0.1f, -0.1f, 0.5f);
On the other hand (and as part of learning about matrices), I have read that I need to apply a matrix to get to screen coordinates. I've been looking through the SlimDX API docs and can't pin down which one I should be using.
In any case, hopefully the above makes sense and what I am trying to achieve is clear. I'm aiming for a simple 1024 x 768 window as my application area and want to position my textured quad at (10, 10). How do I go about this? I'm most confused right now.
I am not familiar with SlimDX, but in native DirectX, if you want to draw a quad in screen coordinates, you should declare the vertex format as transformed (XYZRHW); that is, you specify the screen coordinates directly instead of using the D3D transform engine to transform your vertices. The vertex format definition is:
#define SCREEN_SPACE_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
and you can define your vertices like this (with the matching vertex struct added so the snippet is complete):
struct ScreenVertex
{
    float x, y, z, rhw;
    DWORD color;
};
ScreenVertex Vertices[] =
{
    // Triangle 1
    { 150.0f, 150.0f, 0.0f, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
    { 350.0f, 150.0f, 0.0f, 1.0f, 0xff00ff00, },
    { 350.0f, 350.0f, 0.0f, 1.0f, 0xff00ffff, },
    // Triangle 2
    { 150.0f, 150.0f, 0.0f, 1.0f, 0xffff0000, },
    { 350.0f, 350.0f, 0.0f, 1.0f, 0xff00ffff, },
    { 150.0f, 350.0f, 0.0f, 1.0f, 0xff00ffff, },
};
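To actually draw these (a sketch, assuming a valid IDirect3DDevice9 *device and that lighting is disabled so the diffuse color shows):

device->SetFVF(SCREEN_SPACE_FVF);
device->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 2, Vertices, sizeof(ScreenVertex));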
By default, post-projection (normalized device) coordinates run from -1 to 1, where (-1, -1) is the bottom-left corner and (1, 1) the top-right.
To position something by pixel, you need to map pixel values into this space. For example, pixel (10, 30) on a 1024 x 768 screen is:
position.x = 10.0f * (1.0f / 1024.0f); // maps to 0/1
position.x *= 2.0f; //maps to 0/2
position.x -= 1.0f; // Maps to -1/1
Now for y you do
position.y = 30.0f * (1.0f / 768.0f); // maps to 0/1
position.y = 1.0f - position.y; //Inverts y
position.y *= 2.0f; //maps to 0/2
position.y -= 1.0f; // Maps to -1/1
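Wrapped up as small helpers (the function names are mine), the quad's top-left corner at pixel (10, 10) in a 1024 x 768 window comes out near (-0.98, 0.97):

// hypothetical helpers: map pixel coordinates to normalized device coordinates
float PixelToNdcX(float px, float screenWidth)  { return px / screenWidth * 2.0f - 1.0f; }
float PixelToNdcY(float py, float screenHeight) { return 1.0f - py / screenHeight * 2.0f; }

// PixelToNdcX(10.0f, 1024.0f) == -0.9805f (approx.)
// PixelToNdcY(10.0f,  768.0f) ==  0.9740f (approx.)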
Also, if you want to apply transforms to your quads, it is better to send the transformation to the shader (and do the vector transformation in the vertex shader) rather than doing the multiplications on the vertices on the CPU, since then you will not need to update your vertex buffer every time.

XMVector3Project giving offset values when object is moved in world space

I am trying to convert world space coordinates to screen space coordinates for a 2D game so I can work on collisions with the edges of the window.
The code I am using to convert world-space coordinates to screen space is below:
float centerX = boundingBox.m_center.x;
XMVECTORF32 centerVector = { centerX, 0.0f, 0.0f, 0.0f };
Windows::Foundation::Size screenSize = m_deviceResources->GetLogicalSize();
//the matrices are passed in transposed, so reverse it
DirectX::XMMATRIX projection = XMMatrixTranspose(XMLoadFloat4x4(&m_projectionMatrix));
DirectX::XMMATRIX view = XMMatrixTranspose(XMLoadFloat4x4(&m_viewMatrix));
worldMatrix = XMMatrixTranspose(worldMatrix);
XMVECTOR centerProjection = XMVector3Project(centerVector, 0.0f, 0.0f, screenSize.Width, screenSize.Height, 0.0f, 1.0f, projection, view, worldMatrix);
float centerScreenSpace = XMVectorGetX(centerProjection);
std::stringstream ss;
ss << "CENTER SCREEN SPACE X: " << centerScreenSpace << std::endl;
OutputDebugStringA(ss.str().c_str());
I have a window of width 1262. The object is positioned at (0.0, 0.0, 0.0), and the screen-space X coordinate computed for this position is 631, which is correct. However, the problem occurs when I move the object towards the edges of the screen.
When I move the object to the left, the screen-space X coordinate computed for its position is 0.107788, when realistically it should be well above 0, since the centre point is still on the screen and nowhere near the edge. The same happens when moving the object to the right.
The screen size in pixels is correct, but the projection behaves as if the screen were cut off prematurely, as though the window edges were inset from where they really are (the image illustrating this with red dotted lines is omitted here). I can fix this by adding an extra offset, but I don't believe that is the correct fix and would like to know where I'm going wrong.
Does anyone know why the coordinates are incorrect?
Edit
The world matrix is calculated for each object (a black rectangle) in the update step using:
XMMATRIX world = XMMatrixIdentity();
XMMATRIX scaleMatrix = XMMatrixScaling(m_scale.x, m_scale.y, m_scale.z);
XMMATRIX rotationMatrix = XMMatrixRotationY(0.0f);
XMMATRIX translationMatrix = XMMatrixTranslation(m_position.x, m_position.y, m_position.z);
world = XMMatrixTranspose(scaleMatrix * rotationMatrix * translationMatrix);
//update the object's world with new data
m_sprite->SetWorld(world);
I have a feeling this could be something to do with the projection matrix, as I'm currently using a perspective projection for 2D rendering. I should probably be using orthographic, but surely this isn't the problem?
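One likely cause, offered as an assumption since the rest of the code isn't shown: if boundingBox.m_center is already updated to the object's world-space position each frame, then passing the world matrix to XMVector3Project applies the translation a second time, so the computed screen coordinate drifts by twice the actual movement, which matches the premature cut-off described above. A sketch of the fix under that assumption:

// if centerVector already holds a world-space position, project with an
// identity world matrix so the translation is not applied twice
XMVECTOR centerProjection = XMVector3Project(
    centerVector, 0.0f, 0.0f, screenSize.Width, screenSize.Height,
    0.0f, 1.0f, projection, view, XMMatrixIdentity());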
