Convert Euler Angles from World Axes to Local - algorithm

I am trying to create a VR application using the iPhone's motion manager object. By VR I mean an app that shows places on the camera view.
I can successfully visualize the iPhone's orientation using yaw, pitch and roll with a Z -> X -> Y rotation order.
Here is a picture of what I have done so far:
I can rotate the device and it rotates correctly in the Windows app I created for monitoring.
This works, and I will show the code for it below. But it is not what I want to do.
What I want to do is actually the opposite. I don't want the device to move; I want the world around it to rotate. So if the user points the device's camera to the east, they should see the "east sign", and if they change direction to the north, they should see the "north sign" on the screen. And accurately, not like other applications out there that simply remove the roll movement.
The problem is that when I move the device from lying on the table to portrait mode, rotating it right and left results in incorrect rotation. And if I put it in landscape mode, top-to-bottom rotation is incorrect. In other words, the rotations are based on world axes, and they only work correctly when the device is lying flat, because that is the reference frame, I think.
What I want to ask here is how to convert these angles so that I see the result I expect. There should be a way to do this based on trigonometry.
These are the functions I use to calculate the rotation matrix:
private static Matrix4 CreateRotationMatrix(char axis, float radians, bool rightHanded = true)
{
    float c = (float)Math.Cos(radians);
    float s = (float)Math.Sin(radians) * (rightHanded ? 1 : -1);
    switch (axis)
    {
        case 'X':
            return new Matrix4(
                new Vector4(1.0f, 0.0f, 0.0f, 0.0f),
                new Vector4(0.0f, c, -s, 0.0f),
                new Vector4(0.0f, s, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Y':
            return new Matrix4(
                new Vector4(c, 0.0f, s, 0.0f),
                new Vector4(0.0f, 1.0f, 0.0f, 0.0f),
                new Vector4(-s, 0.0f, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Z':
            return new Matrix4(
                new Vector4(c, -s, 0.0f, 0.0f),
                new Vector4(s, c, 0.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 1.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        default:
            return Matrix4.Identity;
    }
}
public static Matrix4 MatrixFromEulerAngles(
    Vector3 euler,
    string order,
    bool isRightHanded = true,
    bool isIntrinsic = true)
{
    if (order.Length != 3) throw new ArgumentOutOfRangeException("order", "String must have exactly 3 characters");
    // X = Pitch
    // Y = Yaw
    // Z = Roll
    // Intrinsic: rotations about the body's own axes (composed right to left).
    // Extrinsic: rotations about the fixed world axes (composed left to right).
    return isIntrinsic
        ? CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
        : CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded);
}
private static float GetEulerAngle(char angle, Vector3 euler)
{
    switch (angle)
    {
        case 'X':
            return euler.X;
        case 'Y':
            return euler.Y;
        case 'Z':
            return euler.Z;
        default:
            return 0f;
    }
}
And this is how I apply the matrix to OpenGL:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", true, true);
GL.LoadMatrix(ref projectionMatrix);

OK, inverting was part of the answer. It helped a lot. Thanks @dari and @Spektre for the suggestion.
But the complete answer was: change the rotation direction (right-handed -> left-handed), change the rotation order (YXZ -> ZXY, in other words intrinsic to extrinsic), and invert the resulting matrix. Inverting makes sense because the view matrix is the inverse of the device's orientation: to keep the world fixed while the device moves, the scene must rotate the opposite way.
Before asking, I had tried each of these three alone, but never thought of using them all together.
So:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", false, false);
projectionMatrix.Invert();
GL.LoadMatrix(ref projectionMatrix);
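For reference, here is the same idea as a minimal C++/GLM sketch (GLM and the names here are my own illustration; OpenTK uses row vectors while GLM uses column vectors, so treat this as the idea rather than a drop-in equivalent): compose the rotations with negated angles (a left-handed rotation is a right-handed rotation by the negated angle) and invert the result to get the view matrix.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a view matrix from device pitch/yaw/roll (radians).
// glm::rotate post-multiplies, so chaining Y, X, Z yields Ry * Rx * Rz.
glm::mat4 ViewFromEuler(float pitch, float yaw, float roll)
{
    glm::mat4 r(1.0f);
    r = glm::rotate(r, -yaw,   glm::vec3(0.0f, 1.0f, 0.0f)); // Y, negated angle
    r = glm::rotate(r, -pitch, glm::vec3(1.0f, 0.0f, 0.0f)); // X, negated angle
    r = glm::rotate(r, -roll,  glm::vec3(0.0f, 0.0f, 1.0f)); // Z, negated angle
    return glm::inverse(r); // the world rotates opposite to the device
}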

Related

glm::ortho in Vulkan

I have a simple triangle with vertices:
{ { 0.0f, -0.1f } },
{ { 0.1f, 0.1f } },
{ { -0.1f, 0.1f } }
Matrices:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float)swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
This code works fine; I see the triangle. But if I try to use an orthographic projection:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::ortho(0.0F, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.0F);
ubo.proj[1][1] *= -1;
I do not see anything. :(
I tried googling this problem and found no solution. What's my mistake?
Update:
Solved:
rasterizer.frontFace = VK_FRONT_FACE_CLOCKWISE;
...
ubo.proj = glm::ortho(0.0F, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.0F, 0.1F, 1000.0F);
First, I don't know what Z range this overload of glm::ortho produces. Maybe your vertices don't fit in that range. There is an overload of this function which allows you to provide a Z/depth range. Try providing a range that covers your Z/depth values, or try moving the vertices further from or closer to the camera, or provide a generous range like -1000 to +1000.
And another problem: how big is your swapchain? If it is, for example, 800 x 600 pixels, then you are specifying a rendering area from [0, 0] to [800, 600] (in pixels), but you provide vertices that lie in an area smaller than a single pixel, from [-0.1, -0.1] to [0.1, 0.1] (still in pixels). It's no surprise you don't see anything, because your whole triangle is smaller than a single pixel.
Probably these two problems together caused you to see nothing: when you fix the depth, you still see nothing because the triangle is too small; when you change the size of the triangle (without changing depth), the object is view-frustum culled. Change the size of your triangle and then try changing the depth values of the vertices.
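As a minimal sketch of both fixes (the 800 x 600 extent and the vertex positions are assumptions for illustration): use the six-parameter glm::ortho overload so the depth range is explicit, and size the triangle in pixels to match the pixel-space projection.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Pixel-space orthographic projection with an explicit depth range.
glm::mat4 proj = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, -1000.0f, 1000.0f);

// Triangle sized in pixels (about 200 px across) instead of fractions of a pixel.
glm::vec2 vertices[] = {
    { 400.0f, 100.0f },
    { 500.0f, 300.0f },
    { 300.0f, 300.0f },
};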

Point cloud rendered only partially

I only get a partial point cloud of the room; other parts of the room do not get rendered at all. It only sees a part to the left. I am using the Point Cloud prefab in Unity. When I use one of the apps, such as Room Scanner or Explorer, I get the rest of the room. I intend to modify the prefab for my application, but so far I get that limited view. I am using Unity 5.3.3 on 64-bit Windows 10.
Set the Unity camera aligned with the depth camera frame, i.e. compute the matrix dTuc as:
dTuc = imuTd.inverse * imuTdepth * depthTuc
double timestamp = 0.0;
TangoCoordinateFramePair pair;
TangoPoseData poseData = new TangoPoseData();
// Get the transformation of device frame with respect to IMU frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
Matrix4x4 imuTd = poseData.ToMatrix4x4();
// Get the transformation of IMU frame with respect to depth camera frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_CAMERA_DEPTH;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
Matrix4x4 imuTdepth = poseData.ToMatrix4x4();
// Get the transform of the Unity Camera frame with respect to the depth Camera frame.
Matrix4x4 depthTuc = new Matrix4x4();
depthTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
depthTuc.SetColumn(1, new Vector4(0.0f, -1.0f, 0.0f, 0.0f));
depthTuc.SetColumn(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
depthTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
m_dTuc = Matrix4x4.Inverse(imuTd) * imuTdepth * depthTuc;

XMVector3Project unexpected behaviour

I'm trying to figure out the world-space to screen-space transform. As I understand it, in D3D11 the function XMVector3Project should handle this. However, when I use it like this:
XMVECTOR eye = XMVectorSet(10000.0f, 0.0f, 1.5f, 0.0f);
XMVECTOR at = XMVectorSet(10000.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
auto viewMatrix = XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up));
XMVECTOR vec = XMVector3Project(XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f), 0, 0, 480, 800, 0, 1, XMMatrixIdentity(), viewMatrix, XMMatrixIdentity());
it returns the point (240, 480). I don't understand how that's possible, because even with no projection matrix, when the view matrix is set to look at (10000, 0, x), the point (0, 0, 0) shouldn't show on screen at all.
That's just my understanding, probably wrong, so I would like to know: is this the intended behaviour?
I think the problem here is your use of XMMatrixTranspose. DirectXMath (aka XNAMath version 3, aka xboxmath) functions are all written assuming you have row-major matrices, either left-handed or right-handed. By applying XMMatrixTranspose to the look-at matrix, you are making it column-major. While this is commonly done as a last step before setting a matrix into a constant buffer for consumption by HLSL (see the MSDN DirectXMath Programmer's Guide and the MSDN HLSL docs for details), the result doesn't make sense when used with XMVector3Project.
BTW, I'm assuming your use of XMVectorSet here is just for testing, but the efficient way to code a constant XMVECTOR is using XMVECTORF32.
static const XMVECTORF32 eye = { 10000, 0.0f, 1.5f, 0.0f };
static const XMVECTORF32 at = { 10000, 0.0f, 0.0f, 0.0f };
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
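Putting the transpose fix together, here is a minimal sketch (the viewport size is taken from the question; the function name is my own): pass the look-at matrix to XMVector3Project untransposed, and transpose only when uploading a matrix to an HLSL constant buffer.

#include <DirectXMath.h>
using namespace DirectX;

// Project the world origin into a 480 x 800 viewport.
XMVECTOR ProjectOrigin()
{
    static const XMVECTORF32 eye = { 10000.0f, 0.0f, 1.5f, 0.0f };
    static const XMVECTORF32 at  = { 10000.0f, 0.0f, 0.0f, 0.0f };
    static const XMVECTORF32 up  = { 0.0f, 1.0f, 0.0f, 0.0f };

    // Row-major view matrix, as every DirectXMath function expects.
    XMMATRIX view = XMMatrixLookAtRH(eye, at, up);

    return XMVector3Project(
        XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f), // world-space point
        0.0f, 0.0f, 480.0f, 800.0f,          // viewport x, y, width, height
        0.0f, 1.0f,                          // min/max depth
        XMMatrixIdentity(),                  // projection
        view,                                // view
        XMMatrixIdentity());                 // world
}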

How to position a textured quad in screen coordinates?

I am experimenting with different matrices, studying their effect on a textured quad. So far I have implemented scaling, rotation, and translation matrices fairly easily, by using the following method on my position vectors:
for (int a = 0; a < noOfVertices; a++)
{
    myVectorPositions[a] = SlimDX.Vector3.TransformCoordinate(myVectorPositions[a], myPerspectiveMatrix);
}
However, what I want to do is position my vectors using world-space coordinates, not object-space coordinates.
At the moment my position vectors are declared as follows:
myVectorPositions[0] = new Vector3(-0.1f, 0.1f, 0.5f);
myVectorPositions[1] = new Vector3(0.1f, 0.1f, 0.5f);
myVectorPositions[2] = new Vector3(-0.1f, -0.1f, 0.5f);
myVectorPositions[3] = new Vector3(0.1f, -0.1f, 0.5f);
On the other hand (and as part of learning about matrices) I have read that I need to apply a matrix to get to screen coordinates. I've been looking through the SlimDX API docs and can't seem to pin down the one I should be using.
In any case, hopefully the above makes sense and what I am trying to achieve is clear. I'm aiming for a simple 1024 x 768 window as my application area, and want to position my textured quad at (10, 10). How do I go about this? Most confused right now.
I am not familiar with SlimDX, but in native DirectX, if you want to draw a quad in screen coordinates, you should define the vertex format as transformed (XYZRHW); that is, you specify the screen coordinates directly instead of using the D3D transform engine to transform your vertices. The vertex format definition is as below:
#define SCREEN_SPACE_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
and you can define your vertices like this:
ScreenVertex Vertices[] =
{
    // Triangle 1
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
    { 350.0f, 150.0f, 0, 1.0f, 0xff00ff00, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
    // Triangle 2
    { 150.0f, 150.0f, 0, 1.0f, 0xffff0000, },
    { 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
    { 150.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
};
By default, screen space in 3D systems goes from -1 to 1 (where (-1, -1) is the bottom-left corner and (1, 1) the top-right).
To position things in pixels, you need to convert pixel values into this space. So, for example, pixel (10, 30) on a 1024 x 768 screen is:
position.x = 10.0f * (1.0f / 1024.0f); // maps to 0/1
position.x *= 2.0f; //maps to 0/2
position.x -= 1.0f; // Maps to -1/1
Now for y you do
position.y = 30.0f * (1.0f / 768.0f); // maps to 0/1
position.y = 1.0f - position.y; //Inverts y
position.y *= 2.0f; //maps to 0/2
position.y -= 1.0f; // Maps to -1/1
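The same conversion can be wrapped in a small reusable helper; this is a minimal sketch, and the NDC / PixelToNdc names are my own:

struct NDC { float x, y; };

NDC PixelToNdc(float px, float py, float screenW, float screenH)
{
    NDC out;
    out.x = px / screenW * 2.0f - 1.0f;          // [0, W] -> [-1, 1]
    out.y = (1.0f - py / screenH) * 2.0f - 1.0f; // [0, H] -> [-1, 1], y flipped
    return out;
}

// Example: pixel (10, 30) on a 1024 x 768 screen.
// NDC p = PixelToNdc(10.0f, 30.0f, 1024.0f, 768.0f);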
Also, if you want to apply transforms to your quads, it is better to send the transformation to the shader (and do the vector transformation in the vertex shader) rather than doing the multiplications on the vertices on the CPU, since then you will not need to update your vertex buffer every time.

OpenGL ES glRotatef performing shear instead of rotate?

I am able to draw a sprite on the screen of an iPhone, but when I try to rotate it I get some weird results. It seems to stretch the sprite in the y direction more the closer the sprite gets to pointing along the y-axis (90 and 270 degrees). It displays correctly when pointing along the x and -x axes (0 and 180 degrees). It is basically as if it were shearing instead of rotating. Here are the essentials of the code (the projection matrix is ortho):
glPushMatrix();
glLoadIdentity();
glTranslatef( position.x, position.y, -1.0f );
glRotatef( rotation, 0.0f, 0.0f, 1.0f );
glScalef( halfSize.x, halfSize.y, 1.0f );
vertices[0] = 1.0f;
vertices[1] = 1.0f;
vertices[2] = 0.0f;
vertices[3] = 1.0f;
vertices[4] = -1.0f;
vertices[5] = 0.0f;
vertices[6] = -1.0f;
vertices[7] = 1.0f;
vertices[8] = 0.0f;
vertices[9] = -1.0f;
vertices[10] = -1.0f;
vertices[11] = 0.0f;
glVertexPointer( 3, GL_FLOAT, 0, vertices );
glDrawArrays( GL_TRIANGLE_STRIP, 0, 4 );
glPopMatrix();
Can anybody explain to me how to fix this please?
halfSize is just half the x and y extent of the sprite; removing the glScalef call does not make any difference.
Here is my matrix setup:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 320, 480, 0, 0.01, 5);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
OK, hopefully this screenshot will demonstrate what's happening:
If you are scaling by the same amount in the x and y directions, then your projection is causing the distortion.
Just a hunch, but maybe try swapping the 320 and 480 in your ortho projection, in case the X and Y on the iPhone are swapped:
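As a minimal sketch of that hunch (everything else in the matrix setup stays as in the question):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 480, 320, 0, 0.01, 5); // 320 and 480 swapped relative to the original
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();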
