Point cloud rendered only partially - google-project-tango

I only get a partial point cloud of the room; other parts of the room do not get rendered at all. It only sees a part to the left. I am using the Point Cloud prefab in Unity. When I use one of the apps, such as Room Scanner or Explorer, I get the rest of the room. I intend to modify the prefab for my application, but so far I get that limited view. I am using Unity 5.3.3 on Windows 10 on a 64-bit machine.

Set the Unity camera so that it is aligned with the depth camera frame. So, for the matrix dTuc:
dTuc = imuTd.inverse * imuTdepth * depthTuc
double timestamp = 0.0;
TangoCoordinateFramePair pair;
TangoPoseData poseData = new TangoPoseData();
// Get the transformation of device frame with respect to IMU frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
Matrix4x4 imuTd = poseData.ToMatrix4x4();
// Get the transformation of the depth camera frame with respect to the IMU frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_CAMERA_DEPTH;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
Matrix4x4 imuTdepth = poseData.ToMatrix4x4();
// Get the transform of the Unity Camera frame with respect to the depth Camera frame.
Matrix4x4 depthTuc = new Matrix4x4();
depthTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
depthTuc.SetColumn(1, new Vector4(0.0f, -1.0f, 0.0f, 0.0f));
depthTuc.SetColumn(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
depthTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
m_dTuc = Matrix4x4.Inverse(imuTd) * imuTdepth * depthTuc;
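Note that imuTd and imuTdepth are fixed device extrinsics (which is presumably why a timestamp of 0.0 is used for the queries), so m_dTuc only needs to be computed once, for example right after the Tango service connects, and can then be reused for every depth frame.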

Related

Does the Windows Composition API support 2.5D projected rotation?

I have started to use the Windows Composition API in UWP applications to animate elements of the UI.
Visual elements expose RotationAngleInDegrees and RotationAngle properties as well as a RotationAxis property.
When I animate a rectangular object's RotationAngleInDegrees value around the Y axis, the rectangle rotates as I would expect, but in the 2D application window it does not appear to be displayed with a 2.5D projection.
Is there a way to get a 2.5D projection effect on rotations with the Composition API?
It depends on the effect you want to have. There is a Fluent Design App sample on GitHub, and the demo can also be downloaded from the Store. You can get some ideas from its depth samples: for example, "flip to reveal" shows a way to rotate an image card, and its source code is part of that sample. For more details please check the sample and the demo.
In general, the animation rotates the visual around the X axis:
rectanglevisual.RotationAxis = new System.Numerics.Vector3(1f, 0f, 0f);
Then use a rotation animation that drives RotationAngleInDegrees.
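For example, a minimal sketch of such an animation (assuming rectanglevisual was obtained via ElementCompositionPreview.GetElementVisual, and picking an arbitrary two-second full turn):
var compositor = rectanglevisual.Compositor;
var rotation = compositor.CreateScalarKeyFrameAnimation();
rotation.InsertKeyFrame(1.0f, 360.0f); // at the end of the animation: one full turn
rotation.Duration = TimeSpan.FromSeconds(2);
rectanglevisual.StartAnimation("RotationAngleInDegrees", rotation);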
It is also possible to do this directly on the XAML side by using a PlaneProjection on the Image control.
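Roughly like this (a sketch only; myImage is a hypothetical Image element, and the angle could just as well be driven by a Storyboard animation):
// Hypothetical element name; PlaneProjection gives the classic 2.5D tilt around the Y axis.
myImage.Projection = new PlaneProjection { RotationY = 45.0 };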
As the sample that @BarryWang pointed me to demonstrates, it is necessary to apply a TransformMatrix to the page (or a parent container) before executing the animation to get the 2.5D effect with rotation or other spatial transformation animations with the Composition API.
private void UpdatePerspective()
{
    Visual visual = ElementCompositionPreview.GetElementVisual(MainPanel);

    // Get the size of the area we are enabling perspective for
    Vector2 sizeList = new Vector2((float)MainPanel.ActualWidth, (float)MainPanel.ActualHeight);

    // Setup the perspective transform.
    Matrix4x4 perspective = new Matrix4x4(
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, -1.0f / sizeList.X,
        0.0f, 0.0f, 0.0f, 1.0f);

    // Set the parent transform to apply perspective to all children
    visual.TransformMatrix =
        Matrix4x4.CreateTranslation(-sizeList.X / 2, -sizeList.Y / 2, 0f) * // Translate to origin
        perspective *                                                       // Apply perspective at origin
        Matrix4x4.CreateTranslation(sizeList.X / 2, sizeList.Y / 2, 0f);    // Translate back to original position
}
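One usage note (my assumption, not stated in the sample): because the method reads MainPanel.ActualWidth and ActualHeight, it has to run after layout, e.g. from the panel's Loaded or SizeChanged handler; before layout both values are still 0 and the -1.0f / sizeList.X term degenerates.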

OpenGL Matrix scale then Translate is still scaling my position

I am trying to position my text model mesh on screen. Using the code below, it draws the mesh as the code suggests, with the left of the mesh at the center of the screen. But I would like to position it at the left edge of the screen, and this is where I get stuck. If I un-comment the Matrix.translateM line, I would expect the position to now be at the left of the screen, but it seems that the position is being scaled (!?)
A few scenarios I have tried:
a.) Matrix.scaleM only (no Matrix.translateM) = the left of the mesh is positioned at 0.0f (center of the screen), with the correct scale.
b.) Matrix.translateM only (no Matrix.scaleM) = the left of the mesh is positioned at -1.77f (the left of the screen) correctly, but the scale is incorrect.
c.) Matrix.translateM then Matrix.scaleM, or Matrix.scaleM then Matrix.translateM = the scale is correct, but the position is incorrect. It seems the position is scaled and ends up much closer to the center than to the left of the screen.
I am using OpenGL ES 2.0 in Android Studio programming in Java.
Screen bounds (as set up by Matrix.orthoM):
left: -1.77, right: 1.77 (center is 0.0), bottom: -1.0, top: 1.0 (center is 0.0)
The mesh height is 1.0f, so with no Matrix.scaleM the mesh takes up the entire screen height.
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f; // 64px height to projection matrix
Matrix.setIdentityM(modelMatrix, 0);
Matrix.scaleM(modelMatrix, 0, scale, scale, scale); // these two lines
//Matrix.translateM(modelMatrix, 0, -ratio, 0.0f, 0.0f); // these two lines
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mMVPMatrix, 0, modelMatrix, 0);
Thanks, Ed Halferty and Matic Oblak, you are both correct. As Matic suggested, I have now put Matrix.translateM first and Matrix.scaleM second. I have also ensured that the MVP matrix is indeed model-view-projection, and not projection-view-model.
Also, with Matrix.translateM now moving the model mesh to -1.0f, it is at the left edge of the screen, which is better than -1.77f in any case.
Correct position + scale, thanks!
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f;
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, -1.0f, 0.0f, 0.0f);
Matrix.scaleM(modelMatrix, 0, scale, scale, scale);
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, modelMatrix, 0, mMVPMatrix, 0);
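For anyone wondering why the order matters: as I understand Android's Matrix helpers, both Matrix.scaleM and Matrix.translateM post-multiply the matrix they are given. Scaling first therefore builds model = S * T, which maps a vertex v to S * (v + t), so the translation itself gets multiplied by 64/1080 and the mesh barely moves away from the center. Translating first builds model = T * S, which maps v to S * v + t: the mesh is scaled about its own origin and then moved by the full, unscaled offset.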

Convert Euler Angles from World Axes to Local

I am trying to create a VR application using the iPhone's motion manager object. By VR I mean an app that shows places on the camera view.
I can successfully visualize the iPhone's orientation using yaw, pitch and roll with a Z -> X -> Y rotation order.
Here is a picture of what I have done so far:
So I can rotate the device and it rotates correctly in the Windows app that I created for monitoring.
This is correct, and I will show the code for it later. But it is not what I want to do.
What I want to do is actually the opposite of this. I don't want the device to move; I want the world around it to rotate. So if the user points the device's camera to the east, he should see the "east sign", and if he changes direction to the north, he should see the "north sign" on the screen. And accurately, not like other applications out there that simply drop the roll movement.
The problem is that when I move the device from lying on the table to portrait mode, rotating it right and left results in incorrect rotation. And if I put it in landscape mode, then top-to-bottom rotation is incorrect. In other words, the rotations are based on the world axes, and they only work correctly when the device is lying on the ground, because that is the reference frame, I think.
What I want to ask is how to convert these angles so that I see the result I expect. There should be a way to do this based on trigonometry.
These are the functions I use to calculate the rotation matrix:
private static Matrix4 CreateRotationMatrix(char axis, float radians, bool rightHanded = true)
{
    float c = (float)Math.Cos(radians);
    float s = (float)Math.Sin(radians) * (rightHanded ? 1 : -1);
    switch (axis)
    {
        case 'X':
            return new Matrix4(
                new Vector4(1.0f, 0.0f, 0.0f, 0.0f),
                new Vector4(0.0f, c, -s, 0.0f),
                new Vector4(0.0f, s, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Y':
            return new Matrix4(
                new Vector4(c, 0.0f, s, 0.0f),
                new Vector4(0.0f, 1.0f, 0.0f, 0.0f),
                new Vector4(-s, 0.0f, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Z':
            return new Matrix4(
                new Vector4(c, -s, 0.0f, 0.0f),
                new Vector4(s, c, 0.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 1.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        default:
            return Matrix4.Identity;
    }
}
public static Matrix4 MatrixFromEulerAngles(
    Vector3 euler,
    string order,
    bool isRightHanded = true,
    bool isIntrinsic = true)
{
    if (order.Length != 3) throw new ArgumentOutOfRangeException("order", "String must have exactly 3 characters");
    // X = Pitch
    // Y = Yaw
    // Z = Roll
    return isIntrinsic
        ? CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
        : CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded);
}
private static float GetEulerAngle(char angle, Vector3 euler)
{
    switch (angle)
    {
        case 'X':
            return euler.X;
        case 'Y':
            return euler.Y;
        case 'Z':
            return euler.Z;
        default:
            return 0f;
    }
}
And this is how I apply the matrix to OpenGL:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", true, true);
GL.LoadMatrix(ref projectionMatrix);
OK, inverting was part of the answer. It helped a lot. Thanks @dari and @Spektre for the suggestions.
But the complete answer was to change the rotation direction (right-handed -> left-handed), change the rotation order (YXZ -> ZXY, in other words intrinsic to extrinsic), and invert the resulting matrix.
Before asking, I had tried each of these three on its own, but never thought of using them all together.
So:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", false, false);
projectionMatrix.Invert();
GL.LoadMatrix(ref projectionMatrix);
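In hindsight the inversion makes sense: the matrix built from the device's yaw/pitch/roll describes how the device is oriented in the world, while the matrix loaded for rendering has to do the opposite job of bringing the world into the device's frame, and for a pure rotation that is exactly the inverse (equivalently, the transpose). Inverting a composed rotation also reverses the order of the individual axis rotations and flips the sense of each angle, which is presumably why the handedness and the intrinsic/extrinsic order had to change together with the final Invert().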

XMVector3Project unexpected behaviour

I'm trying to figure out the world-space to screen-space transform. As I understand it, in D3D11 the function XMVector3Project should handle this. However, when I use it like this:
XMVECTOR eye = XMVectorSet(10000, 0.0f, 1.5f, 0.0f);
XMVECTOR at = XMVectorSet(10000, 0.0f, 0.0f, 0.0f);
XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
auto viewMatrix = XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up));
XMVECTOR vec = XMVector3Project(XMVectorSet(0.0, 0.0, 0.0, 1.0f), 0, 0, 480, 800, 0, 1, XMMatrixIdentity(), viewMatrix, XMMatrixIdentity());
it returns the point (240, 480). I don't understand how that's possible, because even with no projection matrix, when I set the view matrix to show the point (1000, 1000, x), the point (0, 0, 0) shouldn't appear on screen at all.
That's just my understanding, probably wrong, so I would like to know: how is this the intended behaviour?
I think the problem here is your use of XMMatrixTranspose. DirectXMath (aka XNAMath version 3, aka xboxmath) functions are all written assuming you have row-major matrices, either left-handed or right-handed. By applying XMMatrixTranspose to the look-at matrix, you are making it column-major. While this is commonly done as a last step before setting a matrix into a constant buffer for consumption by HLSL (see the MSDN DirectXMath Programmer's Guide and the MSDN HLSL docs for details), the resulting matrix doesn't make sense to pass to XMVector3Project. Pass the un-transposed, row-major view matrix to XMVector3Project and only transpose when you copy matrices into the constant buffer.
BTW, I'm assuming your use of XMVectorSet here is just for testing, but the efficient way to code a constant XMVECTOR is using XMVECTORF32.
static const XMVECTORF32 eye = { 10000, 0.0f, 1.5f, 0.0f };
static const XMVECTORF32 at = { 10000, 0.0f, 0.0f, 0.0f };
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };

OpenGL ES render to texture appears upside down

When rendering to a texture through a framebuffer object, the texture appears upside down. Here are the vertices and texture coordinates.
By the way, rendering without creating a framebuffer renders the texture correctly.
private final float[] mVerticesData =
{
-1f, 1f, 0.0f, // Position 0
0.0f, 0.0f, // TexCoord 0
-1f, -1f, 0.0f, // Position 1
0.0f, 1.0f, // TexCoord 1
1f, -1f, 0.0f, // Position 2
1.0f, 1.0f, // TexCoord 2
1f, 1f, 0.0f, // Position 3
1.0f, 0.0f // TexCoord 3
};
Any help please ...
thanks
When uploading 2D texture images to OpenGL, it expects the data to be specified from bottom to top, even though usually images are in memory from top to bottom. You seem to have inverted your texture coordinates to work around this problem.
You should instead flip the texture data before uploading it to OpenGL and keep your texture coordinates intact. If you do that, the same texture coordinates work for both image and FBO textures.
So the solution is to flip the bitmap before calling GLUtils.texImage2D (for example by redrawing it through an android.graphics.Matrix with a negative Y scale) and to write your vertices as:
private final float[] mVerticesData =
{
-1f, 1f, 0.0f, // Position 0
0.0f, 1.0f, // TexCoord 0
-1f, -1f, 0.0f, // Position 1
0.0f, 0.0f, // TexCoord 1
1f, -1f, 0.0f, // Position 2
1.0f, 0.0f, // TexCoord 2
1f, 1f, 0.0f, // Position 3
1.0f, 1.0f // TexCoord 3
};
Regarding "rendering without creating a framebuffer renders the texture correctly":
I think it actually doesn't. With all transformations set to identity and texture coordinates matching vertex coordinates, i.e. S = X and T = Y, OpenGL assumes the origin of texture data to be in the lower left (with the notable exception of cube maps, which are different beasts). Framebuffer color attachments, in your case your texture, follow that convention.
Your texture T coordinates are antiparallel to the Y vertex coordinates, which means that with an all-identity transformation setup you flip the image "upside down".
However, most image file formats assume the origin is in the upper left, and if you upload such data as-is to an OpenGL texture this adds another flip, so together with your texture coordinate flip the two cancel out.
So it's very likely that, in fact, your regular texture code path is the "flipped" one.
