Get transformation matrix from three known points

I have three 3D (x, y, z) points.
I get them by tracking the corners of an object with a Kinect.
I now want to translate and rotate a 3D model accordingly.
I get roll and pitch by doing this
(I am using openframeworks.cc, so some of the class methods might seem strange to people):
ofVec3f v10 = pointB - pointA;
ofVec3f v20 = pointC - pointA;
v10.normalize();
v20.normalize();
//create rotation matrix for roll+pitch relative to up vector 0,0,1
ofVec3f normaleVec = v10.crossed(v20);
ofVec3f fromVec = ofVec3f(0,0,1);
ofVec3f toVec = normaleVec;
mMR0.makeRotationMatrix(fromVec,toVec);
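For reference, the from-to rotation that makeRotationMatrix(fromVec, toVec) builds can be written out explicitly with Rodrigues' formula. A minimal numpy sketch of the same idea (not openFrameworks code; it breaks down when the two vectors are exactly opposite):
import numpy as np

def rotation_from_to(f, t):
    # Rotation matrix taking unit vector f onto unit vector t (Rodrigues' formula).
    f = f / np.linalg.norm(f)
    t = t / np.linalg.norm(t)
    v = np.cross(f, t)   # rotation axis, scaled by sin(angle)
    c = np.dot(f, t)     # cos(angle)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])  # cross-product matrix of v
    # R = I + K + K^2/(1 + c); singular when f = -t (c == -1)
    return np.eye(3) + K + K @ K / (1.0 + c)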
To get the heading/yaw I do this:
ofVec3f myV0_flat = avePointA*mMR0.getInverse();
ofVec3f myV1_flat = avePointB*mMR0.getInverse();
//get points relative to origin
ofVec3f myV10_flat = myV1_flat - myV0_flat;
//create rotation matrix for heading relative to flat 2d plane
float angle = atan2(myV10_flat.x,myV10_flat.y)/M_PI*180;
mMR1.makeRotationMatrix(angle,fromVec);
And finally I create the translation matrix and combine all the matrices:
mMT1.makeTranslationMatrix(avePointD); //translate from origin
ofMatrix4x4 mMc;
mMc = mMR0 * mMR1 * mMT1;
But when my 3D model rotates around, it always seems to dip at the same angle.
My question is: how would I calculate roll and pitch separately, so I can see where it dips and how to fix it?
Thanks,
s.
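For what the question asks (separate roll and pitch), one option is to read both angles directly off the plane normal. A minimal numpy sketch, assuming a z-up convention with pitch about the x axis and roll about the y axis (the axis assignment is an assumption, not from the original post):
import numpy as np

def roll_pitch_from_normal(n):
    # Decompose a plane normal into separate roll/pitch angles (z-up assumed).
    n = n / np.linalg.norm(n)
    pitch = np.degrees(np.arctan2(n[1], n[2]))                  # about x
    roll = np.degrees(np.arctan2(-n[0], np.hypot(n[1], n[2])))  # about y
    return roll, pitch

print(roll_pitch_from_normal(np.array([0.0, 0.0, 1.0])))  # (0.0, 0.0) for the up vector
Comparing the two angles frame by frame should show which one carries the constant dip.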

Related

Having a 3D point projected onto a 3D plane, find the 2D coord based on the plane's two axes

I have a THREE.Plane plane which is intersected by a number of THREE.Line3 lines[].
Using only this information, how can I acquire a 2D coordinate set of points?
Edit, for a better understanding of the problem:
The 2D coordinate is related to the plane, so imagine the 3D plane becomes a Cartesian plane drawn on a blackboard. It is pretty much a 3D drawing of a 2D plane. What I want to find is the X, Y values of points previously projected onto this Cartesian plane. But they are 3D, just like the 3D plane.
You don't have enough information. In this answer I'll explain why, and provide more information to achieve what you want, should you be able to provide the necessary information.
First, let's create a plane. Like you, I'm using Plane.setFromNormalAndCoplanarPoint. I'm considering the co-planar point as the origin (0, 0) of the plane's Cartesian space.
let normal = new Vector3(Math.random(), Math.random(), Math.random()).normalize()
let origin = new Vector3(Math.random(), Math.random(), Math.random()).normalize().setLength(10)
let plane = new Plane().setFromNormalAndCoplanarPoint(normal, origin)
Now, we create a random 3D point, and project it onto the plane.
let point1 = new Vector3(Math.random(), Math.random(), Math.random()).normalize()
let projectedPoint1 = new Vector3()
plane.projectPoint(point1, projectedPoint1)
The projectedPoint1 variable is now co-planar with your plane. But this plane is infinite, with no discrete X/Y axes. So currently we can only get the distance from the origin to the projected point.
let distance = origin.distanceTo(projectedPoint1)
In order to turn this into a Cartesian coordinate, you need to define at least one axis. To make this truly random, let's compute a random +Y axis:
let tempY = new Vector3(Math.random(), Math.random(), Math.random())
let pY = new Vector3()
plane.projectPoint(tempY, pY)
pY.sub(origin).normalize() // make it a direction from the plane's origin, not a point
Now that we have +Y, let's get +X:
let pX = new Vector3().crossVectors(pY, normal)
pX.normalize()
Now, we can project the plane-projected point onto the axis vectors to get the Cartesian coordinates.
let relative = projectedPoint1.clone().sub(origin)
let x = relative.clone().projectOnVector(pX).length()
if(!relative.clone().projectOnVector(pX).normalize().equals(pX)){
x = -x
}
let y = relative.clone().projectOnVector(pY).length()
if(!relative.clone().projectOnVector(pY).normalize().equals(pY)){
y = -y
}
Note that in order to get negative values, I check a normalized copy of the axis-projected vector against the normalized axis vector. If they match, the value is positive. If they don't match, the value is negative.
Also, all the clone-ing I did above was to be explicit with the steps. This is not an efficient way to perform this operation, but I'll leave optimization up to you.
EDIT: My logic for determining the sign of the value was flawed. I've corrected the logic to normalize the projected point and check against the normalized axis vector.
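As an alternative to the project-and-compare sign check, the plane-local coordinates fall out as signed values in one step if you take dot products against the axis vectors. A numpy sketch of the same construction (not three.js; names mirror the code above):
import numpy as np

normal = np.array([0.0, 0.0, 1.0])   # unit plane normal
origin = np.array([1.0, 2.0, 3.0])   # co-planar origin point

# In-plane +Y axis: project any non-parallel point onto the plane,
# then turn it into a direction from the plane's origin.
tempY = np.array([0.3, 0.7, 0.2])
pY = tempY - np.dot(tempY - origin, normal) * normal - origin
pY /= np.linalg.norm(pY)
pX = np.cross(pY, normal)            # in-plane +X axis

# Project a 3D point onto the plane, then read off signed coordinates.
point1 = np.array([2.0, 4.0, 9.0])
projected = point1 - np.dot(point1 - origin, normal) * normal
rel = projected - origin
x, y = np.dot(rel, pX), np.dot(rel, pY)
print(x, y)   # signed plane-local Cartesian coordinates
Because the dot product is signed, no equality test against a normalized vector is needed, which also avoids floating-point equality pitfalls.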

Rotating an image matrix around its center in MATLAB

Assume I have a 2-D matrix filled with values, which will represent a plane. Now I want to rotate the plane around itself in a 3-D way, in the "z-direction". For a better understanding, see the following image:
I wondered if this is possible with a simple affine transformation, so I created the following simple script:
%Create a random value matrix
A = rand*ones(200,200);
%Make a box in the image
A(50:200-50,50:200-50) = 1;
Now I can apply transformations in 2-D space simply with an affine matrix like this:
tform = affine2d([1 0 0; .5 1 0; 0 0 1]);
transformed = imwarp(A,tform);
However, this will not produce the desired output above, and I am not quite sure how to create the 2-D affine matrix to create such behavior.
I guess that a 3-D affine matrix can do the trick. However, if I define a 3-D affine matrix I cannot work with the 2-D representation of the matrix anymore, since MATLAB will throw the error:
The number of dimensions of the input image A must be 3 when the
specified geometric transformation is 3-D.
So how can I code the desired output with an affine matrix?
The answer from m3tho correctly addresses how you would apply the transformation you want: using fitgeotrans with a 'projective' transform, thus requiring that you specify 4 control points (i.e. 4 pairs of corresponding points in the input and output image). You can then apply this transform using imwarp.
The issue, then, is how you select these pairs of points to create your desired transformation, which in this case is to create a perspective projection. As shown below, a perspective projection takes into account that a viewing position (i.e. "camera") will have a given view angle defining a conic field of view. The scene is rendered by taking all 3-D points within this cone and projecting them onto the viewing plane, which is the plane located at the camera target which is perpendicular to the line joining the camera and its target.
Let's first assume that your image is lying in the viewing plane and that the corners are described by a normalized reference frame such that they span [-1 1] in each direction. We need to first select the degree of perspective we want by choosing a view angle and then computing the distance between the camera and the viewing plane. A view angle of around 45 degrees can mimic the sense of perspective of normal human sight, so using the corners of the viewing plane to define the edge of the conic field of view, we can compute the camera distance as follows:
camDist = sqrt(2)./tand(viewAngle./2);
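As a quick numeric check of that formula (a Python sketch; the value is language-independent), a 45-degree view angle puts the camera about 3.41 plane-units away:
import math

view_angle = 45.0  # degrees
cam_dist = math.sqrt(2) / math.tan(math.radians(view_angle / 2))
print(cam_dist)    # ~3.4142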
Now we can use this to generate a set of control points for the transformation. We first apply a 3-D rotation to the corner points of the viewing plane, rotating around the y axis by an amount theta. This rotates them out of plane, so we now project the corner points back onto the viewing plane by defining a line from the camera through each rotated corner point and finding the point where it intersects the plane. I'm going to spare you the mathematical derivations (you can implement them yourself from the formulas in the above links), but in this case everything simplifies down to the following set of calculations:
term1 = camDist.*cosd(theta);
term2 = camDist-sind(theta);
term3 = camDist+sind(theta);
outP = [-term1./term2 camDist./term2; ...
        term1./term3 camDist./term3; ...
        term1./term3 -camDist./term3; ...
        -term1./term2 -camDist./term2];
And outP now contains your normalized set of control points in the output image. Given an image of size s, we can create a set of input and output control points as follows:
scaledInP = [1 s(1); s(2) s(1); s(2) 1; 1 1];
scaledOutP = bsxfun(@times, outP+1, s([2 1])-1)./2+1;
And you can apply the transformation like so:
tform = fitgeotrans(scaledInP, scaledOutP, 'projective');
outputView = imref2d(s);
newImage = imwarp(oldImage, tform, 'OutputView', outputView);
The only issue you may come across is that a rotation of 90 degrees (i.e. looking end-on at the image plane) would create a set of collinear points that would cause fitgeotrans to error out. In such a case, you would technically just want a blank image, because you can't see a 2-D object when looking at it edge-on.
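Since the derivations are skipped above, here is a minimal numpy sketch that reproduces the outP formulas from first principles: rotate the four corners of the [-1, 1] viewing plane about the y axis, then intersect the ray from the camera through each rotated corner with the z = 0 plane (names mirror the MATLAB; this is a check, not part of the original answer):
import numpy as np

theta = 30.0       # rotation angle in degrees
view_angle = 45.0
cam_dist = np.sqrt(2) / np.tan(np.radians(view_angle / 2))

corners = [(-1, 1), (1, 1), (1, -1), (-1, -1)]
t = np.radians(theta)
out_p = []
for x, y in corners:
    xr, zr = x * np.cos(t), -x * np.sin(t)    # rotate about the y axis
    scale = cam_dist / (cam_dist - zr)        # project through (0, 0, cam_dist) onto z = 0
    out_p.append([xr * scale, y * scale])
print(np.array(out_p))  # matches the term1/term2/term3 expressions above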
Here's some code illustrating the above transformations by animating a spinning image:
img = imread('peppers.png');
s = size(img);
outputView = imref2d(s);
scaledInP = [1 s(1); s(2) s(1); s(2) 1; 1 1];
viewAngle = 45;
camDist = sqrt(2)./tand(viewAngle./2);
for theta = linspace(0, 360, 360)
    term1 = camDist.*cosd(theta);
    term2 = camDist-sind(theta);
    term3 = camDist+sind(theta);
    outP = [-term1./term2 camDist./term2; ...
            term1./term3 camDist./term3; ...
            term1./term3 -camDist./term3; ...
            -term1./term2 -camDist./term2];
    scaledOutP = bsxfun(@times, outP+1, s([2 1])-1)./2+1;
    tform = fitgeotrans(scaledInP, scaledOutP, 'projective');
    spinImage = imwarp(img, tform, 'OutputView', outputView);
    if (theta == 0)
        hImage = image(spinImage);
        set(gca, 'Visible', 'off');
    else
        set(hImage, 'CData', spinImage);
    end
    drawnow;
end
And here's the animation:
You can perform a projective transformation that can be estimated using the position of the corners in the first and second image.
originalP='peppers.png';
original = imread(originalP);
imshow(original);
s = size(original);
matchedPoints1 = [1 1;1 s(1);s(2) s(1);s(2) 1];
matchedPoints2 = [1 1;1 s(1);s(2) s(1)-100;s(2) 100];
transformType = 'projective';
tform = fitgeotrans(matchedPoints1,matchedPoints2,transformType);
outputView = imref2d(size(original));
Ir = imwarp(original,tform,'OutputView',outputView);
figure; imshow(Ir);
This is the result of the code above:
Original image:
Transformed image:

Camera Properties from Blender to generate Point Cloud

I used Blender to generate some color images and their corresponding depth maps, along with their camera properties (intrinsic and extrinsic).
Then I want to use this information to generate a 3D point cloud from these 2D images using 2D-to-3D projection techniques.
Here is the viewpoint of one of the cameras in Blender.
I wanted to have the rotation and translation matrix of the camera.
I used the code in this link camera matrix for Blender, written by @rfabbri, and used its "get_3x4_RT_matrix_from_blender" method to get the rotation matrix.
After that I want to do the 2D-to-3D projection with all of this information.
For 2D to 3D projection, I wrote the following code in Java:
static double[] projUVZtoXY(double u, double v, double d)
{
    // "u" and "v" are the pixel coordinates in the 2D image and
    // "d" is the depth of this pixel (distance of the point to the camera)
    double[] p = new double[]{u, v, 1};
    double[] translate = calibStruct.getM_Trans();    // translation, from T_world2cv in get_3x4_RT_matrix_from_blender
    double[] rotation = calibStruct.getM_RotMatrix(); // rotation, from R_world2cv in get_3x4_RT_matrix_from_blender
    double[] K = calibStruct.getM_K();                // intrinsics, from K in get_calibration_matrix_K_from_blender
    double[][] invertR = invert33(rotation);          // R^-1
    double[][] invertK = invert33(K);                 // K^-1
    double[][] invK_mul_depth = multiply33_scalar(invertK, d);     // d * K^-1
    double[] invK_mul_depth_p = multiply33_31(invK_mul_depth, p);  // (d * K^-1) * p
    // subtract the translation vector from invK_mul_depth_p
    double[] d_InvK_p_trans = new double[]{invK_mul_depth_p[0] - translate[0],
                                           invK_mul_depth_p[1] - translate[1],
                                           invK_mul_depth_p[2] - translate[2]};
    double[] xyz = multiply33_31(invertR, d_InvK_p_trans);
    return xyz;
}
All of the above code is trying to implement this 3D warping algorithm, to project a (u, v) pixel to an XYZ 3D point.
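For sanity-checking the matrices outside Java, the same back-projection fits in a few lines of numpy (a sketch; K, R and t are assumed to be the 3x3 intrinsics, 3x3 rotation and 3-vector translation exported from Blender):
import numpy as np

def uvz_to_xyz(u, v, d, K, R, t):
    # Back-project pixel (u, v) at depth d: X = R^-1 (d * K^-1 * p - t)
    p = np.array([u, v, 1.0])
    cam = d * (np.linalg.inv(K) @ p)   # point in camera coordinates
    return np.linalg.inv(R) @ (cam - t)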
But when I generate the 3D point cloud, it looks like this: (in Meshlab)
It is the point cloud of just one image, the following image:
I can't understand what happened here. And why, in the 3D point cloud, are all of the players in the image repeated in a line?
Could anyone guess what is happening?
I think maybe the rotation matrix that I get from Blender is not correct. What do you think?
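One quick way to test that suspicion: a valid rotation matrix is orthonormal with determinant +1. A minimal numpy check (a sketch, not from the original post):
import numpy as np

def is_rotation_matrix(R, tol=1e-6):
    # Orthonormal (R * R^T = I) and proper (det R = +1)?
    R = np.asarray(R, dtype=float)
    return (np.allclose(R @ R.T, np.eye(3), atol=tol)
            and abs(np.linalg.det(R) - 1.0) < tol)

print(is_rotation_matrix(np.eye(3)))  # True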
Thanks,
Mozhde

openGL reverse image texturing logic

I'm about to project an image into a cylindrical panorama. But first I need to get the pixel (or the color of the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
This way I'll be able to change the shape of the image from a polygon to whatever I want.
But I cannot find anything about this method (get the pixel first, then do the math to get its new position).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders — from geometry to pixels — rather than backwards — from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your geometry shader; it's a simple ray intersection test, with attributes therefore being only vertex location and vertex normal, and texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
(planarDistanceToPerimeter/planarLengthOfNormal)*normal;
// get intersection as if ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
(linearDistanceToEdge/linearLengthOfNormal)*normal;
// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
endIntersection,
step(linearDistanceToEdge,
planarDistanceToPerimeter));
// ... do something to map cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
coordinateFromCylindricalPosition(cylindricalIntersection);
A common implementation of coordinateFromCylindricalPosition might simply be return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959, cylindricalIntersection.z * 0.5);.
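To make that logic easy to verify on the CPU, here is the same intersect-and-unwrap in a numpy sketch (same conventions as the shader: unit cylinder |xy| = 1 with caps at z = ±1; coordinateFromCylindricalPosition stays the hypothetical mapping named above):
import numpy as np

def cylinder_texcoord(position, normal):
    # March from position along normal to the cylinder, then unwrap to (u, v).
    # Uses the same distance estimates as the shader above.
    side_dist = (1.0 - np.linalg.norm(position[:2])) / np.linalg.norm(normal[:2])
    cap_dist = (1.0 - abs(position[2])) / abs(normal[2])
    hit = position + min(side_dist, cap_dist) * normal
    u = np.arctan2(hit[1], hit[0]) / (2.0 * np.pi)  # angle around the axis, in [-0.5, 0.5]
    v = hit[2] * 0.5                                # height, matching the GLSL above
    return u, v

print(cylinder_texcoord(np.array([0.2, 0.0, 0.0]), np.array([0.8, 0.0, 0.6])))
# As in the shader, normals parallel to the axis or pointing straight at a cap need special handling.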

Openlayers-3 rotate Linestring geometry

Is there a way to rotate a generated linestring geometry around one of its points? I've built a line string that points north (only adding length to one coordinate), but I now need to rotate it to a given compass heading.
Geometry objects don't seem to have the ability to be rotated around a point (OL2 did?)
What can I do to rotate this geometry?
I eventually went with generating the geometry dynamically and doing the trigonometry myself.
Given the length of the current linestring geometry segment and the angle in radians, I worked out how to offset the coordinates when extending the LineGeometry to correctly angle the segments.
calculateCoordinateOffset = function(length, angle) {
    var _a = angle,
        _l = length,
        _x,
        _y;
    _x = _l * Math.sin(_a);
    _y = _l * Math.cos(_a);
    return [_x, _y];
};
Then I add X and Y to the coordinates of the last segment and append those coordinates to the linestring geometry (addCoordinates()).
Any feedback would be good. My maths is traditionally VERY bad.
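For the original question (rotating an existing geometry around one of its points), the underlying math is a plain 2D rotation of each coordinate about the anchor; recent OpenLayers releases also expose this as a rotate(angle, anchor) method on geometries. A minimal sketch with plain coordinate pairs (Python, not OpenLayers objects):
import math

def rotate_around(points, anchor, angle):
    # Rotate coordinate pairs about an anchor point by angle (radians, counterclockwise).
    ax, ay = anchor
    c, s = math.cos(angle), math.sin(angle)
    return [(ax + c * (x - ax) - s * (y - ay),
             ay + s * (x - ax) + c * (y - ay)) for x, y in points]

# A north-pointing segment rotated 90 degrees clockwise ends up pointing east.
print(rotate_around([(0, 0), (0, 10)], (0, 0), -math.pi / 2))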
