How do I rotate in object space in 3D (using matrices)?

What I am trying to do is set up functions that can perform both global-space and object-space rotations, but I am having trouble understanding how to do the object-space ones, as just multiplying a point by the rotation matrix only works for global space. My idea was to build the rotation in object space, then multiply it by the inverse of the object's matrix, supposedly taking away all the excess rotation between object and global space, so the object-space rotation is maintained but expressed in global values. This logic was wrong, as it did not work. Here is my code if you want to inspect it; all the functions it calls have been tested and work:
// build object space rotation
sf::Vector3<float> XMatrix (MultiplyByMatrix(sf::Vector3<float> (cosz,sinz,0)));
sf::Vector3<float> YMatrix (MultiplyByMatrix(sf::Vector3<float> (-sinz,cosz,0)));
sf::Vector3<float> ZMatrix (MultiplyByMatrix(sf::Vector3<float> (0,0,1)));
// build cofactor matrix
sf::Vector3<float> InverseMatrix[3];
CoFactor(InverseMatrix);
// multiply by the transpose of the cofactor matrix(the adjoint), to bring the rotation to world space coordinates
sf::Vector3<float> RelativeXMatrix = MultiplyByTranspose(XMatrix, InverseMatrix[0], InverseMatrix[1], InverseMatrix[2]);
sf::Vector3<float> RelativeYMatrix = MultiplyByTranspose(YMatrix, InverseMatrix[0], InverseMatrix[1], InverseMatrix[2]);
sf::Vector3<float> RelativeZMatrix = MultiplyByTranspose(ZMatrix, InverseMatrix[0], InverseMatrix[1], InverseMatrix[2]);
// perform the rotation from world space
PointsPlusMatrix(RelativeXMatrix, RelativeYMatrix, RelativeZMatrix);

The difference between rotation in world-space and object-space is where you apply the rotation matrix.
The usual way computer graphics uses matrices is to map vertex points:
from object-space, (multiply by MODEL matrix to transform)
into world-space, (then multiply by VIEW matrix to transform)
into camera-space, (then multiply by PROJECTION matrix to transform)
into projection (or "clip") space.
Specifically, suppose points are represented as column vectors; then, you transform a point by left-multiplying it by a transformation matrix:
world_point = MODEL * model_point
camera_point = VIEW * world_point = (VIEW*MODEL) * model_point
clip_point = PROJECTION * camera_point = (PROJECTION*VIEW*MODEL) * model_point
Each of these transformation matrices may itself be the result of multiple matrices multiplied in sequence. In particular, the MODEL matrix is often composed of a sequence of rotations, translations, and scalings, based on a hierarchical articulated model, e.g.:
MODEL = STAGE_2_WORLD * BODY_2_STAGE *
SHOULDER_2_BODY * UPPERARM_2_SHOULDER *
FOREARM_2_UPPERARM * HAND_2_FOREARM
So, whether you are rotating in model-space or world-space depends on which side of the MODEL matrix you apply your rotation matrix. Of course, you can easily do both:
MODEL = WORLD_ROTATION * OLD_MODEL * OBJECT_ROTATION
In this case, WORLD_ROTATION rotates about the center of world-space, while OBJECT_ROTATION rotates about the center of object-space.
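As a minimal sketch of that last composition (using Eigen purely for illustration; the matrices and values here are made up), post-multiplying the MODEL matrix applies the rotation in object space, while pre-multiplying applies it in world space:
#include <Eigen/Dense>

int main() {
    const double angle = 0.5; // radians

    // An existing MODEL matrix (object -> world); here just a translation, for illustration.
    Eigen::Matrix4d MODEL =
        Eigen::Affine3d(Eigen::Translation3d(5.0, 0.0, 0.0)).matrix();

    // A rotation about the Y axis as a 4x4 homogeneous matrix.
    Eigen::Matrix4d R =
        Eigen::Affine3d(Eigen::AngleAxisd(angle, Eigen::Vector3d::UnitY())).matrix();

    Eigen::Matrix4d model_object_rot = MODEL * R; // object-space: rotates about the object's own origin/axes
    Eigen::Matrix4d model_world_rot  = R * MODEL; // world-space: rotates about the world origin/axes

    // Either result is then used the same way: world_point = MODEL * model_point.
    Eigen::Vector4d model_point(1.0, 0.0, 0.0, 1.0);
    Eigen::Vector4d p_object = model_object_rot * model_point;
    Eigen::Vector4d p_world  = model_world_rot  * model_point;
    (void)p_object; (void)p_world;
    return 0;
}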

Related

Rotation of an object in the tangent space of a globe

Given the two following inputs:
a point on a sphere (like an observer on Earth);
and the world matrix of an object in space (the position and attitude of a satellite),
how do I get the azimuth and elevation of the object in the tangent space of the point on the sphere (the elevation and azimuth where the observer should look)? In particular, when the object is exactly at the zenith, the yaw rotation (rotation around the vertical axis) should account for the azimuth (so that, although the observer is looking straight up, his shoulders would face the same azimuth as the object).
The math I've tried so far is:
To put the satellite in tangent space (multiplying its world matrix by the inverse of the matrix of the tangent space on the globe), or the same with quaternions. An Euler rotation is then deduced from the resulting matrix (or the resulting quaternion) with a "ZXY" priority, and the Z and X are interpreted as azimuth and elevation. But this gives incorrect numbers, as part of the rotation often seems to be interpreted as roll (Y-axis rotation), which I want to be zero.
Another, more intuitive approach is to compute the angle between the observer-to-object vector and the vertical axis to deduce the elevation, while the azimuth is given by the angle between the tangent north and the projected position of the object on the "tangent ground" (plus some more math to refine this particular deduction). But this approach does not work when the object is at the zenith.
Resources exist online, but not with these specific inputs and the requirement of supporting the zenith case.
Incidentally, the program is written in TypeScript with three.js, so the code for the first approach described above goes as follows:
function getRotationAtPoint(
  object: THREE.Object3D,
  point: THREE.Vector3
): { azimuth: number, elevation: number } {
  // 1. Get the matrix of the tangent space of the observer.
  const tangentSpaceMatrix = new THREE.Matrix4();
  const baseTangentSpaceAxes = getBaseTangentAxesOnSphere(point);
  tangentSpaceMatrix.makeBasis(...baseTangentSpaceAxes);

  // 2. Transform the object's matrix into the tangent space of the observer.
  const inverseMatrix = new THREE.Matrix4().getInverse(tangentSpaceMatrix);
  const objectMatrix = object.matrixWorld.clone().multiply(inverseMatrix);

  // 3. Get the angles.
  const euler = new THREE.Euler().setFromRotationMatrix(objectMatrix);
  return {
    azimuth: euler.z,
    elevation: euler.x
  };
}
Also, three.js offers references to the up axis of THREE.Object3D instances; however, the program I deal with computes everything directly into the objects' matrices, so the up axis can't be trusted.

PIXI.js - Canvas Coordinate to Container Coordinate

I have initialized a PIXI.js canvas:
g_App = new PIXI.Application(800, 600, { backgroundColor: 0x1099bb });
Set up a container:
container = new PIXI.Container();
g_App.stage.addChild(container);
Put a background texture (2000x2000) into the container:
var texture = PIXI.Texture.fromImage('picBottom.png');
var back = new PIXI.Sprite(texture);
container.addChild(back);
Set the global:
var g_Container = container;
I apply various pivot points and rotations to the container and the canvas stage element:
// Set the focus point of the container
g_App.stage.x = Math.floor(400);
g_App.stage.y = Math.floor(500); // Note this one is not central
g_Container.pivot.set(1000, 1000);
g_Container.rotation = 1.5; // radians
Now I need to be able to convert a canvas pixel to the pixel on the background texture.
g_Container has an element transform, which in turn has several elements: localTransform, pivot, position, scale and skew. Similarly, g_App.stage has the same transform element.
In maths this is simple: you take the point as a vector and apply matrix operations to it. To go back the other way, you find the inverses of those matrices and multiply in reverse order.
So what do I do here in pixi.js?
How do I convert a pixel on the canvas and see what pixel it is on the background container?
Note: The following is written using the USA convention of using matrices. They have row vectors on the left and multiply them by the matrix on the right. (Us pesky Brits in the UK do the opposite. We have column vectors on the right and multiply it by the matrix on the left. This means UK and USA matrices to do the same job will look slightly different.)
Now I have confused you all, on with the answer.
g_Container.transform.localTransform - this matrix takes the world coords to the scaled/translated/rotated coords
g_App.stage.transform.localTransform - this matrix takes the rotated world coords and outputs screen (or, more accurately, HTML canvas) coords
So for example the Container matrix is:
MatContainer = [g_Container.transform.localTransform.a, g_Container.transform.localTransform.b, 0]
[g_Container.transform.localTransform.c, g_Container.transform.localTransform.d, 0]
[g_Container.transform.localTransform.tx, g_Container.transform.localTransform.ty, 1]
and the rotated container matrix to screen is:
MatToScreen = [g_App.stage.transform.localTransform.a, g_App.stage.transform.localTransform.b, 0]
[g_App.stage.transform.localTransform.c, g_App.stage.transform.localTransform.d, 0]
[g_App.stage.transform.localTransform.tx, g_App.stage.transform.localTransform.ty, 1]
So to get from world coordinates to screen coordinates (noting our vector will be a row on the left, so the matrix that acts first on the world coordinates must also be leftmost), we need to multiply the vector by:
MatAll = MatContainer * MatToScreen
So if you have a world coordinate vector vectWorld = [worldX, worldY, 1.0] (I'll explain the 1.0 at the end), then to get to the screen coords you would do the following:
vectScreen = vectWorld * MatAll
To go back from screen coords to world coords, we first need to calculate the inverse matrix of MatAll, call it invMatAll. (There are loads of places that tell you how to do this, so I will not do it here.)
So if we have screen (canvas) coordinates screenX and screenY, we need to create a vector vectScreen = [screenX, screenY, 1.0] (again I will explain the 1.0 later), then to get to world coordinates worldX and worldY we do:
vectWorld = vectScreen * invMatAll
And that is it.
So what about the 1.0?
In a 2D system you can do rotations and scaling with 2x2 matrices. Unfortunately you cannot do a 2D translation with a 2x2 matrix. Consequently you need 3x3 matrices to fully describe all 2D scaling, rotations and translations. This means you need to make your vector 3D as well, and you need to put a 1.0 in the third position in order to do the translations properly. This third component will still be 1.0 after any of these matrix operations.
Note: If we were working in a 3D system we would need 4x4 matrices and put a dummy 1.0 in our 4D vectors for exactly the same reasons.
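Spelled out as plain matrix code (a minimal sketch, here in C++ with Eigen just to make the row-vector maths concrete; in the real program the a, b, c, d, tx, ty values come from the two localTransform objects described above):
#include <Eigen/Dense>

// Build the 3x3 row-vector matrix described above from a localTransform's a, b, c, d, tx, ty fields.
Eigen::Matrix3d makeMat(double a, double b, double c, double d, double tx, double ty) {
    Eigen::Matrix3d m;
    m << a,  b,  0,
         c,  d,  0,
         tx, ty, 1;
    return m;
}

// Convert a canvas (screen) pixel back to a point on the background container.
Eigen::RowVector2d screenToWorld(const Eigen::Matrix3d &MatContainer,
                                 const Eigen::Matrix3d &MatToScreen,
                                 double screenX, double screenY) {
    Eigen::Matrix3d MatAll = MatContainer * MatToScreen; // world -> screen
    Eigen::Matrix3d invMatAll = MatAll.inverse();        // screen -> world
    Eigen::RowVector3d vectScreen(screenX, screenY, 1.0);
    Eigen::RowVector3d vectWorld = vectScreen * invMatAll;
    return vectWorld.head<2>();                          // drop the trailing 1.0
}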

Finding translation and scale on two sets of points to get least square error in their distance?

I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs, i.e. which point from one set corresponds to which point in the other. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is smallest (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and may be omitted for the sake of simplicity and speed). So my question is: is this solved and available somewhere on the Internet? Personally, I would use a least-squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it by myself, I don't want to bother you so much.
SOLUTION: (from your answers)
For me it was the Kabsch algorithm;
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need scale. The scale values from the SVD are not understandable to me; when I need a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which is not helping at all.
Since you are already using Kabsch algorithm, just have a look at Umeyama's paper which extends it to get scale. All you need to do is to get the standard deviation of your points and calculate scale as:
(1/sigma^2)*trace(D*S)
where D is the diagonal matrix from the SVD used in the rotation estimation and S is either the identity matrix or the diagonal matrix diag(1, 1, -1), depending on the sign of the determinant of UV (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by ±1 (depending on the sign of the determinant of UV), sum them and divide by sigma^2 (the variance of your points, as in the formula above) to get the scale.
You can recycle the following code, which is using the Eigen library:
#include <cassert>
#include <utility>
#include <vector>
#include <Eigen/Core>
#include <Eigen/SVD>
#include <Eigen/Geometry>

typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // Microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. For other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d

/**
 * @brief rigidly aligns two sets of poses
 *
 * This calculates such a relative pose <tt>R, t</tt>, such that:
 *
 * @code
 * _TyVector v_pose = R * r_vertices[i] + t;
 * double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 * @endcode
 *
 * The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 * @param[in] r_vertices is a set of vertices to be aligned
 * @param[in] r_tar_vertices is a set of vertices to align to
 *
 * @return Returns a relative pose that rigidly aligns the two given sets of poses.
 *
 * @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
    const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
    assert(r_tar_vertices.size() == r_vertices.size());
    const size_t n = r_vertices.size();

    Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        v_center_tar3 += r_tar_vertices[i];
        v_center3 += r_vertices[i];
    }
    v_center_tar3 /= double(n);
    v_center3 /= double(n);
    // calculate centers of positions, potentially extend to 3D

    double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
    Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
        Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
        // get both vertices

        f_sd2 += v_vert_i.squaredNorm();
        f_sd2_tar += v_vert_i_tar.squaredNorm();
        // accumulate squared standard deviation (only one of those is really needed)

        t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
        // accumulate covariance
    }
    // calculate the covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    // calculate the SVD

    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    // compute the rotation

    double f_det = R.determinant();
    Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
    // calculate determinant of V*U^T to disambiguate rotation sign

    if(f_det < 0)
        R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    // recompute the rotation part if the determinant was negative

    R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
    // renormalize the rotation (not needed but gives slightly more orthogonal transformations)

    double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
    double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
    // calculate the scale

    R *= f_inv_scale;
    // apply scale

    Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
    // want to align center with ground truth

    return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation is available in Eigen (http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60), and the paper is at http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
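For reference, a minimal sketch of calling Eigen's umeyama() (the function behind the link above). The point sets here are made up; each column is one point, and the last argument enables scale estimation:
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Each column is a 3D point: src are the reconstructed points, dst the originals.
    Eigen::Matrix3Xd src(3, 4), dst(3, 4);
    src << 0, 1, 0, 0,
           0, 0, 1, 0,
           0, 0, 0, 1;
    // dst = 2 * src translated by (1, 2, 3), just so there is something to recover.
    dst = (2.0 * src).colwise() + Eigen::Vector3d(1, 2, 3);

    // Returns a 4x4 homogeneous transform T such that dst ~= T * src.
    Eigen::Matrix4d T = Eigen::umeyama(src, dst, true);

    // The upper-left 3x3 block is scale * rotation, so the scale is the norm of any of its columns.
    double scale = T.block<3, 3>(0, 0).col(0).norm();
    std::cout << "estimated scale: " << scale << "\n" << T << "\n";
    return 0;
}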
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:
P * X = Q
To find proper scale/rotation we just need to solve this matrix equation, find X, then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transpose of P (to get rid of the non-square matrices), then multiply both sides by the inverse of P^T * P:
P^T * P * X = P^T * Q
X = (P^T * P)^-1 * P^T * Q
Applying Singular value decomposition to matrix X gives two rotation matrices and a matrix with scale factors:
X = U * S * V
Here S is a diagonal matrix with scale factors (one scale for each coordinate), and U and V are rotation matrices: one properly rotates the points so that they can be scaled along the coordinate axes, the other rotates them once more to align their orientation with the second set of points.
Example (2D points are used for simplicity):
P = 1 2 Q = 7.5391 4.3455
2 3 12.9796 5.8897
-2 1 -4.5847 5.3159
-1 -6 -15.9340 -15.5511
After solving the equation:
X = 3.3417 -1.2573
2.0987 2.8014
After SVD decomposition:
U = -0.7317 -0.6816
-0.6816 0.7317
S = 4 0
0 3
V = -0.9689 -0.2474
-0.2474 0.9689
Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.
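The same steps written with Eigen, reusing the example points above (a minimal sketch; note that Eigen's JacobiSVD returns U, the singular values and V such that X = U * diag(S) * V^T):
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Corresponding points as rows, already translated so their centroids are at the origin.
    Eigen::MatrixXd P(4, 2), Q(4, 2);
    P <<  1,  2,
          2,  3,
         -2,  1,
         -1, -6;
    Q <<   7.5391,   4.3455,
          12.9796,   5.8897,
          -4.5847,   5.3159,
         -15.9340, -15.5511;

    // Solve P * X = Q via the normal equations: X = (P^T * P)^-1 * P^T * Q
    // (an LDLT solve is used here instead of an explicit inverse).
    Eigen::MatrixXd X = (P.transpose() * P).ldlt().solve(P.transpose() * Q);

    // Decompose X into rotation * scale * rotation.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(X, Eigen::ComputeFullU | Eigen::ComputeFullV);
    std::cout << "X =\n" << X
              << "\nU =\n" << svd.matrixU()
              << "\nS = "  << svd.singularValues().transpose()
              << "\nV =\n" << svd.matrixV() << "\n";
    return 0;
}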
If sets of points are scaled uniformly (scale factor is equal by each axis), this procedure may be significantly simplified.
Just use the Kabsch algorithm to get the translation/rotation values, then apply that translation and rotation (the centroids should coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression; the linear regression coefficient is exactly the scale factor. A sketch of this step follows.
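Here, all coordinates are pooled into one regression through the origin (a slight variation on the per-coordinate regressions described above; both point sets are assumed already centered and rotation-aligned):
#include <Eigen/Dense>
#include <vector>

// Least-squares scale s minimizing sum ||q_i - s * p_i||^2 over centered, aligned point pairs:
// s = sum(p_i . q_i) / sum(p_i . p_i), i.e. a regression through the origin.
double estimateScale(const std::vector<Eigen::Vector3d> &p, const std::vector<Eigen::Vector3d> &q) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        num += p[i].dot(q[i]);
        den += p[i].squaredNorm();
    }
    return num / den;
}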
A good explanation: Finding optimal rotation and translation between corresponding 3D points.
The code there is in MATLAB, but it's trivial to convert to OpenCV using the cv::SVD function.
You might want to try ICP (Iterative closest point).
Given two sets of 3d points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a c++ lightweight implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes Analysis. It works by superimposing the objects on top of each other and tries to estimate the transformation from that setting. It has been used in the context of ICP many times. In fact, your preference, the Kabsch algorithm, is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, while being quite efficient. A Matlab implementation is also available.
Scale can be inferred without SVD if your points are uniformly scaled in all directions (I could not make sense of SVD's scale matrix either). Here is how I solved the same problem:
Measure distances of each point to other points in the point cloud to get a 2d table of distances, where entry at (i,j) is norm(point_i-point_j). Do the same thing for the other point cloud, so you get two tables -- one for original and the other for reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table has all values being equal to each other, and this is the scale.
The median value of the divisions should be pretty close to the scale you are looking for. The mean value is also close, but I chose median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: If there are too many points in the point clouds to find distances between all of them, then a smaller subset of distances will work, too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g when one point cloud is directly derived from the other by just rotating it.
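A minimal sketch of this distance-ratio approach (the function name is hypothetical):
#include <Eigen/Dense>
#include <algorithm>
#include <vector>

// Estimate a uniform scale from corresponding point clouds by comparing pairwise
// distances and taking the median of the ratios.
double scaleFromDistanceRatios(const std::vector<Eigen::Vector3d> &original,
                               const std::vector<Eigen::Vector3d> &reconstructed) {
    std::vector<double> ratios;
    for (std::size_t i = 0; i < original.size(); ++i) {
        for (std::size_t j = i + 1; j < original.size(); ++j) {
            double d_rec = (reconstructed[i] - reconstructed[j]).norm();
            double d_org = (original[i] - original[j]).norm();
            if (d_rec > 0.0)
                ratios.push_back(d_org / d_rec); // scale that maps reconstructed -> original
        }
    }
    // Median of the ratios, to exclude outliers (the mean would also be close).
    std::nth_element(ratios.begin(), ratios.begin() + ratios.size() / 2, ratios.end());
    return ratios[ratios.size() / 2];
}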
You can also use ScaleRatio ICP proposed by BaoweiLin.
The code can be found on GitHub.

Convert OpenGL 4x4 matrix to rotation angles

I am extracting in OpenGL the Model Matrix with
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)x);
and would like to extract from the resulting 4x4 matrix the x, y and z axis rotations. How can I do that?
Thanks!
First you should know that x, y, z axis rotations, called Euler angles, suffer from serious numerical problems, and they're also not unambiguous. So either you store a rotation angle and the rotation axis, effectively forming a quaternion in disguise, or you stick with the full rotation matrix.
Finding the quaternion from a rotation matrix is essentially an eigenvalue problem: the rotation axis is the eigenvector of the matrix with eigenvalue 1, and the rotation angle can then be recovered from the matrix trace.
I'm writing a CAD-like app, so I understand your problem; we 'in the business' know how awful Euler angles are for linear transformations, but the end user finds them far more intuitive than matrices or quaternions.
For my app I adapted Ken Shoemake's wonderful algorithm, one of the very few that supports arbitrary rotation orders. It's from '93, so it's in pure C code, not for the faint-hearted!
http://tog.acm.org/resources/GraphicsGems/gemsiv/euler_angle/
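If a single fixed axis order is enough, the angles can also be read straight off the matrix. A minimal sketch for the Z-Y-X (yaw-pitch-roll) order, assuming the column-major OpenGL matrix contains a pure rotation (no scale or shear) and ignoring the gimbal-lock case where cos(pitch) is near zero:
#include <cmath>

// m is the column-major 4x4 matrix from glGetFloatv(GL_MODELVIEW_MATRIX, m).
// Assumes R = Rz(yaw) * Ry(pitch) * Rx(roll); element (row, col) is m[col * 4 + row].
void matrixToEulerZYX(const float m[16], float &yaw, float &pitch, float &roll) {
    pitch = std::asin(-m[2]);        // -r20
    roll  = std::atan2(m[6], m[10]); //  r21, r22
    yaw   = std::atan2(m[1], m[0]);  //  r10, r00
}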
Something like this should give you what you're after.
final double roll = Math.atan2(2 * (quat.getW() * quat.getX() + quat.getY() * quat.getZ()),
        1 - 2 * (quat.getX() * quat.getX() + quat.getY() * quat.getY()));
final double pitch = Math.asin(2 * (quat.getW() * quat.getY() - quat.getZ() * quat.getX()));
final double yaw = Math.atan2(2 * (quat.getW() * quat.getZ() + quat.getX() * quat.getY()),
        1 - 2 * (quat.getY() * quat.getY() + quat.getZ() * quat.getZ()));
I use this as a utility function to print out camera angles when I'm using SLERP to interpolate between 2 quaternions that I've derived from 2 4x4 matrices (i.e. camera movement between 2 3D points).

Scaling vectors from a center point?

I'm trying to figure out the following: say I have points that make, for example, a square:
* *
* *
and let's say I know the center of this square.
I want a formula that will make it, for example, twice its size, but scaled from the center:
* *
* *
* *
* *
The new shape is therefore twice as large, scaled from the center of the polygon. It has to work for any shape, not just squares.
I'm looking more for the theory behind it more than the implementation.
If you know the center point cp and a point v in the polygon you would like to scale by scale, then:
v2 = v - cp; // get a vector to v relative to the centerpoint
v2_scaled = v2 * scale; // scale the cp-relative-vector
v1_scaled = v2_scaled + cp; // translate the scaled vector back
This translate-scale-translate pattern can be performed on vectors of any dimension.
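As a small sketch of that pattern over a whole polygon (2D here, but the same code works with 3D vectors):
#include <Eigen/Dense>
#include <vector>

// Scale every point of a polygon about its centroid by the given factor.
std::vector<Eigen::Vector2d> scaleAboutCenter(const std::vector<Eigen::Vector2d> &points, double scale) {
    Eigen::Vector2d cp = Eigen::Vector2d::Zero();
    for (const auto &p : points) cp += p;
    cp /= static_cast<double>(points.size());    // centroid of the polygon

    std::vector<Eigen::Vector2d> result;
    result.reserve(points.size());
    for (const auto &p : points)
        result.push_back(cp + (p - cp) * scale); // translate, scale, translate back
    return result;
}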
If you want the shape twice as large, scale the distance of the coordinates to be sqrt(2) times further from the center.
In other words, let's say your point is at (x, y) and the center is (xcent, ycent). Your new point should be at
(xcent + sqrt(2)*(x - xcent), ycent + sqrt(2)*(y - ycent))
This will scale the distances from the new 'origin', (xcent, ycent) in such a way that the area doubles. (Because sqrt(2)*sqrt(2) == 2).
I'm not sure there's a clean way to do this for all types of objects. For relatively simple ones, you should be able to find the "center" as the average of all the X and Y values of the individual points. To double the size, you find the length and angle of a vector from the center to the point. Double the length of the vector, and retain the same angle to get the new point.
Edit: of course, "twice the size" is open to several interpretations (e.g., doubling the perimeter vs. doubling the area). These would change the multiplier used above, but the basic algorithm would remain essentially the same.
To do what you want you need to perform three operations: translate the square so that its centroid coincides with the origin of the coordinate system, scale the resulting square, translate it back.
