Calculating quaternion for transformation between 2 3D cartesian coordinate systems

I have two cartesian coordinate systems with known unit vectors:
System A(x_A,y_A,z_A)
and
System B(x_B,y_B,z_B)
Both systems share the same origin (0,0,0). I'm trying to calculate a quaternion, so that vectors in system B can be expressed in system A.
I am familiar with the mathematical concept of quaternions. I have already implemented the required math from here: http://content.gpwiki.org/index.php/OpenGL%3aTutorials%3aUsing_Quaternions_to_represent_rotation
One possible solution could be to calculate Euler angles and use them for 3 quaternions. Multiplying them would lead to a final one, so that I could transform my vectors:
v(A) = q*v(B)*q_conj
But this would reintroduce gimbal lock, which was the reason NOT to use Euler angles in the first place.
Any ideas how to solve this?

You can calculate the quaternion representing the best possible transformation from one coordinate system to another by the method described in this paper:
Paul J. Besl and Neil D. McKay
"Method for registration of 3-D shapes", Sensor Fusion IV: Control Paradigms and Data Structures, 586 (April 30, 1992); http://dx.doi.org/10.1117/12.57955
The paper is not open access but I can show you the Python implementation:
import numpy as np

def get_quaternion(lst1, lst2, matchlist=None):
    if not matchlist:
        matchlist = range(len(lst1))
    # accumulate the cross-covariance matrix M
    M = np.zeros((3, 3))
    for i, coord1 in enumerate(lst1):
        M += np.outer(coord1, lst2[matchlist[i]])
    # build the symmetric 4x4 matrix N from M
    N11 = M[0, 0] + M[1, 1] + M[2, 2]
    N22 = M[0, 0] - M[1, 1] - M[2, 2]
    N33 = -M[0, 0] + M[1, 1] - M[2, 2]
    N44 = -M[0, 0] - M[1, 1] + M[2, 2]
    N12 = M[1, 2] - M[2, 1]
    N13 = M[2, 0] - M[0, 2]
    N14 = M[0, 1] - M[1, 0]
    N21 = N12
    N23 = M[0, 1] + M[1, 0]
    N24 = M[2, 0] + M[0, 2]
    N31 = N13
    N32 = N23
    N34 = M[1, 2] + M[2, 1]
    N41 = N14
    N42 = N24
    N43 = N34
    N = np.array([[N11, N12, N13, N14],
                  [N21, N22, N23, N24],
                  [N31, N32, N33, N34],
                  [N41, N42, N43, N44]])
    # N is symmetric, so use eigh; the quaternion is the eigenvector
    # belonging to the largest eigenvalue
    values, vectors = np.linalg.eigh(N)
    quat = vectors[:, np.argmax(values)]
    return quat.tolist()
This function returns the quaternion that you were looking for. The arguments lst1 and lst2 are lists of numpy.arrays where every array represents a 3D vector. If both lists are of length 3 (and contain orthogonal unit vectors), the quaternion should be the exact transformation. If you provide longer lists, you get the quaternion that minimizes the difference between both point sets.
The optional matchlist argument is used to tell the function which point of lst2 should be transformed to which point in lst1. If no matchlist is provided, the function assumes that the first point in lst1 should match the first point in lst2 and so forth...
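For illustration, here is a minimal (hypothetical) call with two orthonormal bases, where system B is system A rotated 90 degrees about the z axis; the sign and direction convention of the returned quaternion follow the paper:
import numpy as np

# hypothetical bases: B is A rotated by 90 degrees about the z axis
basisA = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]
basisB = [np.array([0, 1, 0]), np.array([-1, 0, 0]), np.array([0, 0, 1])]

q = get_quaternion(basisA, basisB)
# expect a scalar-first unit quaternion for a 90 degree z rotation,
# roughly [0.707, 0, 0, 0.707] (up to sign)
print(q)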
A similar function for sets of 3 points in C++, using Eigen, is the following:
#include <Eigen/Dense>
#include <Eigen/Geometry>

using namespace Eigen;

/// Determine rotation quaternion from coordinate system 1 (vectors
/// x1, y1, z1) to coordinate system 2 (vectors x2, y2, z2)
Quaterniond QuaternionRot(Vector3d x1, Vector3d y1, Vector3d z1,
                          Vector3d x2, Vector3d y2, Vector3d z2) {
    // cross-covariance matrix M, as in the Besl/McKay method above
    Matrix3d M = x1*x2.transpose() + y1*y2.transpose() + z1*z2.transpose();

    // symmetric 4x4 matrix N; its dominant eigenvector is the quaternion
    Matrix4d N;
    N << M(0,0)+M(1,1)+M(2,2), M(1,2)-M(2,1),          M(2,0)-M(0,2),          M(0,1)-M(1,0),
         M(1,2)-M(2,1),        M(0,0)-M(1,1)-M(2,2),   M(0,1)+M(1,0),          M(2,0)+M(0,2),
         M(2,0)-M(0,2),        M(0,1)+M(1,0),         -M(0,0)+M(1,1)-M(2,2),   M(1,2)+M(2,1),
         M(0,1)-M(1,0),        M(2,0)+M(0,2),          M(1,2)+M(2,1),         -M(0,0)-M(1,1)+M(2,2);

    EigenSolver<Matrix4d> N_es(N);
    Vector4d::Index maxIndex;
    N_es.eigenvalues().real().maxCoeff(&maxIndex);

    Vector4d ev_max = N_es.eigenvectors().col(maxIndex).real();

    Quaterniond quat(ev_max(0), ev_max(1), ev_max(2), ev_max(3));
    quat.normalize();
    return quat;
}

What language are you using? If C++, feel free to use my open source library:
http://sourceforge.net/p/transengine/code/HEAD/tree/transQuaternion/
The short of it is, you'll need to convert your vectors to quaternions, do your calculations, and then convert your quaternion to a transformation matrix.
Here's a code snippet:
Quaternion from vector:
cQuat nTrans::quatFromVec( Vec vec ) {
    float angle = vec.v[3];
    float s_angle = sin( angle / 2);
    float c_angle = cos( angle / 2);
    return (cQuat( c_angle, vec.v[0]*s_angle, vec.v[1]*s_angle,
                   vec.v[2]*s_angle )).normalized();
}
And for the matrix from quaternion:
Matrix nTrans::matFromQuat( cQuat q ) {
    Matrix t;
    q = q.normalized();
    t.M[0][0] = ( 1 - (2*q.y*q.y + 2*q.z*q.z) );
    t.M[0][1] = ( 2*q.x*q.y + 2*q.w*q.z);
    t.M[0][2] = ( 2*q.x*q.z - 2*q.w*q.y);
    t.M[0][3] = 0;
    t.M[1][0] = ( 2*q.x*q.y - 2*q.w*q.z);
    t.M[1][1] = ( 1 - (2*q.x*q.x + 2*q.z*q.z) );
    t.M[1][2] = ( 2*q.y*q.z + 2*q.w*q.x);
    t.M[1][3] = 0;
    t.M[2][0] = ( 2*q.x*q.z + 2*q.w*q.y);
    t.M[2][1] = ( 2*q.y*q.z - 2*q.w*q.x);
    t.M[2][2] = ( 1 - (2*q.x*q.x + 2*q.y*q.y) );
    t.M[2][3] = 0;
    t.M[3][0] = 0;
    t.M[3][1] = 0;
    t.M[3][2] = 0;
    t.M[3][3] = 1;
    return t;
}

I just ran into this same problem. I was on track to a solution, but I got stuck.
So, you'll need TWO vectors which are known in both coordinate systems. In my case, I have 2 orthonormal vectors in the coordinate system of a device (gravity and magnetic field), and I want to find the quaternion to rotate from device coordinates to global orientation (where North is positive Y, and "up" is positive Z). So, in my case, I've measured the vectors in the device coordinate space, and I'm defining the vectors themselves to form the orthonormal basis for the global system.
With that said, consider the axis-angle interpretation of quaternions: there is some vector V about which the device's coordinates can be rotated by some angle to match the global coordinates. I'll call my (negative) gravity vector G, and magnetic field M (both are normalized).
V, G and M all describe points on the unit sphere.
So do Z_dev and Y_dev (the Z and Y bases for my device's coordinate system).
The goal is to find a rotation which maps G onto Z_dev and M onto Y_dev.
For V to rotate G onto Z_dev the distance between the points defined by G and V must be the same as the distance between the points defined by V and Z_dev. In equations:
|V - G| = |V - Z_dev|
The solution to this equation forms a plane (all points equidistant to G and Z_dev). But, V is constrained to be unit-length, which means the solution is a ring centered on the origin -- still an infinite number of points.
But, the same situation is true of Y_dev, M and V:
|V - M| = |V - Y_dev|
The solution to this is also a ring centered on the origin. These rings have two intersection points, where one is the negative of the other. Either is a valid axis of rotation (the angle of rotation will just be negative in one case).
Using the two equations above, and the fact that each of these vectors is unit length you should be able to solve for V.
Then you just have to find the angle to rotate by, which you should be able to do using the vectors going from V to your corresponding bases (G and Z_dev for me).
Ultimately, I got gummed up towards the end of the algebra in solving for V, but either way, I think everything you need is here -- maybe you'll have better luck than I did.
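For what it's worth, here is a minimal numpy sketch of one way to finish that algebra. Since all vectors are unit length, squaring |V - G| = |V - Z_dev| reduces to V.(G - Z_dev) = 0, and likewise V.(M - Y_dev) = 0, so V is proportional to the cross product of the two difference vectors (assuming they are not parallel):
import numpy as np

def axis_angle_from_pairs(g, z_dev, m, y_dev):
    # axis: perpendicular to both difference vectors (see the algebra above);
    # degenerate if (g - z_dev) and (m - y_dev) are parallel
    v = np.cross(g - z_dev, m - y_dev)
    v /= np.linalg.norm(v)
    # angle: project g and z_dev onto the plane perpendicular to v and
    # measure the signed angle between the projections
    gp = g - np.dot(g, v) * v
    zp = z_dev - np.dot(z_dev, v) * v
    theta = np.arctan2(np.dot(v, np.cross(gp, zp)), np.dot(gp, zp))
    return v, theta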

Define 3x3 matrices A and B as you gave them, so the columns of A are x_A, y_A, and z_A, and the columns of B are similarly defined. Then the transformation T taking coordinate system A to B is the solution of TA = B, so T = BA^{-1}. From the rotation matrix T of the transformation you can calculate the quaternion using standard methods.
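A small numpy sketch of this answer, with hypothetical basis vectors; the matrix-to-quaternion step uses the simple trace-based conversion, which assumes the rotation angle is not near 180 degrees:
import numpy as np

# hypothetical unit vectors: system B is system A rotated 90 degrees about z
xA, yA, zA = np.eye(3)
xB = np.array([0., 1., 0.]); yB = np.array([-1., 0., 0.]); zB = np.array([0., 0., 1.])

A = np.column_stack([xA, yA, zA])
B = np.column_stack([xB, yB, zB])
T = B @ A.T                       # A is orthonormal, so A^{-1} = A^T

# rotation matrix -> quaternion (w, x, y, z); valid while w is not near zero
w = 0.5 * np.sqrt(1.0 + T[0, 0] + T[1, 1] + T[2, 2])
x = (T[2, 1] - T[1, 2]) / (4 * w)
y = (T[0, 2] - T[2, 0]) / (4 * w)
z = (T[1, 0] - T[0, 1]) / (4 * w)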

You need to express the orientation of B with respect to A as a quaternion Q. Then any vector in B can be transformed to a vector in A, e.g. by using a rotation matrix R derived from Q: vectorInA = R*vectorInB.
There is a demo script for doing this (including a nice visualization) in the Matlab/Octave library available on this site: http://simonbox.info/index.php/blog/86-rocket-news/92-quaternions-to-model-rotations
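If you'd rather not depend on Matlab, a quick sketch of the same idea with scipy (note that scipy stores quaternions in [x, y, z, w] order):
import numpy as np
from scipy.spatial.transform import Rotation

# hypothetical Q: orientation of B with respect to A, 90 degrees about z
Q = Rotation.from_quat([0, 0, np.sin(np.pi/4), np.cos(np.pi/4)])
R_mat = Q.as_matrix()
vectorInB = np.array([1.0, 0.0, 0.0])
vectorInA = R_mat @ vectorInB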

You can compute what you want using only quaternion algebra.
Given two unit vectors v1 and v2, you can directly embed them into quaternion algebra and get the corresponding pure quaternions q1 and q2. The rotation quaternion Q that aligns the two vectors such that:
Q q1 Q* = q2
is given (writing q1* = -q1 for the conjugate of the pure unit quaternion q1) by:
Q = (q1 + q2) q1* / (||q1 + q2||)
The above product is the quaternion product.
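A short numpy check of this identity on a hypothetical pair of vectors (the x and y axes as pure quaternions), using a hand-rolled Hamilton product:
import numpy as np

def qmul(a, b):
    # Hamilton product; quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

q1 = np.array([0., 1., 0., 0.])           # pure quaternion for the x axis
q2 = np.array([0., 0., 1., 0.])           # pure quaternion for the y axis
h = (q1 + q2) / np.linalg.norm(q1 + q2)
Q = qmul(h, -q1)                          # (q1 + q2) q1* / ||q1 + q2||
Qc = Q * np.array([1., -1., -1., -1.])    # conjugate of Q
print(qmul(qmul(Q, q1), Qc))              # ~ [0, 0, 1, 0], i.e. q2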

Related

Algorithm to find a vector parallel to a plane and perpendicular to another vector

I'm trying to derive a formula to extract vector u.
I'm given some initial data:
Plane F with the method to extract its normal n = F->normal().
vector c that does not lie within the plane F and passes through some point E that also does not lie within the plane F.
And some constrains to use:
The desired vector u is perpendicular to the vector c.
Vector u is also perpendicular to some vector r which is not given. The vector r is parallel to the plane F and also perpendicular to the vector c. Therefore, we can say the vectors c, r and u are orthogonal.
Let's denote * as the dot product and ^ as the cross product between two 3D vectors.
The calculation of the vector u is easy using the cross product: vec3 u = c^r. So my whole task narrows down to finding the vector r, which is parallel to the given plane F and at the same time perpendicular to the given vector c.
Because we know that r is parallel to F, we can use plane's normal and dot product: n*r = 0. Since r is unknown, an infinite number of lines can satisfy the aforementioned equation. So, we can also use the condition that r is perpendicular to c: r*c = 0.
To summarize, there are two dot-product equations that should help us to find the vector r:
r*c = 0;
r*n = 0;
However, I am having a hard time trying to figure out how to obtain the coordinates of the vector r from the two equations in an algorithmic way. Assuming r = (x, y, z) and we want to find x, y and z, it does not seem possible from only two equations:
x*c.x + y*c.y + z*c.z = 0;
x*n.x + y*n.y + z*n.z = 0;
I feel like I'm missing something, e.g., I need a third constraint. Is there anything else needed to extract x, y and z? Or do I have a flaw in my logic?
You can find the vector r by computing the cross product of n and c.
This automatically satisfies r*c = r*n = 0.
You are right that your two equations will have multiple solutions. The other solutions are given by any scalar multiple of r.
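A tiny numpy sketch of this answer, with made-up n and c:
import numpy as np

n = np.array([0.0, 0.0, 1.0])          # plane normal (hypothetical)
c = np.array([1.0, 1.0, 1.0])          # the given vector (hypothetical)
r = np.cross(n, c)                     # parallel to F, perpendicular to c
u = np.cross(c, r)                     # the desired vector
print(np.dot(r, c), np.dot(r, n))      # both ~0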

Convert a bivariate draw in a univariate draw in Matlab

I have in mind the following experiment to run in Matlab and I am asking for help to implement step (3). Any suggestion would be very appreciated.
(1) Consider the random variables X and Y both uniformly distributed on [0,1]
(2) Draw N realisations from the joint distribution of X and Y assuming that X and Y are independent (meaning that X and Y are jointly uniformly distributed on [0,1]x[0,1]). Each draw will be in [0,1]x[0,1].
(3) Transform each draw in [0,1]x[0,1] into a draw in [0,1] using the Hilbert space-filling curve: under the Hilbert curve mapping, the draw in [0,1]x[0,1] should be the image of one (or more, because of surjectivity) point(s) in [0,1]. I want to pick one of these points. Is there any pre-built package in Matlab doing this?
I found this answer, which I don't think does what I want, as it explains how to obtain the Hilbert value of the draw (curve length from the start of the curve to the picked point).
On Wikipedia I found this code in the C language (from (x,y) to d) which, again, does not answer my question.
EDIT This answer does not address the updated version of the question, which explicitly asks about constructing the Hilbert curve. Instead, this answer addresses a related question on the construction of a bijective mapping, and its relation to the uniform distribution.
Your problem is not really well defined. If you only need the resulting distribution to be uniform, nothing is stopping you from simply picking f:(X,Y)->X. The result would be uniform regardless of whether X and Y are correlated. From your post I can only presume that what you want, in fact, is for the resulting transformation to be bijective, or as close to it as possible given machine precision limitations.
It is worth noting that unless you need the algorithm that is best at preserving locality (which is clearly not required for the resulting distribution to be bijective, not to mention uniform), there's no need to bother constructing the Hilbert curves that you mention in your question. They have just as much to do with the solution as any other space-filling curve, and are incredibly computationally intensive.
So assuming you're looking for a bijective mapping, your question is equivalent to asking whether the set of points in a [unit] square has the same cardinality as the set of points in a [unit] line segment, and if it does, how to construct that bijection, i.e. a 1-to-1 correspondence. The intuition says the square should have a higher cardinality, and Cantor spent 3 years trying to prove exactly that, eventually proving quite the opposite -- that these sets are in fact equinumerous. He was so surprised at his discovery that he wrote:
I see it, but I don't believe it!
The most commonly referred to bijection, fulfilling** this criterion, is the following. Represent x and y in their decimal form, i.e. x = 0.x1x2x3x4x5..., and y = 0.y1y2y3y4y5..., and let f:(X,Y)->Z be z = 0.x1y1x2y2x3y3x4y4x5y5..., i.e. alternating the decimals of the two numbers. The idea behind the bijection is trivial, though a rigorous proof requires quite a bit of prior knowledge.
** The caveat is that if we take e.g. x = 1/3 = 0.33333... and y = 1/5 = 0.199999... = 0.200000..., we can see there are two sequences corresponding to them: z = 0.313939393939... and z = 0.323030303030.... To overcome this obstacle we have to prove that adding a countable set to an uncountable one does not change the cardinality of the latter.
In reality we have to deal with machine precision and not pure math, which strictly speaking means both sets are actually finite and hence not equinumerous (assuming you store the result with the same precision as the original numbers). Which means we're simply forced to make some assumptions and lose some information, such as, in this case, the last half of the significant digits of x and y. That is, unless we use a different data type that allows us to store the result with double the precision of the original variables.
Finally, sample implementation in Matlab:
x = rand();
y = rand();
chars = [num2str(x, '%.17f'); num2str(y, '%.17f')];
z = str2double(['0.' reshape(chars(:,3:end), 1, [])]);
>> cellstr(['x=' num2str(x, '%.17f'); 'y=' num2str(y, '%.17f'); 'z=' num2str(z, '%.17f')])
ans =
'x=0.65549803980353738'
'y=0.10975505072305158'
'z=0.61505947958500362'
Edit This answers the original request for a transformation f(x,y) -> t ~ U[0,1] given x,y ~ U[0,1], and additionally for x and y correlated. The updated question asks specifically for a Hilbert curve, H(x,y) -> t ~ U[0,1] and only for x,y ~ U[0,1] so this answer is no longer relevant.
Consider a random uniform sequence in [0,1] r1, r2, r3, .... You are assigning this sequence to pairs of numbers (x1,y1), (x2,y2), .... What you are asking for is a transformation on pairs (x,y) which yield a uniform random number in [0,1].
Consider the random subsequence r1, r3, ... corresponding to x1, x2, .... If you trust that your number generator is random and uncorrelated in [0,1], then the subsequence x1, x2, ... should also be random and uncorrelated in [0,1]. So the rather simple answer to the first part of your question is a projection onto either the x or y axis. That is, just pick x.
Next consider correlations between x and y. Since you haven't specified the nature of the correlation, let's assume a simple scaling of the axes,
such as x' => [0, 0.5], y' => [0, 3.0], followed by a rotation. The scaling doesn't introduce any correlation since x' and y' are still independent. You can generate it easily enough with a matrix multiply:
M1*p = [x_scale, 0; 0, y_scale] * [x; y]
for matrix M1 and point p. You can introduce a correlation by taking this stretched form and rotating it by theta:
M2*M1*p = [cos(theta), sin(theta); -sin(theta), cos(theta)]*M1*p
Putting it all together with theta = pi/4, or 45 degrees, you can see that larger values of y are correlated with larger values of x:
cos_t = cos(pi/4);  % at 45 degrees, sin(t) = cos(t) = 1/sqrt(2)
sin_t = cos_t;
M2 = [cos_t, sin_t; -sin_t, cos_t];
M1 = [0.5, 0.0; 0.0, 3.0];
p = rand(2,1000);
p_prime = M2*M1*p;
plot(p_prime(1,:), p_prime(2,:), '.');
axis('equal');
The resulting plot* shows a band of uniformly distributed numbers at a 45 degree angle.
Further transformations are possible with shear, and if you are clever about it, translation (OpenGL uses 4x4 transformation matrices so that translation can be represented as a linear transform matrix, with an extra dimension added before the transformation steps and removed before they are done).
Given a known affine correlation structure, you can transform back from random points (x',y') to points (x,y) where x and y are independent in [0,1] by solving Mk*...*M1 p = p_prime for p, or equivalently, by setting p = inv(Mk*...*M1) * p_prime, where p=[x;y]. Again, just pick x, which will be uniform in [0,1]. This doesn't work if the transformation matrix is singular, e.g., if you introduce a projection matrix Mj into the mix (though if the projection is the first step you can still recover).
* You may notice that the plot is from python rather than matlab. I don't have matlab or octave sitting in front of me right now, so I hope I got the syntax details right.
You could compute the Hilbert curve from f(x,y)=z. Basically it's a Hamiltonian path traversal. You can find a good description at Nick's spatial index Hilbert curve quadtree blog. Or take a look at monotonic n-ary gray code. I've written an implementation based on Nick's blog in PHP: http://monstercurves.codeplex.com.
I will focus only on your last point
(3) Transform each draw in [0,1]x[0,1] into a draw in [0,1] using the Hilbert space-filling curve: under the Hilbert curve mapping, the draw in [0,1]x[0,1] should be the image of one (or more, because of surjectivity) point(s) in [0,1]. I want to pick one of these points. Is there any pre-built package in Matlab doing this?
As far as I know, there aren't pre-built packages in Matlab doing this, but the good news is that the code on wikipedia can be called from MATLAB, and it is as simple as putting together the conversion routine with a gateway function in a xy2d.c file:
#include "mex.h"

// source: https://en.wikipedia.org/wiki/Hilbert_curve

// rotate/flip a quadrant appropriately
void rot(int n, int *x, int *y, int rx, int ry) {
    if (ry == 0) {
        if (rx == 1) {
            *x = n-1 - *x;
            *y = n-1 - *y;
        }
        // Swap x and y
        int t = *x;
        *x = *y;
        *y = t;
    }
}

// convert (x,y) to d
int xy2d (int n, int x, int y) {
    int rx, ry, s, d=0;
    for (s=n/2; s>0; s/=2) {
        rx = (x & s) > 0;
        ry = (y & s) > 0;
        d += s * s * ((3 * rx) ^ ry);
        rot(s, &x, &y, rx, ry);
    }
    return d;
}

/* The gateway function */
void mexFunction( int nlhs, mxArray *plhs[],
                  int nrhs, const mxArray *prhs[])
{
    int n; /* input scalar */
    int x; /* input scalar */
    int y; /* input scalar */

    /* check for proper number of arguments */
    if(nrhs!=3) {
        mexErrMsgIdAndTxt("MyToolbox:arrayProduct:nrhs","Three inputs required.");
    }
    if(nlhs!=1) {
        mexErrMsgIdAndTxt("MyToolbox:arrayProduct:nlhs","One output required.");
    }

    /* get the value of the scalar inputs */
    n = mxGetScalar(prhs[0]);
    x = mxGetScalar(prhs[1]);
    y = mxGetScalar(prhs[2]);

    /* create the output */
    plhs[0] = mxCreateDoubleScalar(xy2d(n,x,y));
}
and compile it with mex('xy2d.c').
The above implementation
[...] assumes a square divided into n by n cells, for n a power of 2, with integer coordinates, with (0,0) in the lower left corner, (n-1,n-1) in the upper right corner.
In practice, a discretization step is required before applying the mapping. As in every discretization problem, it is crucial to choose the precision wisely. The snippet below puts everything together.
close all; clear; clc;
% number of random samples
NSAMPL = 100;
% unit square divided into n-by-n cells
% has to be a power of 2
n = 2^2;
% quantum
d = 1/n;
N = 0:d:1;
% generate random samples
x = rand(1,NSAMPL);
y = rand(1,NSAMPL);
% discretization
bX = floor(x/d);
bY = floor(y/d);
% 2d to 1d mapping
dd = zeros(1,NSAMPL);
for iid = 1:length(dd)
    dd(iid) = xy2d(n, bX(iid), bY(iid));
end
figure;
hold on;
axis equal;
plot(x, y, '.');
plot(repmat([0;1], 1, length(N)), repmat(N, 2, 1), '-r');
plot(repmat(N, 2, 1), repmat([0;1], 1, length(N)), '-r');
figure;
plot(1:NSAMPL, dd);
xlabel('# of sample')

Compute z-Value (Distance to Camera) of Vertex with given Projection Matrix

I've created a 3D scene with Blender and computed the projection matrix P (I also have the translation matrix T and the rotation matrix R).
Like I mentioned in the title, I am trying to calculate the z-value, or depth, of a vertex (x,y,z) from my given camera C with these matrices.
Example:
Vertex v = [1.4,1,2.3] and position of camera c = [0,-0.7,10]. The result should be around 10-2.3 = 7.7. Thank you for your help!
Usually the rotation matrix comes before the translation matrix in the product:
transform = R * T
R is the rotation matrix (usually 4 rows and 4 columns)
T is the translation matrix (4 rows and 4 columns)
* is the matrix multiplication, which applies first T and then R
Of course I'm assuming you already know how to perform matrix multiplication; I'm not providing any code because it is not clear if you need a Python snippet or you are using the exported model somewhere else.
After that you apply the final projection matrix (I'm assuming your projection matrix is already multiplied by the view matrix)
final = P * transform
P is the projection matrix ( 4 rows and 4 columns)
transform is your previously obtained (4 rows and 4 columns) matrix
the final matrix is the one that will transform every vector of your 3D model; again, here you do a matrix multiplication (but in this case the second operand is a column vector whose 4th element is 1)
transformedVertex = final * Vec4(originalVertex,1)
transformedVertex is a column vector ( 4x1 matrix)
final is the final matrix (4x4)
a vertex is only 3 coordinates, so we add 1 to make it (4x1)
* is still matrix multiplication
Once transformed, the vertex's Z value is the one that gets directly mapped into the Z-buffer and hence into a depth value.
At this point there is one operation that is done "by convention": dividing Z by W to normalize it; then values outside the range [0..1] are discarded (nearer than the near clip plane or farther than the far clip plane).
See also this question:
Why do I divide Z by W?
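For concreteness, a small numpy sketch of the whole chain on the example from the question; the matrices here are hypothetical stand-ins for whatever Blender exported (identity rotation, a camera at [0,-0.7,10], and a generic OpenGL-style projection):
import numpy as np

P = np.array([[2.19, 0.0,  0.0,    0.0],
              [0.0,  2.19, 0.0,    0.0],
              [0.0,  0.0, -1.002, -0.2],
              [0.0,  0.0, -1.0,    0.0]])   # hypothetical projection matrix
R = np.eye(4)                               # hypothetical rotation (identity)
T = np.eye(4)
T[:3, 3] = [0.0, 0.7, -10.0]                # moves camera [0,-0.7,10] to origin

final = P @ R @ T
v = np.array([1.4, 1.0, 2.3, 1.0])          # the vertex, homogeneous
cam_space = R @ T @ v                       # z = -7.7: 7.7 units in front of the camera
clip = final @ v
depth = clip[2] / clip[3]                   # divide Z by W -> normalized depth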
EDIT:
I may have misinterpreted your question; if you need the distance between the camera and a point, it is simply
function computeDistance( cam, pos)
dx = cam.x-pos.x
dy = cam.y-pos.y
dz = cam.z-pos.z
distance = sqrt( dx*dx + dy*dy + dz*dz)
end function
example
cameraposition = 10,0,0
vertexposition = 2,0,0
the above code
computeDistance ( cameraposition, vertexposition)
outputs
8
Thanks for your help, here is what I was looking for:
Data setup
R rotation matrix 4x4
T translation matrix 4x4
v any vertex [x,y,z,1] 4x1
Result
vec4 vector 4x1 (x,y,z,w)
vec4 = R * T * v
The vec4.z value is the result I was looking for. Thanks!

Finding translation and scale on two sets of points to get least square error in their distance?

I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs - which point from one set represents the other. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is minimal (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and might be omitted for the sake of simplicity and speed). And so my question is - is this solved and available somewhere on the Internet? Personally, I would use the least squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it by myself, I don't want to bother you so much.
SOLUTION: (from your answers)
For me it's the Kabsch algorithm;
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need scale. Scale values from SVD are not understandable to me; when I need a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which is not helping at all.
Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper which extends it to get scale. All you need to do is to get the standard deviation of your points and calculate scale as:
(1/sigma^2)*trace(D*S)
where D is the diagonal matrix in SVD decomposition in the rotation estimation and S is either identity matrix or [1 1 -1] diagonal matrix, depending on the sign of determinant of UV (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by +-1 (depending on the sign of determinant of UV), sum them and divide by the standard deviation of your points to get scale.
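As a concrete (unofficial) numpy sketch of Umeyama's scale estimate, where src and dst are (N,3) arrays of corresponding points and dst ~ scale*R*src + t:
import numpy as np

def umeyama_scale(src, dst):
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    cov = dst_c.T @ src_c / len(src)           # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # reflection correction
    sigma2 = (src_c ** 2).sum() / len(src)     # variance of the source points
    return np.trace(np.diag(D) @ S) / sigma2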
You can recycle the following code, which uses the Eigen library:
#include <vector>
#include <utility>
#include <Eigen/Dense>
#include <Eigen/Geometry>

typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. for other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d

/**
 *	@brief rigidly aligns two sets of poses
 *
 *	This calculates such a relative pose <tt>R, t</tt>, such that:
 *
 *	@code
 *	_TyVector v_pose = R * r_vertices[i] + t;
 *	double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 *	@endcode
 *
 *	The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 *	@param[in] r_vertices is a set of vertices to be aligned
 *	@param[in] r_tar_vertices is a set of vertices to align to
 *
 *	@return Returns a relative pose that rigidly aligns the two given sets of poses.
 *
 *	@note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
    const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
    _ASSERTE(r_tar_vertices.size() == r_vertices.size()); // MSVC-specific; use assert() elsewhere
    const size_t n = r_vertices.size();

    Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        v_center_tar3 += r_tar_vertices[i];
        v_center3 += r_vertices[i];
    }
    v_center_tar3 /= double(n);
    v_center3 /= double(n);
    // calculate centers of positions, potentially extend to 3D

    double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
    Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
        Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
        // get both vertices

        f_sd2 += v_vert_i.squaredNorm();
        f_sd2_tar += v_vert_i_tar.squaredNorm();
        // accumulate squared standard deviation (only one of those is really needed)

        t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
        // accumulate covariance
    }
    // calculate the covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    // calculate the SVD

    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    // compute the rotation

    double f_det = R.determinant();
    Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
    // calculate determinant of V*U^T to disambiguate rotation sign

    if(f_det < 0)
        R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    // recompute the rotation part if the determinant was negative

    R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
    // renormalize the rotation (not needed but gives slightly more orthogonal transformations)

    double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
    double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
    // calculate the scale

    R *= f_inv_scale;
    // apply scale

    Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
    // want to align center with ground truth

    return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation is available from Eigen http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60 and the paper http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other one by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:
P * X = Q
To find proper scale/rotation we just need to solve this matrix equation, find X, then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transpose P^T (to get rid of the non-square matrices), then multiply both sides by the inverse of P^T * P:
P^T * P * X = P^T * Q
X = (P^T * P)^{-1} * P^T * Q
Applying Singular value decomposition to matrix X gives two rotation matrices and a matrix with scale factors:
X = U * S * V
Here S is a diagonal matrix with scale factors (one scale for each coordinate), U and V are rotation matrices, one properly rotates the points so that they may be scaled along the coordinate axes, other one rotates them once more to align their orientation to second set of points.
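As a quick numpy illustration of this pipeline on synthetic data (np.linalg.lstsq is a numerically safer stand-in for the explicit normal-equation inverse above):
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((10, 3))            # rows are (centered) points
true_X = np.diag([4.0, 3.0, 2.0])           # pretend the operator is pure scaling
Q = P @ true_X
X, *_ = np.linalg.lstsq(P, Q, rcond=None)   # solve P * X = Q in least squares
U, S, Vt = np.linalg.svd(X)                 # S ~ [4, 3, 2], the per-axis scales
print(S)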
Example (2D points are used for simplicity):
P =  1  2      Q =   7.5391   4.3455
     2  3            12.9796   5.8897
    -2  1            -4.5847   5.3159
    -1 -6           -15.9340 -15.5511
After solving the equation:
X = 3.3417 -1.2573
    2.0987  2.8014
After SVD decomposition:
U = -0.7317 -0.6816
    -0.6816  0.7317
S = 4 0
    0 3
V = -0.9689 -0.2474
    -0.2474  0.9689
Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.
If sets of points are scaled uniformly (scale factor is equal by each axis), this procedure may be significantly simplified.
Just use the Kabsch algorithm to get the translation/rotation values. Then apply this translation and rotation (so the centroids coincide with the origin of the coordinate system). Then for each pair of points (and for each coordinate) estimate a linear regression. The linear regression coefficient is exactly the scale factor; see the sketch below.
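A minimal numpy sketch of that regression step, assuming the points are already centered and rotated into agreement (synthetic data here):
import numpy as np

rng = np.random.default_rng(1)
src = rng.standard_normal((100, 3))           # centered, already-rotated source
dst = 2.5 * src                               # target: uniform scale of 2.5
scale = (src * dst).sum() / (src ** 2).sum()  # zero-intercept least squares slope
print(scale)                                  # ~2.5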
A good explanation: Finding optimal rotation and translation between corresponding 3D points
The code is in Matlab but it's trivial to convert to OpenCV using the cv::SVD function
You might want to try ICP (Iterative closest point).
Given two sets of 3d points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a c++ lightweight implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes Analysis. It works by superimposing the objects on top of each other and tries to estimate the transformation from that setting. It has been used in the context of ICP many times. In fact, your preference, the Kabsch algorithm, is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, while being quite efficient. A Matlab implementation is also available.
Scale can be inferred without SVD, if your points are uniformly scaled in all directions (I could not make sense of SVD's scale matrix either). Here is how I solved the same problem:
Measure distances of each point to other points in the point cloud to get a 2d table of distances, where entry at (i,j) is norm(point_i-point_j). Do the same thing for the other point cloud, so you get two tables -- one for original and the other for reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table has all values being equal to each other, and this is the scale.
The median value of the divisions should be pretty close to the scale you are looking for. The mean value is also close, but I chose median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: If there are too many points in the point clouds to find distances between all of them, then a smaller subset of distances will work, too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g when one point cloud is directly derived from the other by just rotating it.
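A short numpy/scipy sketch of this distance-ratio idea on synthetic data:
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
original = rng.random((50, 3))
reconstructed = 3.0 * original              # pretend the true scale is 3
ratios = pdist(reconstructed) / pdist(original)
scale = np.median(ratios)                   # median to exclude outliers
print(scale)                                # ~3.0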
You can also use the ScaleRatio ICP proposed by BaoweiLin.
The code can be found on GitHub.

Finding the spin of a sphere given X, Y, and Z vectors relative to sphere

I'm using Electro in Lua for some 3D simulations, and I'm running in to something of a mathematical/algorithmic/physics snag.
I'm trying to figure out how I would find the "spin" of a sphere that is spinning on some axis. By "spin" I mean a vector along the axis that the sphere is spinning on with a magnitude relative to the speed at which it is spinning. The reason I need this information is to be able to slow down the spin of the sphere by applying reverse torque to the sphere until it stops spinning.
The only information I have access to is the X, Y, and Z unit vectors relative to the sphere. That is, each frame, I can call three different functions, each of which returns a unit vector pointing in the direction of the sphere model's local X, Y and Z axes, respectively. I can keep track of how each of these change by essentially keeping the "previous" value of each vector and comparing it to the "new" value each frame. The question, then, is how would I use this information to determine the sphere's spin? I'm stumped.
Any help would be great. Thanks!
My first answer was wrong. This is my edited answer.
Your unit vectors X,Y,Z can be put together to form a 3x3 matrix:
A = [[x1 y1 z1],
     [x2 y2 z2],
     [x3 y3 z3]]
Since X,Y,Z change with time, A also changes with time.
A is a rotation matrix!
After all, if you let i=(1,0,0) be the unit vector along the x-axis, then
A i = X so A rotates i into X. Similarly, it rotates the y-axis into Y and the
z-axis into Z.
A is called the direction cosine matrix (DCM).
So using the DCM to Euler axis formula
Compute
theta = arccos((A_11 + A_22 + A_33 - 1)/2)
theta is the Euler angle of rotation.
The magnitude of the angular velocity, |w|, equals
w = d(theta)/dt ~= (theta(t+dt)-theta(t)) / dt
The axis of rotation is given by e = (e1,e2,e3) where
e1 = (A_32 - A_23)/(2 sin(theta))
e2 = (A_13 - A_31)/(2 sin(theta))
e3 = (A_21 - A_12)/(2 sin(theta))
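A compact numpy sketch of these formulas (assuming sin(theta) is not near zero, where the axis is undefined):
import numpy as np

def axis_angle(A):
    theta = np.arccos((np.trace(A) - 1.0) / 2.0)
    s = 2.0 * np.sin(theta)
    e = np.array([A[2, 1] - A[1, 2],    # e1 = (A_32 - A_23) / (2 sin(theta))
                  A[0, 2] - A[2, 0],    # e2 = (A_13 - A_31) / (2 sin(theta))
                  A[1, 0] - A[0, 1]]) / s
    return e, theta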
I applaud ~unutbu's answer, but I think there's a simpler approach that will suffice for this problem.
Take the X unit vector at three successive frames, and compare them to get two deltas:
deltaX1 = X2 - X1
deltaX2 = X3 - X2
(These are vector equations. X1 is a vector, the X vector at time 1, not a number.)
Now take the cross-product of the deltas and you'll get a vector in the direction of the rotation vector.
Now for the magnitude. The angle between the two deltas is the angle swept out in one time interval, so use the dot product:
dx1 = deltaX1/|deltaX1|
dx2 = deltaX2/|deltaX2|
costheta = dx1.dx2
theta = acos(costheta)
w = theta/dt
For the sake of precision you should choose the unit vector (X, Y or Z) that changes the most.
