Given three angular velocities vx, vy, vz about the x, y and z axes, measured in radians per second, as derived from an IMU's rate gyro, how do I produce an equivalent quaternion for the entire rotation between one sample and the next, i.e. the integral of rotation over time dt between the current sample and the previous sample?
The primary issue is that these three angular velocities are measured independently of each other, and yet rotations are not commutative. This means the order in which the angular velocities are applied during the integration would affect the computed quaternion, just as converting Euler angles to a quaternion produces a different quaternion depending on the order in which the Euler rotations are applied (e.g. x, then y, then z, vs. some other order).
I think the right thing to do is to split the timestep dt into N shorter sub-intervals (say N=10), divide each velocity by N, giving vx' = vx/N, vy' = vy/N, vz' = vz/N, and then apply the rotations N times in round-robin fashion, largest to smallest, computing the actual rotation over each dt/N interval and accumulating it into the final rotation quaternion.
I see a lot of references to quaternion derivatives when related questions are asked though, and I wonder if it might be possible to convert the angular velocities (which are derivatives of Euler angles) directly to a quaternion derivative (again though probably suffering from axis ordering sensitivity), then somehow integrate the quaternion derivative to convert back to a quaternion spanning time dt.
Seems like there should be a "right" way to do this, since every IMU that uses a rate gyro has to solve this problem. Any insights into this would be greatly appreciated!
I found the answer in this excellent post by Ashwin Narayan.
Update (1): the rowan library implements the necessary quaternion exponentiation in Python.
Update (2): User harold pointed to this answer, which shows the same quaternion exponentiation in C++ code, which is more legible than the NumPy code in rowan.
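For concreteness, here is a minimal sketch of this kind of exponential-map integration in Python/numpy. It is my own illustration rather than the code from the linked post or libraries; the helper names and the scalar-first [w, x, y, z] convention are assumptions:

import numpy as np

def quat_multiply(a, b):
    # Hamilton product, scalar-first convention [w, x, y, z].
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    # omega: body-frame angular rates [vx, vy, vz] in rad/s.
    # The delta quaternion is exp(omega * dt / 2): a rotation by
    # |omega|*dt about the axis omega/|omega|. All three rates are
    # applied simultaneously, so no axis-ordering problem arises.
    speed = np.linalg.norm(omega)
    theta = speed * dt
    if theta < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])   # effectively no rotation
    else:
        axis = omega / speed
        dq = np.concatenate(([np.cos(theta/2)], np.sin(theta/2) * axis))
    q_new = quat_multiply(q, dq)              # body-frame rates: right-multiply
    return q_new / np.linalg.norm(q_new)      # renormalize against drift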
Related
Is the result of combining two quaternion rotations the same as that of two matrices and then converting that into a quaternion?
I have a quaternion (q1) and rotation matrix (m2) as input for a function (unfortunately non-negotiable) and would like to rotate the initial quaternion by the matrix resulting in a new quaternion. I have tried a fair few ways of doing this and have slightly bizarre results.
If I convert q1 into a matrix (m1), calculate m2.m1 and convert the result into a quaternion I get what is a likely quaternion result. However if I convert m2 into a quaternion using the exact same function and multiply those together (in both orders, I know it's non-commutative) I get something entirely different. I would like to realise the quaternion combination so that I can eventually SLERP from the current quaternion to the result.
All functions have come from here: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/index.htm and are being implemented in C++ and Mathematica for testing.
There is an exact correspondence between 3x3 rotation matrices and unit quaternions, up to a sign change in the quaternion (the sign is irrelevant when it comes to performing rotations on 3D vectors).
This means that given two quaternions, q1, q2, and their corresponding matrices, m1, m2, the action of the quaternions on a vector v is the same as the action of the matrices on v:
q2*(q1*v*(q1^-1))*(q2^-1) = m2*m1*v
If your program does not achieve this result with an arbitrary vector v, there is likely an error in your formula somewhere.
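As a sanity check, here is a short numpy sketch of that identity, using scipy for convenience (the library choice is mine; scipy's Rotation handles the quaternion conventions internally):

import numpy as np
from scipy.spatial.transform import Rotation as R

q1, q2 = R.random(), R.random()
m1, m2 = q1.as_matrix(), q2.as_matrix()
v = np.random.rand(3)

# q2*(q1*v*(q1^-1))*(q2^-1) versus m2*m1*v
lhs = q2.apply(q1.apply(v))
rhs = m2 @ m1 @ v
assert np.allclose(lhs, rhs)

# composing in quaternion space also matches composing the matrices
assert np.allclose((q2 * q1).as_matrix(), m2 @ m1)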
I have multiple estimates for a transformation matrix, from mapping two point clouds to each other via ICP (Iterative Closest Point).
How can I generate the average transformation matrix for all these matrices?
Each matrix consists of a rigid translation and a rotation only, no scale or skew.
Ideally I would also like to calculate a weighted average, but an unweighted one is fine for now.
Averaging the translation vectors is of course trivial, but the rotations are problematic. One approach I found is averaging the individual basis vectors of the rotations, but I am not sure that will result in a new orthonormal basis, and the approach seems a little ad hoc.
Splitting the transformation in translation and rotation is a good start. Averaging the translation is trivial.
Averaging the rotation is not that easy. Most approaches will use quaternions. So you need to transform the rotation matrix to a quaternion.
The easiest way to approximate the average is a linear blending, followed by renormalization of the quaternion:
q* = w1 * q1 + w2 * q2 + ... + wn * qn
normalize q*
However, this is only an approximation. The reason is that the combination of two rotations is not performed by adding the quaternions, but by multiplying them. If we convert the quaternions to a logarithmic space, we can use a simple linear blend (because multiplications become additions), then transform the result back to the original space. This is the idea of the Spherical Average (Buss 2001). If you're lucky, you'll find a library that supports log and exp of quaternions:
start with q* as above
do until convergence
    for each input quaternion i (index)
        diff = q[i] * inverse(q*)
        u[i] = log(diff, base q*)
    // now perform the linear blend
    adapt   := zero quaternion
    weights := 0
    for each input quaternion i
        adapt   += weight[i] * u[i]
        weights += weight[i]
    adapt *= 1/weights
    adaptInOriginalSpace = q* ^ adapt   (^ is the power operator)
    q* = adaptInOriginalSpace * q*
You can define a threshold for adaptInOriginalSpace: if it is a very small rotation, you can break the loop. This algorithm is proven to preserve geodesic distances on a sphere.
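If no such library is at hand, here is a runnable sketch of the loop above in Python, using scipy's rotation vectors as the log/exp maps (the library choice, function name, and convergence threshold are mine, not from Buss's paper):

import numpy as np
from scipy.spatial.transform import Rotation as R

def spherical_average(rotations, weights, tol=1e-9, max_iter=100):
    # rotations: list of scipy Rotation objects, weights: array-like.
    weights = np.asarray(weights, dtype=float)
    mean = rotations[0]   # initial guess (the normalized linear blend also works)
    for _ in range(max_iter):
        # log-map every rotation into the tangent space at the current mean
        u = np.array([(r * mean.inv()).as_rotvec() for r in rotations])
        # linear blend in log space
        adapt = (weights[:, None] * u).sum(axis=0) / weights.sum()
        # exp-map the blended update back and apply it
        mean = R.from_rotvec(adapt) * mean
        if np.linalg.norm(adapt) < tol:   # very small update: converged
            break
    return mean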
http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation and http://en.wikipedia.org/wiki/Rotation_matrix#Quaternion will give you some elegant mathematics and a way to turn a rotation matrix into an angle of rotation round an axis of rotation. There will be two possible representations of each rotation, with different signs for both angle of rotation and axis of rotation.
You could convert everything and normalize it to have positive angles of rotation, then work out the average angle of rotation and the average axis of rotation, renormalising the latter into a unit vector.
OTOH if your intention is to work out the most accurate possible estimate of the transformation, you need to write down some measure of the goodness of fit of any candidate transformation - a sum of squared errors is often mathematically convenient - and then solve an optimization problem to work out which transformation minimizes the sum of squared errors. This is at least easier to justify than taking an average of individually error-prone estimates, and may well be more accurate.
If you have an existing lerp method, then there is a trivial solution:
count = 1
average_transform = Matrix.Identity(4)
for new_transform in list_of_matrices:
    factor = 1.0 / count
    average_transform = lerp(average_transform, new_transform, factor)
    count += 1
This is mainly useful because far more mathematics packages can lerp matrices than can average many of them.
Because I haven't come across this method elsewhere, here's an informal proof:
If there is one matrix, use just that matrix (factor will equal 1 for first matrix)
If there are two matrices, we need 50% of the second one (second factor is 50% so we lerp to half way between the existing first one and the new one)
If there are three matrices we need 33% of each, or 66% of the average of the first two and 33% of the third. The lerp factor of 0.3333 makes this happen.
And so on.
I haven't tested extensively with matrices, but I've used this successfully as a rolling average for other datatypes.
The singular value decomposition (SVD) can be used here.
Take the SVD of the sum of the rotation matrices, and then the average rotation matrix is simply given by Ravg = UV'.
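A numpy sketch of this projection (the function name is mine). One caveat the answer leaves out: the bare projection can yield a reflection, so it is standard practice to guard the determinant:

import numpy as np

def average_rotation(matrices):
    # Project the elementwise sum of rotation matrices back onto SO(3).
    M = np.sum(np.asarray(matrices), axis=0)
    U, _, Vt = np.linalg.svd(M)
    # Guard against a reflection (det = -1) in the projection.
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt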
"sdfgeoff" I can't comment in your answer because I'm new here, but you are the most correct, I think. Beutifull and elegant solution, by the way. Would be perfect if you use Spherical Linear Interpolation (SLERP) with quaternions, instead of Linear Interpolation (LERP) because quaternions that map rotations (quaternions with norm 1) define a sphere in 4D, and interpolating between then is in fact interpolate between two point in a sphere surface.
From my experience with point cloud registration, I would like to say that this will not work. ICP does not return random rotations scattered in the neighborhood of the correct rotation. You need to use a better algorithm to register your point clouds (global registration algorithms such as FPFH, 4PCS, K4PCS, BSC, FGR, etc.), or a better initial guess for the transformation. ICP will give you either totally wrong rotations (when stuck in a local minimum) or almost perfect rotations (when initialized with a good initial transformation).
Conclusion: averaging will not work.
I would suggest taking a look at "Average" of multiple quaternions? for a more elaborate discussion on how to compute the average of rotations.
I have a quaternion which holds the rotation of an object. During the frame I modify it and obtain a new quaternion. I can calculate a quaternion that rotates from 'previous frame' to 'current frame'.
I cannot figure out, however, how to 'divide by t' this quaternion to get the rotation-per-second that I need.
I.e., based on the timestep, I need to know what the quaternion would look like had it been applied to itself X times (meaning 28.5 times at 28.5 fps, and so on).
Would anybody know how to do this? Or would you advise me to do something akin to converting to Euler angles, multiplying, and then converting back?
Since combining rotations is equivalent to quaternion multiplication, repeating a rotation X times is equivalent to exponentiation: pow(q, X), with X = 1/t here; equivalently exp(ln(q)*X) = exp(ln(q)/t). See how to calculate these here.
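In code, the exponentiation can be done through the axis-angle (log) form of the quaternion. A minimal Python sketch, assuming a unit quaternion in scalar-first [w, x, y, z] order (the convention and names are mine):

import numpy as np

def quat_pow(q, x):
    # q = [w, vx, vy, vz], unit quaternion; returns q**x via axis-angle:
    # w = cos(theta/2), so q**x rotates by x*theta about the same axis.
    w, v = q[0], np.asarray(q[1:])
    half = np.arccos(np.clip(w, -1.0, 1.0))   # half the rotation angle
    s = np.linalg.norm(v)
    if s < 1e-12:                              # identity rotation
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = v / s
    return np.concatenate(([np.cos(x*half)], np.sin(x*half) * axis))

# per-second rate from a per-frame delta at 28.5 fps:
# q_per_second = quat_pow(q_frame, 28.5)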
Tell me if I am wrong.
I'm starting to use quaternions. Using a 4x4 rotation matrix (as used in OpenGL), I can compute the model view matrix by multiplying the current model view with a rotation matrix. The rotation matrix is derived from the quaternion.
As I understand it, the quaternion is a direction vector (not necessarily normalized) and a rotation angle; the resulting rotation depends on the direction vector's magnitude and on the quaternion's w component.
But why should I use quaternions instead of Euler axis/angle notation? The latter is simpler to visualize and to manage...
All the information I found can be summarized by this beautiful article:
http://en.wikipedia.org/wiki/Rotation_representation
Why it is better to use quaternions is explained in the article.
More compact than the DCM representation and less susceptible to round-off errors
The quaternion elements vary continuously over the unit sphere in R4 (denoted by S3) as the orientation changes, avoiding discontinuous jumps (inherent to three-dimensional parameterizations); such jumps are often referred to as gimbal lock.
Expression of the DCM in terms of quaternion parameters involves no trigonometric functions
It is simple to combine two individual rotations represented as quaternions using a quaternion product
Unlike Euler angles, quaternions don't suffer from gimbal lock.
I disagree that quaternions are easier to visualize, but the main reason for using them is that it's easy to concatenate rotations without "matrix creep".
Quaternions are generally used for computational simplicity - it's a lot easier (and faster) to do things like composing transformations when using quaternions. To quote the Wikipedia page you linked,
Combining two successive rotations, each represented by an Euler axis and angle, is not straightforward, and in fact does not satisfy the law of vector addition, which shows that finite rotations are not really vectors at all. It is best to employ the direction cosine matrix (DCM), or tensor, or quaternion notation, calculate the product, and then convert back to Euler axis and angle.
They also do not suffer from gimbal lock, a problem common to the Euler-angle form.
Quaternions are easier to visualize, manage and create in scenarios where you want to rotate about a particular axis that can be easily calculated. Determining a single rotation angle is much easier than decomposing a rotation into multiple angles.
Corrections to the OP: the vector represents the axis of rotation, not a direction, and the rotation component is the cosine of the half-angle, not the angle itself.
As mentioned, quaternions don't suffer from gimbal lock.
For a given rotation, there is exactly one normalized quaternion representation (up to the sign ambiguity: q and -q encode the same rotation).
There can be several seemingly unrelated axis/angle values that result in the same rotation.
Quaternion rotations can be easily combined.
It is extraordinarily complex to calculate an axis/angle notation that is the composition of two other axis/angle rotations.
Floating-point numbers represent values between 0.0 and 1.0 with high precision, which suits the components of a unit quaternion.
The short answer is that axis/angle notation can initially seem like the most reasonable representation, but in practice quaternions alleviate many problems that axis/angle notation presents.
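To make the combination point concrete, here is a small sketch: composing two axis/angle rotations takes one quaternion multiply, and the combined axis/angle falls straight out of the product. This is my own numpy illustration (scalar-first [w, x, y, z] convention):

import numpy as np

def axis_angle_to_quat(axis, angle):
    # unit quaternion encoding a rotation of `angle` about `axis`
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle/2)], np.sin(angle/2) * axis))

def quat_mul(a, b):
    # Hamilton product; quat_mul(q2, q1) applies q1 first, then q2
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

# 90 deg about z, then 90 deg about x, combined with one multiply:
q = quat_mul(axis_angle_to_quat([1, 0, 0], np.pi/2),
             axis_angle_to_quat([0, 0, 1], np.pi/2))
# recover the combined axis/angle from the product:
angle = 2 * np.arccos(q[0])          # 120 degrees
axis = q[1:] / np.sin(angle/2)       # (1, -1, 1) / sqrt(3)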
Finding the angle between two vectors is not hard using the cosine rule. However, because I am programming for a platform with very limited resources, I would like to avoid calculations such as sqrt and arccos. Even simple divisions should be limited as much as possible.
Fortunately, I do not need the angle per se, but only need some value that is proportional to said angle.
So I am looking for some computationally cheap algorithm to calculate a quantity that is related to the angle between two vectors. So far, I haven't found something that fits the bill, nor have I been able to come up with something myself.
If you don't need the actual Euclidean angle, but something you can use as a basis for angle comparisons, then switching to taxicab geometry may be a good choice, because you can drop trigonometry and its slowness while maintaining accuracy (or at least with only a really minor loss of accuracy; see below).
In the main modern browser engines the speedup factor is between 1.44 and 15.2, and the accuracy is nearly the same as atan2's. Calculating the diamond angle is on average 5.01 times faster than atan2, and using inlined code in Firefox 18 the speedup reaches a factor of 15.2. Speed comparison: http://jsperf.com/diamond-angle-vs-atan2/2.
The code is very simple:
function DiamondAngle(y, x)
{
    // Maps a direction to a monotonic pseudo-angle in [0, 4),
    // one unit per quadrant, using only comparisons, adds and one division.
    if (y >= 0)
        return (x >= 0 ? y/(x+y) : 1-x/(-x+y));
    else
        return (x < 0 ? 2-y/(-x-y) : 3+x/(x-y));
}
The above code gives you an angle between 0 and 4, while atan2 gives an angle between -PI and PI.
Note that the diamond angle is always positive and in the range 0-4, while atan2 also returns negative radians, so the diamond angle is in a sense more normalized. Another note is that atan2 gives a slightly more precise result, because its range has length 2*pi (i.e. 6.283185307179586) while the diamond angle's range has length 4. In practice this is rarely significant: for example, rad 2.3000000000000001 and 2.3000000000000002 both map to the diamond angle 1.4718731421442295, but if we lower the precision by dropping one zero, rad 2.300000000000001 and 2.300000000000002 give different diamond angles. This precision loss in diamond angles is so small that it has a significant influence only when the distances are huge. You can play with the conversions at http://jsbin.com/bewodonase/1/edit?output (old version: http://jsbin.com/idoyon/1).
The above code is enough for fast angle comparisons, but in many cases you need to convert a diamond angle to radians and vice versa. If, for example, you have some tolerance expressed as a radian angle, and then a loop of 100,000 iterations where this tolerance is compared against other angles, it's not wise to do the comparisons with atan2. Instead, before the loop, convert the radian tolerance to a taxicab (diamond angle) tolerance, and do the in-loop comparisons with the diamond tolerance; that way you don't have to use slow trigonometric functions in the speed-critical parts of the code (i.e. in loops).
The code that makes this conversion is this:
function RadiansToDiamondAngle(rad)
{
    // Project the angle onto the unit circle, then measure it in taxicab space.
    var P = {"x": Math.cos(rad), "y": Math.sin(rad)};
    return DiamondAngle(P.y, P.x);
}
As you can see, it uses cos and sin. As you know, they are slow, but you don't have to do the conversion inside the loop, only once before it, so the overall speedup remains huge.
And if for some reason you have to convert a diamond angle back to radians, e.g. after the loop, to return the minimum angle found (or whatever) in radians, the code is as follows:
function DiamondAngleToRadians(dia)
{
    var P = DiamondAngleToPoint(dia);
    return Math.atan2(P.y, P.x);
}

function DiamondAngleToPoint(dia)
{
    // Reconstruct a point on the taxicab unit "circle" (|x|+|y| = 1)
    // whose diamond angle is dia.
    return {"x": (dia < 2 ? 1-dia : dia-3),
            "y": (dia < 3 ? ((dia > 1) ? 2-dia : dia) : dia-4)};
}
Here you use atan2, which is slow, but the idea is to use it outside any loops. You cannot convert a diamond angle to radians simply by multiplying by some factor; instead you find the point in taxicab geometry whose diamond angle relative to the positive x axis is the diamond angle in question, and convert that point to radians using atan2.
This should be enough for fast angle comparisons.
Of course there are other atan2 speedup techniques (e.g. CORDIC and lookup tables), but AFAIK they all lose accuracy and may still be slower than atan2.
BACKGROUND: I have tested several techniques: dot products, inner products, the law of cosines, unit circles, lookup tables, etc., but nothing was sufficient in cases where both speed and accuracy are important. Finally I found a page at http://www.freesteel.co.uk/wpblog/2009/06/encoding-2d-angles-without-trigonometry/ which had the desired functions and principles.
At first I assumed that taxicab distances could also be used for accurate and fast distance comparisons, on the idea that a bigger distance in Euclidean space is bigger in taxicab space too. I then realized that, contrary to Euclidean distances, the angle between the start and end point affects the taxicab distance. Only the lengths of vertical and horizontal vectors convert easily and quickly between Euclidean and taxicab; in every other case you have to take the angle into account, and then the process is too slow (?).
So, as a conclusion, I think that in speed-critical applications where there is a loop or recursion involving several comparisons of angles and/or distances, angles are faster to compare in taxicab space, and distances in Euclidean space (squared, without using sqrt).
Have you tried a CORDIC algorithm? It's a general framework for solving polar ↔ rectangular problems with only add/subtract/bitshift plus a table, essentially doing rotation by angles of the form tan^-1(2^-n). You can trade off accuracy against execution time by altering the number of iterations.
In your case, take one vector as a fixed reference, and copy the other to a temporary vector, which you rotate using the cordic angles towards the first vector (roughly bisection) until you reach a desired angular accuracy.
(edit: use the sign of the dot product to determine at each step whether to rotate forward or backward. Although if multiplies are cheap enough to allow using the dot product, then don't bother with CORDIC; perhaps use a table of sin/cos pairs for rotation matrices of angles pi/2^n to solve the problem with bisection.)
(edit: I like Eric Bainville's suggestion in the comments: rotate both vectors towards zero and keep track of the angle difference.)
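For illustration, here is a minimal vectoring-mode CORDIC sketch in Python. It is a sketch under my own assumptions (iteration count, names), valid for x > 0; in fixed-point hardware the multiplications by 2^-i become bit shifts:

import math

# angles tan^-1(2^-i), precomputed once: the only table CORDIC needs
CORDIC_ANGLES = [math.atan(2.0**-i) for i in range(16)]

def cordic_angle(x, y):
    # Vectoring mode: rotate (x, y) toward the +x axis using only
    # shift-like multiplies and adds, accumulating the applied rotation.
    angle = 0.0
    for i, a in enumerate(CORDIC_ANGLES):
        k = 2.0**-i
        if y > 0:                       # rotate clockwise, reducing y
            x, y = x + y*k, y - x*k
            angle += a
        else:                           # rotate counter-clockwise
            x, y = x - y*k, y + x*k
            angle -= a
    return angle                        # ~atan2(y, x) for x > 0

# angle between two vectors = difference of their angles:
# cordic_angle(*v1) - cordic_angle(*v2)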
Back in the days of a few KB of RAM and machines with limited mathematical capabilities, I used lookup tables and linear interpolation. The basic idea is simple: create an array with as much resolution as you need (more elements reduce the error introduced by interpolation), then interpolate between lookup values.
Here is an example in processing (original dead link).
You can do this with your other trig functions as well. On the 6502 processor this allowed full 3D wireframe graphics to be computed with an order-of-magnitude speed increase.
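Since the original example link is dead, here is a minimal sketch of the same idea in Python: a sine table plus linear interpolation (the table size and names are my choices, not from the original example):

import math

TABLE_SIZE = 256
# one extra entry so table[i+1] never runs off the end
SIN_TABLE = [math.sin(2*math.pi*i/TABLE_SIZE) for i in range(TABLE_SIZE + 1)]

def fast_sin(radians):
    # map the angle to a fractional table index, then linearly
    # interpolate between the two nearest entries
    pos = (radians % (2*math.pi)) / (2*math.pi) * TABLE_SIZE
    i = int(pos)
    frac = pos - i
    return SIN_TABLE[i] + frac * (SIN_TABLE[i+1] - SIN_TABLE[i])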
Here on SO I still don't have the privilege to comment (though I do at math.se), so this is actually a reply to Timo's post on diamond angles.
The whole concept of diamond angles based on the L1 norm is most interesting, and if it were merely a question of which vector makes a greater or lesser angle with the positive x axis, it would be sufficient. However, the OP asked about the angle between two generic vectors, and I presume the OP wants to compare it against some tolerance for deciding smoothness/corner status or something like that; unfortunately, with only the formulae provided at jsperf.com or freesteel.co.uk (links above), it seems this is not possible using diamond angles.
Observe the following output from my Asymptote implementation of the formulae:
Vectors : 50,20 and -40,40
Angle diff found by acos : 113.199
Diff of angles found by atan2 : 113.199
Diamond minus diamond : 1.21429
Convert that to degrees : 105.255
Rotate same vectors by 30 deg.
Vectors : 33.3013,42.3205 and -54.641,14.641
Angle diff found by acos : 113.199
Diff of angles found by atan2 : 113.199
Diamond minus diamond : 1.22904
Convert that to degrees : 106.546
Rotate same vectors by 30 deg.
Vectors : 7.67949,53.3013 and -54.641,-14.641
Angle diff found by acos : 113.199
Diff of angles found by atan2 : 113.199
Diamond minus diamond : 1.33726
Convert that to degrees : 116.971
So the point is that you can't take diamond(alpha) - diamond(beta) and compare it to some tolerance, the way you can with the output of atan2. If all you need is diamond(alpha) > diamond(beta), then I suppose diamond angles are fine.
The magnitude of the cross product is proportional to the sine of the angle between two vectors, and when the vectors are normalized and the angle is small, the cross product is very close to the actual angle in radians due to the small-angle approximation.
Specifically:
I1*Q2 - I2*Q1 is proportional to the sine of the angle between (I1, Q1) and (I2, Q2).
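A tiny numpy check of this (names are mine): for unit vectors, the 2D cross product equals sin(angle), which tracks the angle itself closely while the angle stays small:

import numpy as np

def cross_z(a, b):
    # z-component of the 2D cross product; equals sin(angle) for unit vectors
    return a[0]*b[1] - a[1]*b[0]

a = np.array([1.0, 0.0])
for deg in (1, 5, 20):
    rad = np.radians(deg)
    b = np.array([np.cos(rad), np.sin(rad)])
    print(deg, cross_z(a, b), rad)   # cross_z stays close to rad for small angles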
The solution would be trivial if the vectors were defined/stored in polar coordinates instead of (or as well as) Cartesian coordinates.
dot product of two vectors (x1, y1) and (x2, y2) is
x1 * x2 + y1 * y2
and is equivalent to the product of the lengths of the two vectors times the cosine of the angle between them.
So if you normalize the two vectors first (divide the coordinates by the length)
Where length of V1 L1 = sqrt(x1^2 + y1^2),
and length of V2 L2 = sqrt(x2^2 + y2^2),
Then normalized vectors are
(x1/L1, y1/L1), and (x2/L2, y2/L2),
And dot product of normalized vectors (which is the same as the cosine of angle between the vectors) would be
(x1*x2 + y1*y2)
-----------------
(L1*L2)
Of course, this may be just as computationally difficult as calculating the cosine directly.
If you need to compute the square root, consider using the invsqrt hack:
acos((x1*x2 + y1*y2) * invsqrt((x1*x1+y1*y1)*(x2*x2+y2*y2)));
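If the goal is only to decide which of two vectors makes the smaller angle with a reference vector, both sqrt and acos can be avoided entirely by comparing cross-multiplied squared dot products, with a little care for signs. A sketch under my own naming (2D vectors as index-able pairs):

def smaller_angle_first(a, b, r):
    # True if vector a makes a smaller angle with reference r than b does,
    # using only multiplies, adds and comparisons (no sqrt, no acos).
    da = a[0]*r[0] + a[1]*r[1]          # |a||r| cos(angle_a)
    db = b[0]*r[0] + b[1]*r[1]          # |b||r| cos(angle_b)
    if (da >= 0) != (db >= 0):
        return da >= 0                   # the positive-cosine side wins
    la = a[0]*a[0] + a[1]*a[1]          # |a|^2
    lb = b[0]*b[0] + b[1]*b[1]          # |b|^2
    # same sign: compare squared cosines, cross-multiplied to avoid division
    if da >= 0:
        return da*da*lb > db*db*la       # larger cosine = smaller angle
    else:
        return da*da*lb < db*db*la       # both cosines negative: flip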
The dot product might work in your case. It's not proportional to the angle, but "related".