Algorithm to store particle positions on a grid (chaining mesh)

I have a particle distribution, i.e. three arrays x, y and z that give the positions of N particles. I divide my domain into cells and I would like to program an algorithm which tells me how many particles there are in each cell.
I am looking for something that doesn't use too much memory. If the distribution of particles were one-dimensional, a smart idea would be to sort the particles by decreasing x.
In this way we only need to save, for every cell, the particle with the smallest x within the cell. For example, if I know that the 7th particle is the one with the smallest x belonging to cell i, then the particles of cell i form a contiguous range ending at index 7 (starting just after the last particle of the previous cell).
My question is: how can I extend this to 3D? Or, how can I build a chaining mesh?

This is not a trivial problem. You might want to look at R-trees and indeed Spatial databases in general.

I think your problem can be solved much more simply.
Make a 3D array of cells. Loop through your particles and increment the value of the cell the current particle belongs to.
Sample code:
cells = [[[0] * Z for _ in range(Y)] for _ in range(X)]  # X x Y x Z counters
for p in particles:
    cx = min(int((p.x / maxX) * X), X - 1)  # clamp so p.x == maxX stays in range
    cy = min(int((p.y / maxY) * Y), Y - 1)
    cz = min(int((p.z / maxZ) * Z), Z - 1)
    cells[cx][cy][cz] += 1
UPD: this works only if all cells have the same size along each axis (i.e. x1 = x2 = ... = xn, y1 = y2 = ... = yn, ...).
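If you also need to know which particles are in each cell (an actual chaining mesh), and not just how many, the standard trick is one "head" index per cell plus one "next" index per particle, which forms a linked list per cell in O(N + number of cells) memory. A minimal sketch in Python, assuming uniform cells and a hypothetical cell_index(p) helper that computes (cx, cy, cz) as in the loop above:

head = {}                             # (cx, cy, cz) -> index of first particle in that cell
next_in_cell = [-1] * len(particles)  # next_in_cell[i] -> next particle in the same cell

for i, p in enumerate(particles):
    c = cell_index(p)
    next_in_cell[i] = head.get(c, -1)
    head[c] = i

# Walk the particles of one cell:
i = head.get((cx, cy, cz), -1)
while i != -1:
    # ... process particle i ...
    i = next_in_cell[i]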

What is the best way to check all pixels within a certain radius?

I'm currently developing an application that will alert users of incoming rain. To do this I want to check a certain area around the user's location for rainfall (different pixel colours indicate rain intensity on the radar image). I would like the checked area to be a circle, but I don't know how to do this efficiently.
Let's say I want to check a radius of 50 km. My current idea is to take a subset of the image of size 100 km × 100 km (user + 50 km west, east, north and south) and then check, for each pixel in this subset, whether it's closer to the user than 50 km.
My question here is: is there a better solution that is used for this type of problem?
If the occurrence of the event you are searching for (rain or anything else) is relatively rare, then there's nothing wrong with scanning a square of pixels and, only after detecting rain in that square, checking whether that rain is within the desired 50 km circle. The key point is that you don't need to check each pixel of the square for being inside the circle (that would be very inefficient); you search for your event (rain) first, and only when you find it do you check whether it falls into the 50 km circle. To implement this efficiently you also have to develop some smart strategy for handling multi-pixel "stains" of rain on your image.
However, since you are scanning a raster image, you can easily implement the well-known Bresenham circle algorithm to find the starting and ending point of the circle for each scan line. That way you can limit your scan to the desired 50 km radius.
On second thought, you don't even need the Bresenham algorithm for that. For each row of pixels in your square, calculate the points where that row intersects the 50 km circle (using the usual schoolbook formula with a square root), and then check all pixels that fall between these intersection points. Process all rows in the same fashion and you are done.
P.S. Unfortunately, the Wikipedia page I linked does not present the Bresenham algorithm at all; it has code for the Michener circle algorithm instead. The Michener algorithm also works for circle rasterization, but it is less precise than Bresenham's. If you care about precision, find a true Bresenham implementation somewhere. It is actually surprisingly difficult to find on the net: most search hits erroneously present Michener as Bresenham.
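A minimal sketch of the first idea above (scan the square, verify the circle only on a rain hit) in Python; is_rain and alert_user are hypothetical helpers, and (ux, uy) is the user's position and r the radius, both in pixels:

r2 = r * r
for py in range(uy - r, uy + r + 1):
    for px in range(ux - r, ux + r + 1):
        if is_rain(px, py):                # cheap test first, hits are rare
            dx, dy = px - ux, py - uy
            if dx * dx + dy * dy <= r2:    # circle test only on a hit
                alert_user()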
There is: you can modify the midpoint circle algorithm to give you, for each y, the x coordinate where the circle starts (and, by symmetry, where it ends). This array is easy to compute; pseudocode below.
Then you can just iterate over exactly the right part, without checking anything.
Pseudocode:
int[] data = new int[radius];  // data[radius - d] = half-width of the scan line at distance d from the center
int f = 1 - radius;
int ddF_x = 1;
int ddF_y = -2 * radius;
int x = 0;
int y = radius;
while (x < y)
{
    if (f >= 0)  // the midpoint fell outside the circle: step y inwards
    {
        y--;
        ddF_y += 2;
        f += ddF_y;
    }
    x++;
    ddF_x += 2;
    f += ddF_x;
    data[radius - y] = x;  // fill both octants using the circle's symmetry
    data[radius - x] = y;
}
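To consume the table, each scan line at vertical distance d from the center spans from -data[radius - d] to +data[radius - d]. A hedged usage sketch in Python (check_pixel is a hypothetical per-pixel test, (ux, uy) the center; the center row uses half-width radius and the single-pixel rows at dy = ±radius are skipped for brevity):

for dy in range(-(radius - 1), radius):
    w = radius if dy == 0 else data[radius - abs(dy)]  # half-width of this row
    for dx in range(-w, w + 1):
        check_pixel(ux + dx, uy + dy)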
Maybe you can try something that will speed up your algorithm.
A brute-force algorithm would use the equation:
(x-p)^2 + (y-q)^2 < r^2
(p,q) - center of the circle (the user's position)
r - radius (50 km)
Finding all pixels (x,y) that satisfy this condition and checking them makes the algorithm O(n^2), with n the width of the bounding square in pixels.
Instead of scanning all pixels inside the circle, I would check only the pixels that lie on the border of the circle.
In that case, you can use a cleverer way to define the circle:
x = p + r*cos(a)
y = q + r*sin(a)
a - angle measured in radians, in [0, 2*pi]
Now you can sample some angles, for example twenty of them, iterate, and find all pairs (x,y) that lie on the border of the 50 km circle. Then check whether they are in the rain zone and alert the user.
For more safety I recommend using multiple radii (smaller than 50 km), because the whole rain cloud could be inside the circle, in which case your app would not recognize it. For example, use 3 inner circles (r = 5 km, 15 km, 30 km) and do the same thing. The efficiency of this algorithm depends only on the number of angles and the number of inner circles.
Pseudocode will be:
checkRainDanger():
    (p, q) <- user position
    radius[] <- array of radii
    for c = 1 to length(radius)
        a = 0
        while a < 2*pi
            x = p + radius[c]*cos(a)
            y = q + radius[c]*sin(a)
            if rainZone(x, y)
                return true
            a += pi/10
        end_while
    end_for
    return false  // no danger
r2 = r * r
for x in range(-r, r + 1):
    max_y = int((r2 - x * x) ** 0.5)
    for y in range(-max_y, max_y + 1):
        # (x, y) is within the radius - check for rain at this offset from the user
        ...

Can I calculate a transformation matrix given a set of points?

I'm trying to deduce the 2D transformation parameters from the result.
Given is a large number of samples in an unknown X-Y coordinate system, as well as their respective counterparts in WGS84 (longitude, latitude). Since the area is small, we can assume the target system to be flat, too.
Sadly I don't know which order of scale, rotate and translate was used, and I'm not even sure whether there were 1 or 2 translations.
I tried to set up a lengthy equation system, but that ended up too complex for me to handle. Basic geometry also failed me, as the order of transformations is unknown and I would have to check every possible ordering.
Is there a systematic approach to this problem?
Figuring out the scaling factor is easy: just choose any two points, find the distance between them in your X-Y space and in WGS84 space, and the ratio of the two is your scaling factor.
The rotations and translations are a little trickier, but not nearly as difficult once you learn that the result of applying any number of rotations and translations (in 2 dimensions only!) can be reduced to a single rotation about some unknown point by some unknown angle.
Suddenly you have N points to determine 3 unknowns: the axis of rotation (its x and y coordinates) and the angle of rotation.
Calculating the rotation looks like this:
Pr = R*(Pxy - Paxis_xy) + Paxis_xy
Pr is your rotated point in X-Y space which then needs to be converted to WGS84 space (if the axes of your coordinate systems are different).
R is the familiar rotation matrix depending on your rotation angle.
Pxy is your unrotated point in X-Y space.
Paxis_xy is the axis of rotation in X-Y space.
To actually find the 3 unknowns, you need to un-scale your WGS84 points (or equivalently scale your X-Y points) by the scaling factor you found and shift your points so that the two coordinate systems have the same origin.
First, finding the angle of rotation: take two corresponding pairs of points P1, P1' and P2, P2' and write out
P1' = R(P1-A) + A
P2' = R(P2-A) + A
where I wrote A = Paxis_xy for brevity. Subtracting the two equations gives:
P2' - P1' = R(P2 - P1)
Writing B = P2' - P1' and C = P2 - P1, this is B = R * C, i.e.:
Bx = cos(a) * Cx - sin(a) * Cy
By = sin(a) * Cx + cos(a) * Cy
Solving this 2x2 system for cos(a) and sin(a) gives:
cos(a) = (Bx * Cx + By * Cy) / (Cx^2 + Cy^2)
sin(a) = (By * Cx - Bx * Cy) / (Cx^2 + Cy^2)
a = atan2(sin(a), cos(a)) <-- to get the right quadrant
And you have your angle. You can also do a quick check that cos(a)^2 + sin(a)^2 == 1, to make sure either that you got all the calculations right or that your system really is an orientation-preserving isometry (i.e. consists only of translations and rotations).
Now that we know a, we know R, and so to find A we do:
P1' = R(P1-A) + A
P1' - R*P1 = (I-R)A
A = (inverse(I-R)) * (P1' - R*P1)
where the inversion of a 2x2 matrix is easy.
EDIT: There is an error in the above; or, more specifically, one case that needs to be treated separately.
There is one combination of translations and rotations that does not reduce to a single rotation, and that is a single pure translation. You can think of it in terms of fixed points (how many points are unchanged after the operation).
A translation has no fixed points (all points are changed) and a rotation has 1 fixed point (the axis doesn't change). It turns out that two rotations leave 1 fixed point, and a translation plus a rotation leaves 1 fixed point, which (with a little proof showing that the number of fixed points tells you which operation was performed) is the reason that arbitrary combinations of these reduce to a single rotation.
What this means for you is that if your angle comes out as 0, then R = I and I - R is not invertible, so the method above breaks down. In this case the transformation is a pure translation, given simply by A = P1' - P1.
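A minimal numpy sketch of this recovery, under the answer's assumptions (scale already removed, orientation-preserving transform); the four point values are placeholders:

import numpy as np

P1, P2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])    # points in X-Y space
P1p, P2p = np.array([2.0, 1.0]), np.array([2.0, 2.0])  # their transformed counterparts

B, C = P2p - P1p, P2 - P1
d = C[0]**2 + C[1]**2
cos_a = (B[0]*C[0] + B[1]*C[1]) / d
sin_a = (B[1]*C[0] - B[0]*C[1]) / d
a = np.arctan2(sin_a, cos_a)

R = np.array([[cos_a, -sin_a],
              [sin_a,  cos_a]])
if not np.isclose(a, 0.0):
    A = np.linalg.solve(np.eye(2) - R, P1p - R @ P1)  # center of rotation
else:
    t = P1p - P1                                      # pure translation case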
If I understood the question correctly, you have n points (X1,Y1),...,(Xn,Yn) and the corresponding points, say, (x1,y1),...,(xn,yn) in another coordinate system, where the former are supposedly obtained from the latter by rotation, scaling and translation.
Note that this data does not determine the fixed point of rotation/scaling, or the order in which the operations "should" be applied. On the other hand, if you know these beforehand or choose them arbitrarily, you will find a rotation, translation and scaling factor that transform the data as intended.
For example, you can pick any point, say, p0 = [X1, Y1]^T (a column vector), as the fixed point of rotation and scaling and subtract its coordinates from those of two other points to get p2 = [X2-X1, Y2-Y1]^T and p3 = [X3-X1, Y3-Y1]^T. Also take the column vectors q2 = [x2-x1, y2-y1]^T and q3 = [x3-x1, y3-y1]^T. Now [p2 p3] = A*[q2 q3], where A is an unknown 2x2 matrix representing the roto-scaling. You can solve it (unless you were unlucky and chose degenerate points) as A = [p2 p3] * [q2 q3]^-1, where ^-1 denotes the matrix inverse (of the 2x2 matrix [q2 q3]). Now, if the transformation between the coordinate systems really is a roto-scaling-translation, all the points should satisfy Pk = A * (Qk - q0) + p0, where Pk = [Xk, Yk]^T, Qk = [xk, yk]^T, q0 = [x1, y1]^T, and k = 1,..,n.
If you want, you can quite easily determine the scaling and rotation parameters from the components of A, or combine b = -A * q0 + p0 to get Pk = A*Qk + b.
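A short numpy sketch of that construction (illustrative point values; no handling of degenerate choices or noise):

import numpy as np

# three corresponding points in each system (illustrative values)
P = np.array([[3.0, 1.0], [5.0, 1.0], [3.0, 4.0]])  # (Xk, Yk)
Q = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])  # (xk, yk)

p0, q0 = P[0], Q[0]
A = np.column_stack([P[1] - p0, P[2] - p0]) @ np.linalg.inv(
    np.column_stack([Q[1] - q0, Q[2] - q0]))        # A = [p2 p3] * [q2 q3]^-1
b = p0 - A @ q0                                     # so that Pk = A*Qk + b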
The above method does not react well to noise or choosing degenerate points. If necessary, this can be fixed by applying, e.g., Principal Component Analysis, which is also just a few lines of code if MATLAB or some other linear algebra tools are available.

Finding initial speed and angle to hit a known position (parabolic trajectory)

I am currently making a small turn-based cannon game with XNA 4.0. The game is very simple: the player chooses the speed and angle at which to shoot his rocket in order to hit another player. There is also a randomly generated wind vector that affects the X trajectory of the rocket. I would like to add an AI so that the player can play against the computer in a single-player mode.
The way I would like to implement the AI is very simple: find the velocity and angle that would make the rocket hit the player directly, and add a random modifier to those fields so that the AI doesn't hit the other player every time.
This is the code I use in order to update the position and speed of the rocket:
Vector2 gravity = new Vector2(0, (float)400); // 400 is the sweet-spot value that I have found works best for the gravity
Vector2 totalAcceleration = gravity + _terrain.WindDirection;
float deltaT = (float)gameTime.ElapsedGameTime.TotalSeconds; // Elapsed time since last update() call
foreach (Rocket rocket in _instantiatedRocketList)
{
rocket.RocketSpeed += Vector2.Multiply(gravity, deltaT); // Only changes the Y component
rocket.RocketSpeed += Vector2.Multiply(_terrain.WindDirection, deltaT); // Only changes the X component
rocket.RocketPosition += Vector2.Multiply(rocket.RocketSpeed, deltaT) + Vector2.Multiply(totalAcceleration, (float)0.5) * deltaT * deltaT;
// We update the angle of the rocket accordingly
rocket.RocketAngle = (float)Math.Atan2(rocket.RocketSpeed.X, -rocket.RocketSpeed.Y);
rocket.CreateSmokeParticles(3);
}
I know that the basic equations to find the final X and Y coordinates are:
X = V0 * cos(theta) * totalFlightTime
Y = V0 * sin(theta) * totalFlightTime - 0.5 * g * totalFlightTime^2
where X and Y are the coordinates of the player I want to hit, V0 the initial speed, theta the angle at which the rocket is shot, totalFlightTime is, like the name says, the total flight time of the rocket until it reaches (X, Y), and g is the gravity (400 in my game).
Questions:
What I am having problems with is knowing where to add the wind in those formulas (is it just a matter of adding "+ windDirection * totalFlightTime" to the X equation?), and also what to do with these equations in order to do what I want (find the initial speed and the theta angle), since there are 3 unknowns (V0, theta and totalFlightTime) and only 2 equations.
Thanks for your time.
You can do this as follows:
Assuming there is no specific limit on V0 (i.e. the robot can fire the rocket at any desired speed) and using the substitutions
T = totalFlightTime
Vx = V0*cos(theta)
Vy = V0*sin(theta)
choose an arbitrary value for Vx. Now your first equation simplifies to
X = Vx*T, so T = X/Vx
which solves for T. Now substitute this value of T into the second equation and solve for Vy:
Y = Vy*T - g*T^2/2, so Vy = (Y + g*T^2/2)/T
(using the same sign convention for g as in your equations). Finally you can solve for V0 and theta:
V0 = sqrt(Vx^2 + Vy^2) and theta = atan2(Vy, Vx)
Note that your initial choice of Vx determines the trajectory the missile will take: if Vx is large then T will be small and the trajectory will be almost a straight line (like a bullet fired at a nearby target); if Vx is small then T will be large and the trajectory will be a high arc (like a mortar round's path). You did start with three unknowns (V0, totalFlightTime, and theta), but they are dependent, so choosing any one of them (or, in this case, Vx) plus the two equations determines the other two. You could also pre-determine the flight time and solve for Vx, Vy, theta and V0, or pre-determine theta (although this would be tricky, as some theta values wouldn't yield a real solution).
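A small Python sketch of this procedure. As an addition beyond the answer, it also folds in the wind, treating it the way the question's code does, as a constant horizontal acceleration, so it contributes 0.5*w*T^2 to X just as gravity contributes to Y:

import math

def aim(X, Y, g, wind_x, Vx):
    # Choose horizontal speed Vx; return (V0, theta) so the rocket passes
    # through (X, Y) under Y = Vy*T - 0.5*g*T^2 and X = Vx*T + 0.5*wind_x*T^2.
    if wind_x == 0:
        T = X / Vx
    else:
        disc = Vx * Vx + 2.0 * wind_x * X     # assumes disc >= 0, i.e. the wind
        T = (-Vx + math.sqrt(disc)) / wind_x  # doesn't make X unreachable
    Vy = (Y + 0.5 * g * T * T) / T
    return math.hypot(Vx, Vy), math.atan2(Vy, Vx)

The AI can then perturb the returned V0 and theta with small random offsets, as described in the question.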

Finding translation and scale on two sets of points to get least square error in their distance?

I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs, i.e. which point in one set represents which in the other. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is least (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and may be omitted for the sake of simplicity and speed). And so my question is: is this solved and available somewhere on the Internet? Personally, I would use the least squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it by myself; I don't want to bother you so much.
SOLUTION (from your answers):
For me it's the Kabsch algorithm.
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need scale. The scale values from SVD are not understandable to me; when I expect a scale of about 1-4 for all axes, the SVD scale is about [2000, 200, 20], which does not help at all.
Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper, which extends it to get the scale. All you need to do is get the variance of your points and calculate the scale as:
(1/sigma^2) * trace(D*S)
where D is the diagonal matrix of singular values from the SVD used in the rotation estimation, and S is either the identity matrix or the diag(1, 1, -1) matrix, depending on the sign of the determinant of U*V (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by +-1 (depending on the sign of the determinant of U*V), sum the elements, and divide by the variance sigma^2 of your points to get the scale.
You can recycle the following code, which is using the Eigen library:
typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. for other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d
/**
 * @brief rigidly aligns two sets of poses
 *
 * This calculates such a relative pose <tt>R, t</tt>, such that:
 *
 * @code
 * _TyVector v_pose = R * r_vertices[i] + t;
 * double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 * @endcode
 *
 * The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 * @param[in] r_vertices is a set of vertices to be aligned
 * @param[in] r_tar_vertices is a set of vertices to align to
 *
 * @return Returns a relative pose that rigidly aligns the two given sets of poses.
 *
 * @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
_ASSERTE(r_tar_vertices.size() == r_vertices.size());
const size_t n = r_vertices.size();
Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
for(size_t i = 0; i < n; ++ i) {
v_center_tar3 += r_tar_vertices[i];
v_center3 += r_vertices[i];
}
v_center_tar3 /= double(n);
v_center3 /= double(n);
// calculate centers of positions, potentially extend to 3D
double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
for(size_t i = 0; i < n; ++ i) {
Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
// get both vertices
f_sd2 += v_vert_i.squaredNorm();
f_sd2_tar += v_vert_i_tar.squaredNorm();
// accumulate squared standard deviation (only one of those is really needed)
t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
// accumulate covariance
}
// calculate the covariance matrix
Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
// calculate the SVD
Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
// compute the rotation
double f_det = R.determinant();
Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
// calculate determinant of V*U^T to disambiguate rotation sign
if(f_det < 0)
R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
// recompute the rotation part if the determinant was negative
R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
// renormalize the rotation (not needed but gives slightly more orthogonal transformations)
double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
// calculate the scale
R *= f_inv_scale;
// apply scale
Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
// want to align center with ground truth
return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation (Eigen::umeyama) is available in Eigen: http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60, and the paper is here: http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:
P * X = Q
To find the proper scale/rotation we just need to solve this matrix equation, find X, then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transposed matrix P (to get rid of the non-square matrices), then multiply both sides by the inverse of P^T * P:
P^T * P * X = P^T * Q
X = (P^T * P)^-1 * P^T * Q
Applying singular value decomposition to the matrix X gives two rotation matrices and a matrix with scale factors:
X = U * S * V^T
Here S is a diagonal matrix with scale factors (one scale for each coordinate), and U and V are rotation matrices; one properly rotates the points so that they may be scaled along the coordinate axes, the other rotates them once more to align their orientation with the second set of points.
Example (2D points are used for simplicity):

P =  1   2        Q =   7.5391   4.3455
     2   3             12.9796   5.8897
    -2   1             -4.5847   5.3159
    -1  -6            -15.9340 -15.5511

After solving the equation:

X =  3.3417  -1.2573
     2.0987   2.8014

After the SVD decomposition:

U = -0.7317  -0.6816
    -0.6816   0.7317

S =  4  0
     0  3

V = -0.9689  -0.2474
    -0.2474   0.9689
Here the SVD has properly reconstructed all the manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale the X axis by 4, scale the Y axis by 3, rotate by the angle -0.25.
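For reference, a small numpy sketch reproducing these steps on the example data (a sketch of the method described above, not production code):

import numpy as np

P = np.array([[ 1.0,  2.0],
              [ 2.0,  3.0],
              [-2.0,  1.0],
              [-1.0, -6.0]])
Q = np.array([[  7.5391,   4.3455],
              [ 12.9796,   5.8897],
              [ -4.5847,   5.3159],
              [-15.9340, -15.5511]])

X, *_ = np.linalg.lstsq(P, Q, rcond=None)  # least-squares solution of P @ X = Q
U, S, Vt = np.linalg.svd(X)                # S is approximately [4, 3] here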
If the sets of points are scaled uniformly (the scale factor is equal along each axis), this procedure may be significantly simplified.
Just use the Kabsch algorithm to get the translation/rotation values. Then apply this translation and rotation (the centroids should coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression. The linear regression coefficient is exactly the scale factor.
A good explanation is Finding optimal rotation and translation between corresponding 3D points (http://nghiaho.com/?page_id=671).
The code there is in MATLAB but it's trivial to convert to OpenCV using cv::SVD.
You might want to try ICP (Iterative closest point).
Given two sets of 3D points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a lightweight C++ implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes analysis. It works by superimposing the objects on top of each other and estimating the transformation from that setting. It has been used in the context of ICP many times. In fact, your preference, the Kabsch algorithm, is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, and is quite efficient. A MATLAB implementation is also available.
Scale can be inferred without SVD if your points are uniformly scaled in all directions (I could not make sense of SVD's scale matrix either). Here is how I solved the same problem:
Measure the distances of each point to the other points in the point cloud to get a 2D table of distances, where the entry at (i,j) is norm(point_i - point_j). Do the same thing for the other point cloud, so you get two tables -- one for the original and one for the reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table would have all values equal to each other, and this is the scale.
The median of the ratios should be pretty close to the scale you are looking for. The mean is also close, but I chose the median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: If there are too many points in the point clouds to find distances between them all, then a smaller subset of distances will work too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g. when one point cloud is directly derived from the other by just rotating it.
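A few lines of numpy sketching this approach (assuming orig and recon are (N, 3) arrays of corresponding points):

import numpy as np

def estimate_scale(orig, recon):
    # pairwise distance tables for both clouds (N x N)
    d_orig = np.linalg.norm(orig[:, None, :] - orig[None, :, :], axis=-1)
    d_recon = np.linalg.norm(recon[:, None, :] - recon[None, :, :], axis=-1)
    mask = ~np.eye(len(orig), dtype=bool)  # drop the zero self-distances
    ratios = d_orig[mask] / d_recon[mask]  # corresponding distances divide out
    return np.median(ratios)               # median is robust to outliers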
You can also use the ScaleRatio ICP proposed by BaoweiLin. The code can be found on GitHub.

Finding the spin of a sphere given X, Y, and Z vectors relative to sphere

I'm using Electro in Lua for some 3D simulations, and I'm running into something of a mathematical/algorithmic/physics snag.
I'm trying to figure out how I would find the "spin" of a sphere that is spinning on some axis. By "spin" I mean a vector along the axis that the sphere is spinning on, with a magnitude proportional to the speed at which it is spinning. The reason I need this information is to be able to slow down the spin of the sphere by applying reverse torque to the sphere until it stops spinning.
The only information I have access to is the X, Y, and Z unit vectors relative to the sphere. That is, each frame, I can call three different functions, each of which returns a unit vector pointing in the direction of the sphere model's local X, Y and Z axes, respectively. I can keep track of how each of these change by essentially keeping the "previous" value of each vector and comparing it to the "new" value each frame. The question, then, is how would I use this information to determine the sphere's spin? I'm stumped.
Any help would be great. Thanks!
My first answer was wrong. This is my edited answer.
Your unit vectors X,Y,Z can be put together to form a 3x3 matrix:
A = [[x1 y1 z1],
[x2 y2 z2],
[x3 y3 z3]]
Since X,Y,Z change with time, A also changes with time.
A is a rotation matrix!
After all, if you let i = (1,0,0) be the unit vector along the x-axis, then A i = X, so A rotates i into X. Similarly, it rotates the y-axis into Y and the z-axis into Z.
A is called the direction cosine matrix (DCM).
So, using the DCM-to-Euler-axis formula, compute
theta = arccos((A_11 + A_22 + A_33 - 1)/2)
theta is the Euler angle of rotation.
The magnitude of the angular velocity, |w|, then equals
|w| = d(theta)/dt ~= (theta(t+dt) - theta(t)) / dt
The axis of rotation is given by e = (e1,e2,e3) where
e1 = (A_32 - A_23)/(2 sin(theta))
e2 = (A_13 - A_31)/(2 sin(theta))
e3 = (A_21 - A_12)/(2 sin(theta))
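A small Python sketch of these formulas. As an adaptation (an assumption beyond the answer as written), it applies them to the relative rotation A2 * A1^T between two successive frames, which is itself a rotation matrix, so the extracted angle is the rotation during one frame:

import numpy as np

def spin_vector(A1, A2, dt):
    # A1, A2: 3x3 matrices whose columns are the sphere's local X, Y, Z unit
    # vectors at times t and t+dt. Degenerate cases (theta near 0 or pi),
    # where sin(theta) vanishes, are not handled in this sketch.
    dA = A2 @ A1.T  # relative rotation over one frame
    theta = np.arccos(np.clip((np.trace(dA) - 1.0) / 2.0, -1.0, 1.0))
    e = np.array([dA[2, 1] - dA[1, 2],   # Euler axis, from the formulas above
                  dA[0, 2] - dA[2, 0],
                  dA[1, 0] - dA[0, 1]]) / (2.0 * np.sin(theta))
    return e * (theta / dt)  # axis scaled by angular speed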
I applaud ~unutbu's answer, but I think there's a simpler approach that will suffice for this problem.
Take the X unit vector at three successive frames, and compare them to get two deltas:
deltaX1 = X2 - X1
deltaX2 = X3 - X2
(These are vector equations. X1 is a vector, the X vector at time 1, not a number.)
Now take the cross-product of the deltas and you'll get a vector in the direction of the rotation vector.
Now for the magnitude. The angle between the two deltas is the angle swept out in one time interval, so use the dot product:
dx1 = deltaX1 / |deltaX1|
dx2 = deltaX2 / |deltaX2|
costheta = dot(dx1, dx2)
theta = acos(costheta)
w = theta / dt
For the sake of precision you should choose the unit vector (X, Y or Z) that changes the most.
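A sketch of this simpler approach in Python (X1, X2, X3 are the sphere's local X unit vector at three successive frames, dt apart; the degenerate case where the chosen vector lies on the rotation axis, making the deltas vanish, is not handled):

import numpy as np

def spin_from_deltas(X1, X2, X3, dt):
    d1, d2 = X2 - X1, X3 - X2
    axis = np.cross(d1, d2)  # points along the rotation vector
    axis /= np.linalg.norm(axis)
    cos_t = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))  # angle swept per frame
    return axis * (theta / dt)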
