Find rotation needed to transform one point on sphere to another - algorithm

I'm trying to make a basic sphere animation.
So far, I have some points lying on a sphere, which can be rotated around its origin using the rotation matrices found on Wikipedia.
I now want to rotate the sphere so that a specific point is at the front (at [0, 0, 1]). I think that I might have to find the angle between the two vectors in each plane and rotate by this amount, but I'm not sure if this is correct, or how it can be achieved.
So, how would I find the angles required to do this?

In spherical coordinates, your point is at (sin θ cos φ, sin θ sin φ, cos θ) where θ = arccos(z) and φ = atan2(y, x).
Beware of conventions: here, θ is the polar angle or inclination and φ is the azimuthal angle or azimuth.
Since you want to move your point to (0, 0, 1), you can first set φ to zero with a rotation around the z axis, and then set θ to zero with a rotation around the y axis.
The first rotation is Rz(-φ):
cos φ sin φ 0
-sin φ cos φ 0
0 0 1
The second rotation is Ry(-θ):
cos θ 0 -sin θ
0 1 0
sin θ 0 cos θ
The composition is Ry(-θ) * Rz(-φ):
cos θ cos φ cos θ sin φ -sin θ
-sin φ cos φ 0
sin θ cos φ sin θ sin φ cos θ
Note that the last row is (x, y, z), which confirms that this point will move to (0, 0, 1).
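As a sanity check, here is a small Python sketch (mine, not from the original answer) that builds this composed matrix and verifies that it sends a unit vector (x, y, z) to (0, 0, 1):

from math import acos, atan2, cos, sin

def rotation_to_front(x, y, z):
    # Ry(-theta) * Rz(-phi) for theta = arccos(z), phi = atan2(y, x).
    theta, phi = acos(z), atan2(y, x)
    ct, st, cp, sp = cos(theta), sin(theta), cos(phi), sin(phi)
    return [[ct * cp, ct * sp, -st],
            [-sp,     cp,      0.0],
            [st * cp, st * sp, ct]]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

p = (0.6, 0.0, 0.8)                      # a unit vector on the sphere
print(apply(rotation_to_front(*p), p))   # approximately [0, 0, 1]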
Another way to construct a rotation matrix that takes (x, y, z) to (0, 0, 1) is to construct the inverse (which takes (0, 0, 1) to (x, y, z)) and then transpose it. You need three mutually perpendicular unit basis vectors, one for each column of the matrix. However, since we will transpose the result at the end, we can just treat these vectors as the rows of the final matrix.
The first vector is V3 = (x, y, z), and this goes into the third row (since we want to move it to the z unit vector). The two other vectors can be computed using the cross-product with some other arbitrary vector. Many games and 3D engines use a "look-at" function that takes an "up" vector because the world usually has a sense of up and down.
Let's take UP = (0, 1, 0) as our "up" vector. You can compute V1 = norm(cross(UP, V3)) and V2 = cross(V3, V1). Depending on the order of the arguments, you can flip the sphere (you can also multiply one of the vectors by -1). We don't need to normalize V2, since V1 and V3 are already perpendicular unit vectors.
So the vectors are:
V3 = (x, y, z)
V1 = norm(cross(UP, V3)) = (z/sqrt(x²+z²), 0, -x/sqrt(x²+z²))
V2 = cross(V3, V1) = (-xy/sqrt(x²+z²), sqrt(x²+z²), -yz/sqrt(x²+z²))
And the final rotation matrix, with S = sqrt(x²+z²), is:
z/S 0 -x/S
-xy/S S -yz/S
x y z
Note that it's different from the one we obtained from Ry(-θ) * Rz(-φ). There are infinitely many rotation matrices that move one given point to another, because a rotation has three degrees of freedom while a position on the surface has only two, if you don't constrain the final orientation. You can get other results by changing the "up" vector.
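Here is the same construction as a short Python sketch (mine; it assumes UP = (0, 1, 0) and breaks down when the point is at (0, ±1, 0), where cross(UP, V3) vanishes):

from math import sqrt

def rotation_from_up(x, y, z):
    # Rows V1, V2, V3 of the matrix taking the unit vector (x, y, z) to (0, 0, 1).
    s = sqrt(x * x + z * z)
    v1 = (z / s, 0.0, -x / s)             # norm(cross(UP, V3))
    v2 = (-x * y / s, s, -y * z / s)      # cross(V3, V1)
    v3 = (x, y, z)
    return [v1, v2, v3]

p = (0.6, 0.0, 0.8)
m = rotation_from_up(*p)
print([sum(m[i][j] * p[j] for j in range(3)) for i in range(3)])  # ~[0, 0, 1]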

Related

Computer representation of quaternions that's exact for 90-degree rotations?

Unit quaternions have several advantages over 3x3 orthogonal matrices
for representing 3d rotations on a computer.
However, one thing that has been disappointing me about the unit quaternion
representation is that axis-aligned 90 degree rotations
aren't exactly representable. For example, a 90-degree rotation around the z axis, taking the +x axis to the +y axis, is represented as [w=sqrt(1/2), x=0, y=0, z=sqrt(1/2)].
Surprising/unpleasant consequences include:
applying a floating-point-quaternion-represented axis-aligned 90 degree rotation to a vector v
often doesn't rotate v by exactly 90 degrees
applying a floating-point-quaternion-represented axis-aligned 90 degree rotation to a vector v four times
often doesn't yield exactly v
squaring a floating-point-quaternion representing a 90 degree rotation around a coordinate axis
doesn't exactly yield the (exactly representable) 180 degree rotation
around that coordinate axis,
and raising it to the eighth power doesn't yield the identity quaternion.
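For instance, a minimal Python sketch (my illustration, with a hand-rolled Hamilton product) demonstrating the first two bullets:

import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(q, v):
    # Rotate vector v by unit quaternion q via q * (0, v) * conj(q).
    conj = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + v), conj)[1:]

s = math.sqrt(0.5)
q90 = (s, 0.0, 0.0, s)        # 90-degree rotation around the z axis
v = (1.0, 0.0, 0.0)
print(rotate(q90, v))   # ~(0, 1.0000000000000002, 0): not exactly 90 degrees
for _ in range(4):
    v = rotate(q90, v)
print(v)                # not exactly (1.0, 0.0, 0.0)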
Because of this unfortunate lossiness of the quaternion representation on "nice" rotations,
I still sometimes choose 3x3 matrices for applications in which I'd like axis-aligned
90 degree rotations, and compositions of them,
to be exact and floating-point-roundoff-error-free.
But the matrix representation isn't ideal either,
since it loses the sometimes-needed double covering property
(i.e. quaternions distinguish between the identity and a 360-degree rotation,
but 3x3 rotation matrices don't) as well as other familiar desirable numerical properties
of the quaternion representation, such as lack of need for re-orthogonalization.
My question: is there a computer representation of unit quaternions that does not suffer this
imprecision, and also doesn't lose the double covering property?
One solution I can think of is to represent each of the 4 elements of the quaternion
as a pair of machine-representable floating-point numbers [a,b], meaning a + b √2.
So the representation of a quaternion would consist of eight floating-point numbers.
I think that works, but it seems rather heavyweight;
e.g. when computing the product of a long sequence of quaternions,
each multiplication in the simple quaternion calculation would turn into
4 floating-point multiplications and 2 floating-point additions,
and each addition would turn into 2 floating-point additions. From the point of view of trying to write a general-purpose library implementation, all that extra computation and storage seems pointless as soon as there's a factor that's not one of these "nice" rotations.
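For concreteness, a hypothetical sketch (mine) of the arithmetic on a single component stored as the pair [a, b], meaning a + b √2:

def mul(p, q):
    # (a + b*sqrt2) * (c + d*sqrt2) = (a*c + 2*b*d) + (a*d + b*c)*sqrt2:
    # 4 multiplications (the doubling is exact) and 2 additions.
    a, b = p
    c, d = q
    return (a*c + 2*b*d, a*d + b*c)

def add(p, q):
    # Componentwise: 2 additions.
    return (p[0] + q[0], p[1] + q[1])

half_sqrt2 = (0.0, 0.5)            # sqrt(1/2) = (1/2) * sqrt(2)
print(mul(half_sqrt2, half_sqrt2)) # (0.5, 0.0): squaring is exact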
Another possible solution would be to represent each quaternion q = w + x i + y j + z k
as a 4-tuple [sign(w)·w², sign(x)·x², sign(y)·y², sign(z)·z²].
This representation is concise and has the desired non-lossiness for the subgroup of
interest, but I don't know how to multiply together two quaternions in this representation.
Yet another possible approach would be to store the quaternion q²
instead of the usual q. This seems promising at first,
but, again, I don't know how to non-lossily multiply
two of these representations together on the computer, and furthermore
the double-cover property is evidently lost.
You probably want to check the paper "Algorithms for manipulating quaternions in floating-point arithmetic", published in 2020 and available online:
https://hal.archives-ouvertes.fr/hal-02470766/file/quaternions.pdf
which shows how to perform exact computations and avoid unbounded numerical errors.
EDIT:
You can get rid of the square roots by using an unnormalized (i.e., non-unit) quaternion. Let me explain the idea.
Given two unit 3D vectors X and Y, represented as pure quaternions, the quaternion Q rotating X to Y is
Q = (X + Y) * ~X / |(X + Y) * ~X|
where ~X is the conjugate (for a pure quaternion, ~X = -X). The denominator, which takes the norm, is the problem, since it involves a square root.
You can see this by expanding the expression (using X * ~X = 1) as:
Q = (1 - Y * X) / sqrt(2 + 2*(X • Y))
Substituting X = i and Y = j for the 90 degree rotation, you get:
Q = (1 + k) / sqrt(2)
So |1 + k| = sqrt(2). But we can actually use the unnormalized quaternion Q = 1 + k to perform rotations; all we need to do is normalize the rotated result by the
SQUARED norm of the quaternion.
For example, the squared norm of Q = 1 + k is |1 + k|^2 = 2, which is exact because no square root was ever taken. Let's apply the unnormalized quaternion to the vector X = i:
= (1 + k) i (1 - k)
= (i + k * i - i * k - k * i * k)
= (i + 2 j - i)
= 2 j
To get the correct result, we divide by the squared norm: 2 j / 2 = j.
I haven't tested but I believe you would get exact results by applying unnormalized quaternions to your vectors and then dividing the result by the squared norm.
The algorithm would be:
1. Create the unnormalized quaternion Q = (X + Y) * ~X
2. Apply Q to your vectors as: v' = Q * v * ~Q
3. Normalize by the squared norm: v'' = v' / |Q|^2
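A minimal Python sketch of that algorithm (my illustration, not the answerer's code; with integer inputs, the exact results come out as exact rationals):

from fractions import Fraction

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate_unnormalized(Q, v):
    # v' = Q * (0, v) * ~Q, then divide by |Q|^2 -- no square root anywhere.
    Qc = (Q[0], -Q[1], -Q[2], -Q[3])
    w, x, y, z = qmul(qmul(Q, (0,) + v), Qc)
    n2 = sum(c * c for c in Q)                  # squared norm, exact for ints
    return tuple(Fraction(c, n2) for c in (x, y, z))

# Q = (X + Y) * ~X with X = i, Y = j gives the unnormalized 1 + k.
Q = (1, 0, 0, 1)
print(rotate_unnormalized(Q, (1, 0, 0)))        # exactly (0, 1, 0)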

The largest axis-aligned rectangle with fixed aspect ratio in an arbitrary convex polygon?

How to calculate the width / height of the largest axis-aligned rectangle with a fixed aspect ratio in an arbitrary convex polygon?
Examples of such rectangles (red) in different convex polygons (black) are presented in this image: [image omitted]
I found different papers on the subject, but they don't fit my set of limitations. That's weird, because the limitations should significantly simplify the algorithm, but unfortunately I didn't find any treatment of this case.
A fixed ratio does simplify the problem, since it becomes a linear program.
You want to find x1, y1, x2, y2 to maximize x2 − x1, subject to the constraints that (x2 − x1) h = w (y2 − y1), where the aspect ratio is w:h, and that each of the points (x1, y1), (x1, y2), (x2, y1), (x2, y2) satisfies every linear inequality defining the convex polygon.
For example, for a rectangle with corners (x1, y1), (x1, y2), (x2, y2), (x2, y1), aspect ratio r = w / h, and a convex polygon given by inequalities a·x + b·y ≤ c, the linear program is: maximize x2 − x1, subject to x2 − x1 = r (y2 − y1) and a·xi + b·yj ≤ c for every polygon edge (a, b, c) and every corner (xi, yj) with i, j ∈ {1, 2}.
In theory, there are specialized algorithms for low-dimensional linear programming that run in linear time. In practice, you can throw a solver at it. If you want to code your own, then you could do the simplex method, but gradient descent is even simpler.
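If you go the solver route, here is a minimal sketch (mine, not the answer's code) handing the LP to scipy.optimize.linprog, assuming SciPy is available; the gradient-descent alternative is developed below.

from scipy.optimize import linprog

w, h = 3, 2                                       # aspect ratio w:h
polygon = [(1, 1, 20), (1, -2, 30), (-2, 1, 40)]  # half-planes a*x + b*y <= c

# Variable vector: [x1, y1, x2, y2]. Maximize x2 - x1, i.e. minimize x1 - x2.
obj = [1, 0, -1, 0]
# Aspect-ratio equality: h*(x2 - x1) - w*(y2 - y1) = 0.
A_eq, b_eq = [[-h, w, h, -w]], [0]
# Every corner (xi, yj), i, j in {1, 2}, must satisfy every half-plane.
A_ub, b_ub = [], []
for a, b, c in polygon:
    for ix in (0, 2):            # column of x1 or x2
        for iy in (1, 3):        # column of y1 or y2
            row = [0.0, 0.0, 0.0, 0.0]
            row[ix], row[iy] = a, b
            A_ub.append(row)
            b_ub.append(c)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 4)          # variables are free
x1, y1, x2, y2 = res.x
print("width:", x2 - x1, "height:", y2 - y1)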
First, let's get rid of the equality constraint and one variable by maximizing z over variables x, y, z, subject to the point-in-polygon constraints for x1 = x − w z, y1 = y − h z, x2 = x + w z, y2 = y + h z. Second, let's trade those constraints for an objective term. Normally a point-in-polygon constraint looks like (signed distance to half-plane) ≤ 0. Instead, we apply a penalty term to the objective. Let α > 0 be a parameter. The new term is −exp(α (signed distance to half-plane)). If the signed distance is negative (the point is inside the half-plane), then the penalty goes to zero as α goes to infinity. If the signed distance is positive, then the penalty goes to minus infinity. If we make α large enough, then the optimal solution of the transformed problem will be approximately feasible.
This is what it looks like in Python. I'm not a continuous optimization expert, so caveat emptor.
from math import exp

# Aspect ratio.
w = 3
h = 2
# The polygon, represented as an intersection of half-planes
# a*x + b*y - c <= 0, given below as (a, b, c). For robustness, these should
# be scaled so that a**2 + b**2 == 1, but it's not strictly necessary.
polygon = [(1, 1, 20), (1, -2, 30), (-2, 1, 40)]
# Initial solution -- take the centroid of three hull points. Cheat by just
# using (0, 0) here.
x, y, z = (0, 0, 0)
# Play with these.
alpha = 10
rate = 0.02
for epoch in range(5):
    for iteration in range(10 ** epoch, 10 ** (epoch + 1)):
        # Compute the gradient of the objective function. Absent penalties, we
        # only care about how big the rectangle is, not where it sits.
        dx, dy, dz = (0, 0, 1)
        # Loop through the polygon boundaries, applying penalties at each of
        # the four rectangle corners (x + u*z, y + v*z).
        for a, b, c in polygon:
            for u in [-w, w]:
                for v in [-h, h]:
                    term = -exp(alpha * (a * (x + u * z) + b * (y + v * z) - c))
                    dx += alpha * a * term
                    dy += alpha * b * term
                    dz += alpha * (a * u + b * v) * term
        # Take a gradient ascent step.
        x += rate * dx
        y += rate * dy
        z += rate * dz
print(x, y, z)
Hint:
WLOG the rectangle can be a square (stretch space if necessary).
Then the solution is given by the highest point of the distance map to the polygon, in the Chebyshev metric (L∞). It can be determined from the medial axis transform, itself obtained from the Voronoi diagram.

Find evenly distributed random points on a spherical cap

I have a latitude, a longitude, and a radius of 400 m to 1000 m, together forming a spherical cap. I need to find a random point on that cap. The points must be evenly distributed over the area.
There is a related question about finding random points in a circle. My first thought was to project the cap onto a Cartesian plane and use the circle algorithm there; the radius is small enough that the error should be negligible.
I'm not sure whether projecting and then converting the point back to a lat/lng is the simplest solution, or what other possible solutions there are to this problem.
You can generate a random azimuth in the range 0..360 and a random distance with a sqrt distribution to get a uniform distribution over the area:
d = maxR * Sqrt(random(0..1))
theta = random(0..1) * 2 * Pi
Then get the geopoint coordinates using the bearing and distance, as described here (Destination point given distance and bearing from start point):
φ2 = asin( sin φ1 ⋅ cos δ + cos φ1 ⋅ sin δ ⋅ cos θ )
λ2 = λ1 + atan2( sin θ ⋅ sin δ ⋅ cos φ1, cos δ − sin φ1 ⋅ sin φ2 )
where φ is latitude, λ is longitude, θ is the bearing
(clockwise from north), δ is the angular distance d/R;
d being the distance travelled, R the earth’s radius
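Putting the two steps together, here is a runnable Python sketch (mine, not the answerer's code; the mean Earth radius constant is an assumption):

import math, random

EARTH_R = 6371000.0   # mean Earth radius in metres

def random_point_on_cap(lat, lon, max_r):
    # Uniform random point within max_r metres of (lat, lon), in degrees.
    d = max_r * math.sqrt(random.random())   # sqrt for uniform area density
    theta = random.random() * 2 * math.pi    # random bearing
    delta = d / EARTH_R                      # angular distance
    phi1, lam1 = math.radians(lat), math.radians(lon)
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

print(random_point_on_cap(52.0, 13.0, 1000.0))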
As mentioned in the wiki page, θ + φ = 90° if φ is a latitude. On the other hand, since r is fixed for all points on the cap, we just need to set the value of θ. Hence, you can pick a random value from 0 up to the θ of the cap and define the point by the constraints explained above.
For a disc very small compared to the radius of the sphere, the lat-long projection will simply be approximately an ellipse, except when you are very close to the poles.
First compute the stretch at the given latitude:
double k = cos(latitude * PI / 180);
then compute the disc radius in latitude degrees
// A latitude arc-second is 30.87 meters
double R = radius / 30.87 / 3600 * PI / 180;
then compute a uniform random point in a circle
double a = random() * 2 * PI;
double r = R * sqrt(random());
your random point in the disc will be
double random_lat = (latitude*PI/180 + r*cos(a))/PI*180;
double random_longitude = (longitude*PI/180 + (r/k)*sin(a))/PI*180;

Hough transformation makes no sense

I have been very confused about this. The basic idea of Hough line detection is that any line can be represented by a unique r and θ:
r = x.cos(θ) + y.sin(θ)
Further, every pixel on a given line should transform to the exact same r and θ for that line.
But this assumption fails for the simplest of lines: in my plot (image omitted), two points lie on the same line, yet their r values differ.
Explanation?
It is tempting to say that for the points (1, 1) and (3, 3), θ is π / 4 and r is √2 and 3√2 respectively.
These would be the polar coordinates of the points. But you can't expect two distinct points to have the same r and θ... How would you draw them?
r and θ are not the polar coordinates of the points here.
r is the distance from the origin to the line and θ is an angle such that the vector (cos θ, sin θ) is orthogonal to the line.
(See http://en.wikipedia.org/wiki/Hough_transform for a figure my reputation does not allow me to post here).
With r and θ fixed in this way, all the points M = (x, y) on the line satisfy the equation: r = x.cos(θ) + y.sin(θ).
The right side can be seen as the dot product between the vector OM (where O is the origin) and the vector (cos θ, sin θ).
In your example, r would be 0 and θ would be 3π / 4.
For lines that do not go through the origin, (r, θ) are the polar coordinates of one point only on the line, which is the point closest to the origin.
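A quick numerical check (my sketch) that both points fall into the same (r, θ) cell:

from math import cos, sin, pi

theta = 3 * pi / 4
for x, y in [(1, 1), (3, 3)]:
    print(x * cos(theta) + y * sin(theta))   # ~0 for both points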
Hope this helps.

Avoiding sin() calls in audio synth

A naive sine wave generator takes a set of n values and calls the sin function on each of them:
for i = 0; i < 2*pi; i = i + step {
    output = append(output, sin(i))
}
However, this makes a lot of calls to a potentially expensive sin function, and fails to take advantage of the facts that the samples are sequential, that previous samples have already been computed, and that the output will be quantized to integers (PCM). So, what alternatives are there?
I'm imagining something along the lines of Bresenham's circle algorithm, or pre-computing a high-resolution table and then downsampling by taking every n-th entry, but if there's an 'industrial strength' solution to this problem, I'd love to hear it.
You can compute, once, the step angle θ (2π divided by the number of samples per cycle) and the difference vector z = (cos θ − 1, sin θ), which takes (1, 0) to (cos θ, sin θ) when added to it. Start with the vector v = (1, 0). At each step, add z to v; the y-coordinate of the new v is the next sine value. Then rotate z by the angle θ (by multiplying it by the 2×2 rotation matrix through angle θ) so that it becomes the difference vector for the following step, and so forth. This requires computing cos θ and sin θ just once; after that, each sample costs a 2×2 matrix-vector multiplication and a vector addition, which is faster than evaluating sin() from scratch every time.
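Here is that oscillator as a minimal Python sketch (my illustration, not the answerer's code):

import math

def sine_table(num_samples):
    # One cycle of a sine wave with a single cos/sin call up front.
    theta = 2 * math.pi / num_samples
    c, s = math.cos(theta), math.sin(theta)
    zx, zy = c - 1.0, s          # adding z to (1, 0) yields (cos theta, sin theta)
    vx, vy = 1.0, 0.0            # current point on the unit circle
    output = []
    for _ in range(num_samples):
        output.append(vy)                            # vy == sin(n * theta)
        vx, vy = vx + zx, vy + zy                    # step to the next point
        zx, zy = c * zx - s * zy, s * zx + c * zy    # rotate z by theta
    return output

print(sine_table(8))   # ~[0, 0.707, 1, 0.707, 0, -0.707, -1, -0.707]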
