What's the algorithm for computing a least squares plane in (x, y, z) space, given a set of 3D data points? In other words, if I had a bunch of points like (1, 2, 3), (4, 5, 6), (7, 8, 9), etc., how would one go about calculating the best fit plane f(x, y) = ax + by + c? What's the algorithm for getting a, b, and c out of a set of 3D points?
If you have n data points (x[i], y[i], z[i]), compute the 3x3 symmetric matrix A whose entries are:
sum_i x[i]*x[i], sum_i x[i]*y[i], sum_i x[i]
sum_i x[i]*y[i], sum_i y[i]*y[i], sum_i y[i]
sum_i x[i], sum_i y[i], n
Also compute the 3-element vector b:
{sum_i x[i]*z[i], sum_i y[i]*z[i], sum_i z[i]}
Then solve Ax = b for the given A and b. The three components of the solution vector are the coefficients {a, b, c} of the least-squares fit plane.
Note that this is the "ordinary least squares" fit, which is appropriate only when z is expected to be a linear function of x and y. If you are looking more generally for a "best fit plane" in 3-space, you may want to learn about "geometric" least squares.
Note also that this will fail if your points are in a line, as your example points are.
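For concreteness, here is a minimal NumPy sketch of this recipe; the sample points are invented (and deliberately not collinear):

import numpy as np

# invented, non-collinear sample points (x, y, z)
pts = np.array([(1.0, 2.0, 3.1), (4.0, 5.0, 6.3), (7.0, 9.0, 8.8), (2.0, 1.0, 3.9)])
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
n = len(pts)

# the 3x3 symmetric matrix A and vector b described above
A = np.array([[np.sum(x*x), np.sum(x*y), np.sum(x)],
              [np.sum(x*y), np.sum(y*y), np.sum(y)],
              [np.sum(x),   np.sum(y),   n       ]])
b = np.array([np.sum(x*z), np.sum(y*z), np.sum(z)])

a_, b_, c_ = np.linalg.solve(A, b)   # coefficients of z = a*x + b*y + c
print(a_, b_, c_)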
The equation for a plane is: ax + by + c = z. So set up matrices like this with all your data:
    [ x_0  y_0  1 ]
A = [ x_1  y_1  1 ]
    [ ...         ]
    [ x_n  y_n  1 ]
And
    [ a ]
x = [ b ]
    [ c ]
And
    [ z_0 ]
B = [ z_1 ]
    [ ... ]
    [ z_n ]
In other words: Ax = B. Now solve for x, which contains your coefficients. But since (I assume) you have more than 3 points, the system is over-determined, so you need to use the left pseudo-inverse. So the answer is:
[ a ]
[ b ] = (A^T A)^-1 A^T B
[ c ]
And here is some simple Python code with an example:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed on older matplotlib versions
import numpy as np

N_POINTS = 10
TARGET_X_SLOPE = 2
TARGET_Y_SLOPE = 3
TARGET_OFFSET = 5
EXTENTS = 5
NOISE = 5

# create random data
xs = np.random.uniform(-EXTENTS, EXTENTS, N_POINTS)
ys = np.random.uniform(-EXTENTS, EXTENTS, N_POINTS)
zs = xs * TARGET_X_SLOPE + ys * TARGET_Y_SLOPE \
     + TARGET_OFFSET + np.random.normal(scale=NOISE, size=N_POINTS)

# plot raw data
plt.figure()
ax = plt.subplot(111, projection='3d')
ax.scatter(xs, ys, zs, color='b')

# do fit: build A and B, then apply the left pseudo-inverse
A = np.column_stack([xs, ys, np.ones(N_POINTS)])
b = zs
fit = np.linalg.solve(A.T @ A, A.T @ b)  # (A^T A)^-1 A^T B, without forming the inverse
# np.linalg.lstsq(A, b, rcond=None) is the numerically preferred equivalent

errors = b - A @ fit
residual = np.linalg.norm(errors)
print("solution:")
print("%f x + %f y + %f = z" % (fit[0], fit[1], fit[2]))
print("errors:")
print(errors)
print("residual:")
print(residual)

# plot plane
xlim = ax.get_xlim()
ylim = ax.get_ylim()
X, Y = np.meshgrid(np.arange(xlim[0], xlim[1]),
                   np.arange(ylim[0], ylim[1]))
Z = fit[0] * X + fit[1] * Y + fit[2]
ax.plot_wireframe(X, Y, Z, color='k')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
unless someone tells me how to type equations here, let me just write down the final computations you have to do:
first, given points r_i \in \R^3, i = 1..N, calculate the center of mass of all points:
r_G = \frac{\sum_{i=1}^N r_i}{N}
then, calculate the normal vector n, which together with the base point r_G defines the plane, by computing the 3x3 matrix A as
A = \sum_{i=1}^N (r_i - r_G)(r_i - r_G)^T
with this matrix, the normal vector n is now given by the eigenvector of A corresponding to the minimal eigenvalue of A.
To find out about the eigenvector/eigenvalue pairs, use any linear algebra library of your choice.
This solution is based on the Rayleigh-Ritz theorem for the Hermitian matrix A.
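A minimal NumPy sketch of this eigenvector recipe (sample points invented):

import numpy as np

# invented sample points, one per row
pts = np.array([[1.0, 2.0, 3.1], [4.0, 5.0, 6.3], [7.0, 9.0, 8.8], [2.0, 1.0, 3.9]])

r_G = pts.mean(axis=0)            # center of mass
d = pts - r_G
A = d.T @ d                       # sum of (r_i - r_G)(r_i - r_G)^T
w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
n = V[:, 0]                       # eigenvector of the minimal eigenvalue = plane normal
# the plane is the set of points p with dot(n, p - r_G) == 0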
See 'Least Squares Fitting of Data' by David Eberly for how I came up with this one to minimize the geometric fit (orthogonal distance from points to the plane).
bool Geom_utils::Fit_plane_direct(const arma::mat& pts_in, Plane& plane_out)
{
    bool success(false);
    int K(pts_in.n_cols);
    if(pts_in.n_rows == 3 && K > 2) // check for bad sizing and indeterminate case
    {
        plane_out._p_3 = (1.0/static_cast<double>(K))*arma::sum(pts_in,1);
        arma::mat A(pts_in);
        A.each_col() -= plane_out._p_3; // [x1-p, x2-p, ..., xk-p]
        arma::mat33 M(A*A.t());
        arma::vec3 D;
        arma::mat33 V;
        if(arma::eig_sym(D,V,M))
        {
            // diagonalization succeeded
            plane_out._n_3 = V.col(0); // eigenvalues are in ascending order by default
            if(plane_out._n_3(2) < 0)
            {
                plane_out._n_3 = -plane_out._n_3; // make it upward pointing
            }
            success = true;
        }
    }
    return success;
}
Timed at 37 microseconds fitting a plane to 1000 points (Windows 7, i7, 32-bit program)
This reduces to the Total Least Squares problem, which can be solved using an SVD decomposition.
C++ code using OpenCV:
float fitPlaneToSetOfPoints(const std::vector<cv::Point3f> &pts, cv::Point3f &p0, cv::Vec3f &nml) {
    const int SCALAR_TYPE = CV_32F;
    typedef float ScalarType;

    // Calculate centroid
    p0 = cv::Point3f(0,0,0);
    for (size_t i = 0; i < pts.size(); ++i)
        p0 = p0 + pts[i];
    p0 *= 1.0/pts.size();

    // Compose data matrix subtracting the centroid from each point
    cv::Mat Q((int)pts.size(), 3, SCALAR_TYPE);
    for (int i = 0; i < (int)pts.size(); ++i) {
        Q.at<ScalarType>(i,0) = pts[i].x - p0.x;
        Q.at<ScalarType>(i,1) = pts[i].y - p0.y;
        Q.at<ScalarType>(i,2) = pts[i].z - p0.z;
    }

    // Compute SVD decomposition; the Total Least Squares solution is the
    // right singular vector corresponding to the least singular value
    cv::SVD svd(Q, cv::SVD::MODIFY_A|cv::SVD::FULL_UV);
    nml = svd.vt.row(2);

    // Calculate the actual RMS error
    float err = 0;
    for (size_t i = 0; i < pts.size(); ++i)
        err += powf(nml.dot(pts[i] - p0), 2);
    err = sqrtf(err / pts.size());
    return err;
}
As with any least-squares approach, you proceed like this:
Before you start coding
Write down an equation for a plane in some parameterization, say 0 = ax + by + z + d in three parameters (a, b, d).
Find an expression D(\vec{v}; a, b, d) for the distance from an arbitrary point \vec{v} to the plane.
Write down the sum S = \sum_{i=0}^{n} D^2(\vec{v}_i), and simplify until it is expressed in terms of simple sums of the components of the points like \sum v_x, \sum v_y^2, \sum v_x v_z ...
Write down the per-parameter minimization expressions dS/da = 0, dS/db = 0, dS/dd = 0, which gives you a set of three equations in the three parameters and the sums from the previous step.
Solve this set of equations for the parameters.
(or for simple cases, just look up the form). Using a symbolic algebra package (like Mathematica) could make your life much easier.
The coding
Write code to form the needed sums and find the parameters from the last set above.
Alternatives
Note that if you actually had only three points, you'd be better off just finding the plane that passes through them.
Also, if the analytic solution is infeasible (not the case for a plane, but possible in general) you can do steps 1 and 2, and use a Monte Carlo minimizer on the sum in step 3.
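To make steps 3-5 concrete, here is a small SymPy sketch; it uses the unnormalized residual a*x + b*y + z + d as the distance proxy (so it reproduces the ordinary rather than the geometric fit), and the sample points are invented:

import sympy as sp

a, b, d = sp.symbols('a b d')
pts = [(1, 2, 3.1), (4, 5, 6.3), (7, 9, 8.8), (2, 1, 3.9)]  # invented sample points

# step 3: sum of squared residuals of 0 = a*x + b*y + z + d
S = sum((a*x + b*y + z + d)**2 for (x, y, z) in pts)

# step 4: per-parameter minimization expressions
eqs = [sp.diff(S, p) for p in (a, b, d)]

# step 5: solve the resulting linear system for the parameters
print(sp.solve(eqs, [a, b, d]))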
CGAL::linear_least_squares_fitting_3
Function linear_least_squares_fitting_3 computes the best fitting 3D
line or plane (in the least squares sense) of a set of 3D objects such
as points, segments, triangles, spheres, balls, cuboids or tetrahedra.
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/Principal_component_analysis_ref/Function_linear_least_squares_fitting_3.html
It sounds like all you want to do is linear regression with 2 regressors. The wikipedia page on the subject should tell you all you need to know and then some.
All you'll have to do is to solve the system of equations.
If those are your points:
(1, 2, 3), (4, 5, 6), (7, 8, 9)
That gives you the equations:
3=a*1 + b*2 + c
6=a*4 + b*5 + c
9=a*7 + b*8 + c
So your question actually should be: How do I solve a system of equations?
Therefore I recommend reading this SO question.
If I've misunderstood your question let us know.
EDIT:
Ignore my answer as you probably meant something else.
We first present a linear least-squares plane fitting method that minimizes the residuals between the estimated normal vector and provided points.
Recall that the equation for a plane passing through the origin is Ax + By + Cz = 0, where (x, y, z) can be any point on the plane and (A, B, C) is the normal vector perpendicular to this plane.
The equation for a general plane (which may or may not pass through the origin) is Ax + By + Cz + D = 0, where the additional coefficient D represents how far the plane is from the origin along the direction of its normal vector. [Note that in this equation (A, B, C) forms a unit normal vector.]
Now, we can apply a trick here and fit the plane using only the provided point coordinates. Divide both sides by D and move the constant to the right-hand side. This leads to (A/D)x + (B/D)y + (C/D)z = -1. [Note that in this equation (A/D, B/D, C/D) forms a normal vector with length 1/D.]
We can set up a system of linear equations accordingly, and then solve it by an Eigen solver in C++ as follows.
// Example for 5 points
Eigen::Matrix<double, 5, 3> matA; // row: 5 points; column: xyz coordinates
Eigen::Matrix<double, 5, 1> matB = -1 * Eigen::Matrix<double, 5, 1>::Ones();
// (fill matA with the point coordinates, one point per row, as in the test case below)

// Find the plane normal
Eigen::Vector3d normal = matA.colPivHouseholderQr().solve(matB);

// Check if the fitting is healthy
double D = 1 / normal.norm();
normal.normalize(); // normal is a unit vector from now on
bool planeValid = true;
for (int i = 0; i < 5; ++i) { // ideally Ax + By + Cz + D = 0 for every point
    if (fabs(normal(0)*matA(i, 0) + normal(1)*matA(i, 1) + normal(2)*matA(i, 2) + D) > 0.2) {
        planeValid = false; // 0.2 is an experimental threshold; can be tuned
        break;
    }
}
We then discuss its equivalence to the typical SVD-based method and their comparison.
The aforementioned linear least-squares (LLS) method fits the general plane equation Ax + By + Cz + D = 0, whereas the SVD-based method replaces D with D = - (Ax0 + By0 + Cz0) and fits the plane equation A(x-x0) + B(y-y0) + C(z-z0) = 0, where (x0, y0, z0) is the mean of all points that serves as the origin of the new local coordinate frame.
Comparison between two methods:
The LLS fitting method is much faster than the SVD-based method, and is suitable for use when points are known to be roughly in a plane shape.
The SVD-based method is more numerically stable when the plane is far away from the origin, because the LLS method would require more digits after the decimal point to be stored and processed in such cases.
The LLS method can detect outliers by checking the dot product residual between each point and the estimated normal vector, whereas the SVD-based method can detect outliers by checking if the smallest eigenvalue of the covariance matrix is significantly smaller than the two larger eigenvalues (i.e. checking the shape of the covariance matrix).
We finally provide a test case in C++ and MATLAB.
// Test case in C++ (using LLS fitting method)
matA(0,0) = 5.4637; matA(0,1) = 10.3354; matA(0,2) = 2.7203;
matA(1,0) = 5.8038; matA(1,1) = 10.2393; matA(1,2) = 2.7354;
matA(2,0) = 5.8565; matA(2,1) = 10.2520; matA(2,2) = 2.3138;
matA(3,0) = 6.0405; matA(3,1) = 10.1836; matA(3,2) = 2.3218;
matA(4,0) = 5.5537; matA(4,1) = 10.3349; matA(4,2) = 1.8796;
// With this sample data, LLS fitting method can produce the following result
// fitted normal vector = (-0.0231143, -0.0838307, -0.00266429)
// unit normal vector = (-0.265682, -0.963574, -0.0306241)
// D = 11.4943
% Test case in MATLAB (using SVD-based method)
points = [5.4637 10.3354 2.7203;
5.8038 10.2393 2.7354;
5.8565 10.2520 2.3138;
6.0405 10.1836 2.3218;
5.5537 10.3349 1.8796]
covariance = cov(points)
[V, D] = eig(covariance)
normal = V(:, 1) % pick the eigenvector that corresponds to the smallest eigenvalue
% normal = (0.2655, 0.9636, 0.0306)
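As a cross-check, the same five points can be run through both methods in a few lines of NumPy (my own transcription, not part of the original test case); the results should match the values above up to sign:

import numpy as np

pts = np.array([[5.4637, 10.3354, 2.7203],
                [5.8038, 10.2393, 2.7354],
                [5.8565, 10.2520, 2.3138],
                [6.0405, 10.1836, 2.3218],
                [5.5537, 10.3349, 1.8796]])

# LLS method: solve pts @ n = -1 in the least-squares sense
n_lls, *_ = np.linalg.lstsq(pts, -np.ones(len(pts)), rcond=None)
D = 1 / np.linalg.norm(n_lls)
print("unit normal (LLS):", n_lls * D, " D:", D)

# SVD method: last right singular vector of the centred points
centred = pts - pts.mean(axis=0)
n_svd = np.linalg.svd(centred)[2][-1]
print("unit normal (SVD):", n_svd)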
Circle center : Cx,Cy
Circle radius : a
Point from which we need to draw a tangent line : Px,Py
I need the formula to find the two tangents (t1x, t1y) and (t2x,t2y) given all the above.
Edit: Is there any simpler solution using vector algebra or something, rather than finding the equations of two lines and then solving those two straight-line equations to find the two tangents separately? Also, this question is not off-topic, because I need to write code to find this optimally.
Here is one way using trigonometry. If you understand trig, this method is easy to understand, though it may not give the exact correct answer when one is possible, due to the lack of exactness in trig functions.
The points C = (Cx, Cy) and P = (Px, Py) are given, as well as the radius a. The radius is shown twice in my diagram, as a1 and a2. You can easily calculate the distance b between points P and C, and you can see that segment b forms the hypotenuse of two right triangles with side a. The angle theta (also shown twice in my diagram) is between the hypotenuse and adjacent side a so it can be calculated with an arccosine. The direction angle of the vector from point C to point P is also easily found by an arctangent. The direction angles of the tangency points are the sum and difference of the original direction angle and the calculated triangle angle. Finally, we can use those direction angles and the distance a to find the coordinates of those tangency points.
Here is code in Python 3.
# Example values
(Px, Py) = (5, 2)
(Cx, Cy) = (1, 1)
a = 2
from math import sqrt, acos, atan2, sin, cos
b = sqrt((Px - Cx)**2 + (Py - Cy)**2) # hypot() also works here
th = acos(a / b) # angle theta
d = atan2(Py - Cy, Px - Cx) # direction angle of point P from C
d1 = d + th # direction angle of point T1 from C
d2 = d - th # direction angle of point T2 from C
T1x = Cx + a * cos(d1)
T1y = Cy + a * sin(d1)
T2x = Cx + a * cos(d2)
T2y = Cy + a * sin(d2)
There are obvious ways to combine those calculations and make them a little more optimized, but I'll leave that to you. It is also possible to use the angle addition and subtraction formulas of trigonometry with a few other identities to completely remove the trig functions from the calculations. However, the result is more complicated and difficult to understand. Without testing I do not know which approach is more "optimized" but that depends on your purposes anyway. Let me know if you need this other approach, but the other answers here give you other approaches anyway.
Note that if a > b then acos(a / b) will throw an exception, but this means that point P is inside the circle and there is no tangency point. If a == b then point P is on the circle and there is only one tangency point, namely point P itself. My code is for the case a < b. I'll leave it to you to code the other cases and to decide the needed precision to decide if a and b are equal.
Here's another way using complex numbers.
If a is the direction (a complex number of length 1) of the tangent point on the circle from the centre c, and d is the (real) length along the tangent to get to p, then (because the direction of the tangent is I*a)
p = c + r*a + d*I*a
rearranging
(r+I*d)*a = p-c
But a has length 1 so taking the length we get
|r+I*d| = |p-c|
We know everything but d, so we can solve for d:
d = +- sqrt( |p-c|*|p-c| - r*r)
and then find the a's and the points on the circle, one of each for each value of d above:
a = (p-c)/(r+I*d)
q = c + r*a
Hmm, not really an algorithm question (people tend to confuse algorithms and equations). If you want to write code then do so (you did not specify a language, nor what prevents you from doing this, which is the reason for the close votes). Without this info your OP is just asking for a math equation, which is indeed off-topic here, and by answering this I risk (rightful) down-votes too (but this is/was asked a lot here with much less info, and 4 reopen votes against 1 close put my decision weight on reopening and answering anyway).
You can exploit the fact that you are in 2D as in 2D perpendicular vectors to vector a(x,y) are computed like this:
c = (-y, x)
d = ( y,-x)
c = -d
so you swap x,y and negate one (which one you negate determines whether the perpendicular vector is CW or CCW). It is really a rotation formula, but as we rotate by 90deg the cos and sin are just 0 and +/-1.
Now the normal n at any point on the circle lies along the line going through that point and the circle's center. So putting all this together, your tangents are:
// normal
nx = Px-Cx
ny = Py-Cy
// tangent 1
tx = -ny
ty = +nx
// tangent 2
tx = +ny
ty = -nx
If you want unit vectors, then just divide by the radius a (not sure why you do not call it r like the rest of the math world), so:
// normal
nx = (Px-Cx)/a
ny = (Py-Cy)/a
// tangent 1
tx = -ny
ty = +nx
// tangent 2
tx = +ny
ty = -nx
Let's go through the derivation process (the derivation itself was given as images, which are omitted here):
As you can see, if the quantity under the square root is negative, it is because the point is interior to the circumference. When the point is outside the circumference there are two solutions, one for each sign of the square root.
The rest is easy. Take atan(solution) and be careful with the signs here; you may want to add some checks.
Apply (2) and then undo the (1) transformations, and that's all.
c# implementation of dmuir's answer:
static void FindTangents(Vector2 point, Vector2 circle, float r, out Line l1, out Line l2)
{
    var p = new Complex(point.x, point.y);
    var c = new Complex(circle.x, circle.y);
    var cp = p - c;
    var d = Math.Sqrt(cp.Real * cp.Real + cp.Imaginary * cp.Imaginary - r * r);
    var q = GetQ(r, cp, d, c);
    var q2 = GetQ(r, cp, -d, c);
    l1 = new Line(point, new Vector2((float) q.Real, (float) q.Imaginary));
    l2 = new Line(point, new Vector2((float) q2.Real, (float) q2.Imaginary));
}

static Complex GetQ(float r, Complex cp, double d, Complex c)
{
    return c + r * (cp / (r + Complex.ImaginaryOne * d));
}
Move the circle to the origin, rotate to bring the point on X and downscale by R to obtain a unit circle.
Now tangency is achieved when the origin (0, 0), the (reduced) given point (d, 0) and an arbitrary point on the unit circle (cos t, sin t) form a right triangle.
cos t (cos t - d) + sin t sin t = 1 - d cos t = 0
From this, you draw
cos t = 1 / d
and
sin t = ±√(1-1/d²).
To get the tangency points in the initial geometry, upscale, unrotate and untranslate. (These are simple linear algebra operations.) Notice that there is no need to perform the direct transform explicitly. All you need is d, ratio of the distance center-point over the radius.
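Here is that recipe in a short Python sketch (my own transcription; the names are illustrative). With the example from the trigonometry answer above (C = (1,1), r = 2, P = (5,2)) it returns the same tangency points:

import numpy as np

def tangent_points(C, P, r):
    '''tangency points on the circle (centre C, radius r) of the tangents from P'''
    C, P = np.asarray(C, float), np.asarray(P, float)
    v = P - C
    d = np.linalg.norm(v) / r          # ratio distance centre-point over radius
    if d < 1:
        raise ValueError("P is inside the circle: no tangent")
    u = v / np.linalg.norm(v)          # unit vector C -> P (the "unrotate" direction)
    n = np.array([-u[1], u[0]])        # perpendicular to u
    cos_t, sin_t = 1/d, np.sqrt(1 - 1/d**2)
    return [C + r*(cos_t*u + s*sin_t*n) for s in (+1.0, -1.0)]

print(tangent_points((1, 1), (5, 2), 2))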
I have in mind the following experiment to run in Matlab and I am asking for help to implement step (3). Any suggestions would be much appreciated.
(1) Consider the random variables X and Y both uniformly distributed on [0,1]
(2) Draw N realisation from the joint distribution of X and Y assuming that X and Y are independent (meaning that X and Y are uniformly jointly distributed on [0,1]x[0,1]). Each draw will be in [0,1]x[0,1].
(3) Transform each draw in [0,1]x[0,1] into a draw in [0,1] using the Hilbert space-filling curve: under the Hilbert curve mapping, the draw in [0,1]x[0,1] should be the image of one (or more, because of surjectivity) point(s) in [0,1]. I want to pick one of these points. Is there any pre-built package in Matlab doing this?
I found this answer, which I don't think does what I want, as it explains how to obtain the Hilbert value of the draw (the curve length from the start of the curve to the picked point).
On Wikipedia I found this code in the C language (from (x,y) to d) which, again, does not answer my question.
EDIT This answer does not address updated version of the question, which explicitly asks about constructing Hilbert curve. Instead, this answer addresses a related question on construction of bijective mapping, and the relation to uniform distribution.
Your problem is not really well defined. If you only need the resulting distribution to be uniform, nothing is stopping you from simply picking f:(X,Y)->X. The result would be uniform regardless of whether X and Y are correlated. From your post I can only presume that what you want, in fact, is for the resulting transformation to be bijective, or as close to it as possible given machine precision limitations.
Worth noting that unless you need the algorithm that is best in preserving locality (which is clearly not required for resulting distribution to be bijective, not to mention uniform), there's no need to bother constructing Hilbert curves that you mention in your question. They have just as much to do with the solution as any other space-filling curve, and are incredibly computationally intensive.
So assuming you're looking for a bijective mapping, your question is equivalent to asking whether the set of points in a [unit] square has the same cardinality as the set of points in a [unit] line segment, and if it is, how to construct that bijection, i.e. 1-to-1 correspondence. The intuition says the square should have a higher cardinality, and Cantor spent 3 years trying to prove that, eventually proving quite the opposite - that these sets are in fact equinumerous. He was so surprised at his discovery that he wrote:
I see it, but I don't believe it!
The most commonly referred-to bijection fulfilling** this criterion is the following. Represent x and y in their decimal form, i.e. x = 0. x1 x2 x3 x4 x5..., and y = 0. y1 y2 y3 y4 y5..., and let f:(X,Y)->Z be z = 0. x1 y1 x2 y2 x3 y3 x4 y4 x5 y5..., i.e. alternating the decimals of the two numbers. The idea behind the bijection is trivial, though a rigorous proof requires quite a bit of prior knowledge.
** The caveat is that if we take e.g. x = 1/3 = 0.33333... and y = 1/5 = 0.199999... = 0.200000..., we can see there are two sequences corresponding to them: z = 0.313939393939... and z = 0.323030303030.... To overcome this obstacle we have to prove that adding a countable set to an uncountable one does not change the cardinality of the latter.
In reality we have to deal with machine precision and not pure math, which strictly speaking means both sets are actually finite and hence not equinumerous (assuming you store the result with the same precision as the original numbers). This means we are forced to make some assumptions and lose some information, such as, in this case, the last half of the significant digits of x and y. That is, unless we use a different data type that allows storing the result with double the precision of the original variables.
Finally, sample implementation in Matlab:
x = rand();
y = rand();
chars = [num2str(x, '%.17f'); num2str(y, '%.17f')];
z = str2double(['0.' reshape(chars(:,3:end), 1, [])]);
>> cellstr(['x=' num2str(x, '%.17f'); 'y=' num2str(y, '%.17f'); 'z=' num2str(z, '%.17f')])
ans =
'x=0.65549803980353738'
'y=0.10975505072305158'
'z=0.61505947958500362'
Edit This answers the original request for a transformation f(x,y) -> t ~ U[0,1] given x,y ~ U[0,1], and additionally for x and y correlated. The updated question asks specifically for a Hilbert curve, H(x,y) -> t ~ U[0,1] and only for x,y ~ U[0,1] so this answer is no longer relevant.
Consider a random uniform sequence in [0,1] r1, r2, r3, .... You are assigning this sequence to pairs of numbers (x1,y1), (x2,y2), .... What you are asking for is a transformation on pairs (x,y) which yield a uniform random number in [0,1].
Consider the random subsequence r1, r3, ... corresponding to x1, x2, .... If you trust that your number generator is random and uncorrelated in [0,1], then the subsequence x1, x2, ... should also be random and uncorrelated in [0,1]. So the rather simple answer to the first part of your question is a projection onto either the x or y axis. That is, just pick x.
Next consider correlations between x and y. Since you haven't specified the nature of the correlation, let's assume a simple scaling of the axes,
such as x' => [0, 0.5], y' => [0, 3.0], followed by a rotation. The scaling doesn't introduce any correlation since x' and y' are still independent. You can generate it easily enough with a matrix multiply:
M1*p = [x_scale, 0; 0, y_scale] * [x; y]
for matrix M1 and point p. You can introduce a correlation by taking this stretched form and rotating it by theta:
M2*M1*p = [cos(theta), sin(theta); -sin(theta), cos(theta)]*M1*p
Putting it all together with theta = pi/4, or 45 degrees, you can see that larger values of y are correlated with larger values of x:
cos_t = cos(pi/4); % at 45 degrees, sin(t) = cos(t) = 1/sqrt(2)
sin_t = cos_t;
M2 = [cos_t, sin_t; -sin_t, cos_t];
M1 = [0.5, 0.0; 0.0, 3.0];
p = rand(2,1000);
p_prime = M2*M1*p;
plot(p_prime(1,:), p_prime(2,:), '.');
axis('equal');
The resulting plot* shows a band of uniformly distributed numbers at a 45 degree angle:
Further transformations are possible with shear, and if you are clever about it, translation (OpenGL uses 4x4 transformation matrices so that translation can be represented as a linear transform matrix, with an extra dimension added before the transformation steps and removed before they are done).
Given a known affine correlation structure, you can transform back from random points (x',y') to points (x,y) where x and y are independent in [0,1] by solving Mk*...*M1 p = p_prime for p, or equivalently, by setting p = inv(Mk*...*M1) * p_prime, where p=[x;y]. Again, just pick x, which will be uniform in [0,1]. This doesn't work if the transformation matrix is singular, e.g., if you introduce a projection matrix Mj into the mix (though if the projection is the first step you can still recover).
* You may notice that the plot is from python rather than matlab. I don't have matlab or octave sitting in front of me right now, so I hope I got the syntax details right.
You could compute the Hilbert curve from f(x,y)=z. Basically it's a Hamiltonian path traversal. You can find a good description at Nick's spatial index hilbert curve quadtree blog, or take a look at monotonic n-ary Gray codes. I've written an implementation based on Nick's blog in PHP: http://monstercurves.codeplex.com.
I will focus only on your last point
(3) Transform each draw in [0,1]x[0,1] into a draw in [0,1] using the Hilbert space-filling curve: under the Hilbert curve mapping, the draw in [0,1]x[0,1] should be the image of one (or more, because of surjectivity) point(s) in [0,1]. I want to pick one of these points. Is there any pre-built package in Matlab doing this?
As far as I know, there aren't pre-built packages in Matlab doing this, but the good news is that the code on wikipedia can be called from MATLAB, and it is as simple as putting together the conversion routine with a gateway function in a xy2d.c file:
#include "mex.h"

// source: https://en.wikipedia.org/wiki/Hilbert_curve

// rotate/flip a quadrant appropriately
void rot(int n, int *x, int *y, int rx, int ry) {
    if (ry == 0) {
        if (rx == 1) {
            *x = n-1 - *x;
            *y = n-1 - *y;
        }
        // Swap x and y
        int t = *x;
        *x = *y;
        *y = t;
    }
}

// convert (x,y) to d
int xy2d (int n, int x, int y) {
    int rx, ry, s, d=0;
    for (s=n/2; s>0; s/=2) {
        rx = (x & s) > 0;
        ry = (y & s) > 0;
        d += s * s * ((3 * rx) ^ ry);
        rot(s, &x, &y, rx, ry);
    }
    return d;
}

/* The gateway function */
void mexFunction( int nlhs, mxArray *plhs[],
                  int nrhs, const mxArray *prhs[])
{
    int n; /* input scalar */
    int x; /* input scalar */
    int y; /* input scalar */

    /* check for proper number of arguments */
    if(nrhs!=3) {
        mexErrMsgIdAndTxt("MyToolbox:arrayProduct:nrhs","Three inputs required.");
    }
    if(nlhs!=1) {
        mexErrMsgIdAndTxt("MyToolbox:arrayProduct:nlhs","One output required.");
    }

    /* get the value of the scalar inputs */
    n = mxGetScalar(prhs[0]);
    x = mxGetScalar(prhs[1]);
    y = mxGetScalar(prhs[2]);

    /* create the output scalar */
    plhs[0] = mxCreateDoubleScalar(xy2d(n,x,y));
}
and compile it with mex('xy2d.c').
The above implementation
[...] assumes a square divided into n by n cells, for n a power of 2, with integer coordinates, with (0,0) in the lower left corner, (n-1,n-1) in the upper right corner.
In practice, a discretization step is required before applying the mapping. As in every discretization problem, it is crucial to choose the precision wisely. The snippet below puts everything together.
close all; clear; clc;
% number of random samples
NSAMPL = 100;
% unit square divided into n-by-n cells
% has to be a power of 2
n = 2^2;
% quantum
d = 1/n;
N = 0:d:1;
% generate random samples
x = rand(1,NSAMPL);
y = rand(1,NSAMPL);
% discretization
bX = floor(x/d);
bY = floor(y/d);
% 2d to 1d mapping
dd = zeros(1,NSAMPL);
for iid = 1:length(dd)
    dd(iid) = xy2d(n, bX(iid), bY(iid));
end
figure;
hold on;
axis equal;
plot(x, y, '.');
plot(repmat([0;1], 1, length(N)), repmat(N, 2, 1), '-r');
plot(repmat(N, 2, 1), repmat([0;1], 1, length(N)), '-r');
figure;
plot(1:NSAMPL, dd);
xlabel('# of sample')
I need to find the indices of the polygon edge nearest to a point.
So in this case the output would be 4 and 0, such that if the red point is added I know where to place the new vertex in the array. Does anyone know where to start?
(Sorry if the title is misleading, I wasn't sure how to phrase it properly.)
In this case the output would be 0 and 1, rather than the closest vertex, 4.
Point P lies on the segment AB, if two simple conditions are met together:
AP x PB = 0 //cross product, vectors are collinear or anticollinear, P lies on AB line
AP . PB > 0 //scalar product, exclude anticollinear case to ensure that P is inside the segment
So you can check all sequential vertex pairs (pseudocode):

if (P.X-V[i].X)*(V[i+1].Y-P.Y) - (P.Y-V[i].Y)*(V[i+1].X-P.X) = 0 then
    // with some tolerance if point coordinates are float
    if (P.X-V[i].X)*(V[i+1].X-P.X) + (P.Y-V[i].Y)*(V[i+1].Y-P.Y) > 0 then
        P belongs to the (i, i+1) segment
This is the fast, direct (brute-force) method.
Special data structures exist in computational geometry to quickly select candidate segments, for example the R-tree. But these more complicated methods only pay off for long (many-point) polylines, or when the same polygon is queried many times (so the pre-processing cost becomes negligible).
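For reference, here is a runnable Python sketch of the brute-force check; the function name and the eps tolerance are my own choices:

def find_segment(P, V, eps=1e-9):
    '''return (i, j) such that P lies on the polygon edge V[i]->V[j], or None'''
    n = len(V)
    for i in range(n):
        A, B = V[i], V[(i + 1) % n]
        apx, apy = P[0] - A[0], P[1] - A[1]    # vector AP
        pbx, pby = B[0] - P[0], B[1] - P[1]    # vector PB
        cross = apx * pby - apy * pbx          # collinearity test
        dot = apx * pbx + apy * pby            # inside-the-segment test
        if abs(cross) <= eps and dot >= 0:     # eps: tolerance, scale to your coordinates
            return i, (i + 1) % n
    return None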
I'll assume that the new point is to be added to an edge. So you are given the coordinates of a point a = (x, y) and you want to find the indices of the edge on which it lies. Let's call the vertices of that edge b, c. Observe that the area of the triangle abc is zero.
So iterate over all edges and choose the one that minimizes area of triangle abc where a is your point and bc is current edge.
a = input point
min_area = +infinity
closest_edge = none
n = number of vertices in polygon
for(int i = 1; i <= n; i++)
{
    b = poly[ i - 1 ];
    c = poly[ i % n ];
    if(area(a, b, c) < min_area)
    {
        min_area = area(a, b, c);
        closest_edge = bc;
    }
}
You can calculate area using:
/* Computes area x 2 */
int area(a, b, c)
{
    int ans = 0;
    ans = (a.x*b.y + b.x*c.y + c.x*a.y) - (a.y*b.x + b.y*c.x + c.y*a.x);
    return ABS(ans);
}
I think you would be better off comparing the distance from the actual point to a comparable point on the line. The closest comparable point is the one that forms a perpendicular with the line, like this: a is your point in question, and b is the comparable point on the line between the two vertices that you will measure the distance to.
However, there's another method which I think might be more optimal for this case (as it seems most of your test points already lie pretty close to the desired line). Instead of finding the perpendicular foot, we can simply check the point on the line that has the same X value, like this. b in this case is a lot easier to calculate:
X = a.X - 0.X;
Slope = (1.Y - 0.Y) / (1.X - 0.X);
b.X = 0.X + X;
b.Y = 0.Y + (X * Slope);
And the distance is simply the difference in Y values between a and b:
distance = abs(a.Y - b.Y);
One thing to keep in mind is that this method will become more inaccurate as the slope increases as well as become infinite when the slope is undefined. I would suggest flipping it when the slope > 1 and checking for a b that lies at the same y rather than x. That would look like this:
Y = a.Y - 0.Y;
Inverse_Slope = (1.X - 0.X) / (1.Y - 0.Y);
b.Y = 0.Y + Y;
b.X = 0.X + (Y * Inverse_Slope);
distance = abs(a.X - b.X);
Note: In the second case you should also check whether b.X is between 0.X and 1.X, and b.Y is between 0.Y and 1.Y. That way we are not checking against points that don't lie on the line segment.
I admit I don't know the perfect terminology when it comes to this kind of thing so it might be a little confusing, but hope this helps!
Rather than checking if the point is close to an edge with a prescribed tolerance, as MBo suggested, you can find the edge with the shortest distance to the point. The distance must be computed with respect to the line segment, not the whole line.
How do you compute this distance ? Let P be the point and Q, R two edge endpoints.
Let t be in the range [0,1]; you need to minimize
D²(P, QR) = D²(P, Q + t·QR) = (PQ + t·QR)² = PQ² + 2t (PQ·QR) + t² QR²
The minimum is achieved where the derivative cancels, i.e. t = -(PQ·QR) / QR². If this quantity falls outside the range [0,1], just clamp it to 0 or 1.
To summarize,
if t <= 0, D² = PQ²
if t >= 1, D² = PR²
otherwise, D² = PQ² - t² QR²
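Put into Python, the whole search might look like the sketch below; the helper names are mine and the clamping follows the summary above:

def dist2_point_segment(P, Q, R):
    '''squared distance from P to segment QR, per the formulas above'''
    pq = (Q[0] - P[0], Q[1] - P[1])             # vector PQ
    qr = (R[0] - Q[0], R[1] - Q[1])             # vector QR
    qr2 = qr[0]**2 + qr[1]**2
    t = 0.0 if qr2 == 0 else -(pq[0]*qr[0] + pq[1]*qr[1]) / qr2
    t = max(0.0, min(1.0, t))                   # clamp to the segment
    cx, cy = Q[0] + t*qr[0], Q[1] + t*qr[1]     # closest point on the segment
    return (cx - P[0])**2 + (cy - P[1])**2

def nearest_edge(P, V):
    '''index i of the polygon edge V[i]->V[i+1] with the shortest distance to P'''
    n = len(V)
    return min(range(n), key=lambda i: dist2_point_segment(P, V[i], V[(i + 1) % n]))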
Loop through all the vertices, calculate the distance from each vertex to the point, and find the minimum.

double min_dist = Double.MAX_VALUE;
int min_index = -1;
for(int i = 0; i < num_vertices; ++i) {
    double d = dist(vertices[i], point);
    if(d < min_dist) {
        min_dist = d;
        min_index = i;
    }
}
I am trying to find the angle of the outer line of the object in the green region of the image as shown in the image above…
For that, I have scanned the green region and get the points (dark blue points as shown in the image)...
As you can see, the points do not form a straight line, so I can't find the angle easily.
So I think I have to find a middle way, and
that is to find the line such that the distance between each point and the line remains as small as possible.
So how can I find the line to which the points have minimum total distance?
Is there an algorithm for this, or any good way other than this?
The obvious route would be to do a least-squares linear regression through the points.
The standard least squares regression formulae for x on y or y on x assume there is no error in one coordinate and minimize the deviations in the coordinate from the line.
However, it is perfectly possible to set up a least squares calculation such that the value minimized is the sum of squares of the perpendicular distances of the points from the lines. I'm not sure whether I can locate the notebooks where I did the mathematics - it was over twenty years ago - but I did find the code I wrote at the time to implement the algorithm.
With:
n = ∑ 1
sx = ∑ x
sx2 = ∑ x²
sy = ∑ y
sy2 = ∑ y²
sxy = ∑ x·y
You can calculate the variances of x and y and the covariance:
vx = sx2 - ((sx * sx) / n)
vy = sy2 - ((sy * sy) / n)
vxy = sxy - ((sx * sy) / n)
Now, if the covariance is 0, then there is no semblance of a line. Otherwise, the slope and intercept can be found from:
slope = quad((vx - vy) / vxy, vxy)
intcpt = (sy - slope * sx) / n
Where quad() is a function that calculates the root of the quadratic equation x² + b·x - 1 = 0 that has the same sign as c. In C, that would be:
double quad(double b, double c)
{
    double b1;
    double q;

    b1 = sqrt(b * b + 4.0);
    if (c < 0.0)
        q = -(b1 + b) / 2;
    else
        q = (b1 - b) / 2;
    return (q);
}
From there, you can find the angle of your line easily enough.
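A direct Python transcription of these formulas (my own sketch; the vxy != 0 guard is left to the caller, per the note above) makes the recipe easy to test:

import math

def orthogonal_fit(xs, ys):
    '''slope and intercept of the least-perpendicular-distance line'''
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sx2 = sum(x*x for x in xs)
    sy2 = sum(y*y for y in ys)
    sxy = sum(x*y for x, y in zip(xs, ys))
    vx = sx2 - sx*sx/n
    vy = sy2 - sy*sy/n
    vxy = sxy - sx*sy/n        # assumes vxy != 0 (see note above)
    # root of q^2 + b*q - 1 = 0 with the same sign as vxy
    b = (vx - vy) / vxy
    b1 = math.sqrt(b*b + 4.0)
    slope = (b1 - b)/2 if vxy >= 0 else -(b1 + b)/2
    intcpt = (sy - slope*sx) / n
    return slope, intcpt

print(orthogonal_fit([0, 1, 2], [0.1, 0.9, 2.0]))  # nearly-collinear sample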
Obviously the line will pass through the averaged point (x_average, y_average).
For direction you may use the following algorithm (derived directly from minimizing average square distance between line and points):
dx[i] = x[i] - x_average;
dy[i] = y[i] - y_average;
a = sum(dx[i]^2 - dy[i]^2);
b = sum(2 * dx[i] * dy[i]);
direction = atan2(b, a) / 2;  // the minimizing direction satisfies tan(2*direction) = b/a
Usual linear regression will not work here, because it assumes that the variables are not symmetric: one depends on the other, so if you swap x and y you will get a different solution.
The hough transform might be also a good option:
http://en.wikipedia.org/wiki/Hough_transform
You might try searching for "total least squares", or "least orthogonal distance" but when I tried that I saw nothing immediately applicable.
Anyway, suppose you have points x[], y[], and the line is represented by a*x + b*y + c = 0, where hypot(a, b) = 1. The least-orthogonal-distance line is the one that minimizes Sum{ (a*x[i] + b*y[i] + c)^2 }. Some algebra shows that:
c is -(a*X + b*Y), where X is the mean of the x's and Y is the mean of the y's.
(a, b) is the eigenvector of C corresponding to its smaller eigenvalue, where C is the covariance matrix of the x's and y's.
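In NumPy this is only a few lines; the following sketch (names are mine) returns (a, b, c) for the line a*x + b*y + c = 0:

import numpy as np

def fit_line_tls(x, y):
    '''least-orthogonal-distance line a*x + b*y + c = 0 with hypot(a, b) = 1'''
    X, Y = x.mean(), y.mean()
    C = np.cov(np.vstack([x, y]))    # 2x2 covariance matrix of the x's and y's
    w, V = np.linalg.eigh(C)         # eigenvalues in ascending order
    a, b = V[:, 0]                   # eigenvector of the smaller eigenvalue
    return a, b, -(a*X + b*Y)        # c = -(a*X + b*Y)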
I am trying to determine whether a line segment (i.e. between two points) intersects a sphere. I am not interested in the position of the intersection, just whether or not the segment intersects the sphere surface. Does anyone have any suggestions as to what the most efficient algorithm for this would be? (I'm wondering if there are any algorithms that are simpler than the usual ray-sphere intersection algorithms, since I'm not interested in the intersection position)
If you are only interested in knowing whether it intersects or not, then your basic algorithm will look like this...
Consider you have the vector of your ray line, A -> B.
You know that the shortest distance between this vector and the centre of the sphere occurs at the intersection of your ray vector and a vector which is at 90 degrees to this which passes through the centre of the sphere.
You hence have two vectors whose equations are fully defined. You can work out the intersection point of the vectors using linear algebra, and hence the length of the line (or, more efficiently, the square of the length of the line) and test whether this is less than the radius (or the square of the radius) of your sphere.
I don't know what the standard way of doing it is, but if you only want to know IF it intersects, here is what I would do.
General rule ... avoid doing sqrt() or other costly operations. When possible, deal with the square of the radius.
Determine if the starting point is inside the radius of the sphere. If you know that this is never the case, then skip this step. If you are inside, your ray will intersect the sphere.
From here on, your starting point is outside the sphere.
Now, imagine the smallest box that will fit around the sphere. If you are outside that box, check the x-, y- and z-direction of the ray to see whether it is moving toward the side of the box nearest your ray's start. This should be a simple sign check, or a comparison against zero. If you are outside the box and moving away from it, you will never intersect the sphere.
From here on, you are in the more complicated phase. Your starting point is between the imaginary box and the sphere. You can get a simplified expression using calculus and geometry.
The gist of what you want to do is determine if the shortest distance between your ray and the sphere is less than radius of the sphere.
Let your ray be represented by (x0 + i·t, y0 + j·t, z0 + k·t), and the centre of your sphere be at (xS, yS, zS). We want to find the t that minimizes the length of (xS - x0 - i·t, yS - y0 - j·t, zS - z0 - k·t).
Let x = xS - x0, y = yS - y0, z = zS - z0, and let D be the squared magnitude of that vector:
D = x² - 2·x·i·t + (i·t)² + y² - 2·y·j·t + (j·t)² + z² - 2·z·k·t + (k·t)²
D = (i² + j² + k²)·t² - 2·(x·i + y·j + z·k)·t + (x² + y² + z²)
dD/dt = 0 = 2·t·(i² + j² + k²) - 2·(x·i + y·j + z·k)
t = (x·i + y·j + z·k) / (i² + j² + k²)
Plug t back into the equation for D. If the result is less than or equal to the square of the sphere's radius, you have an intersection; if it is greater, there is no intersection.
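A quick Python sketch of this closest-approach test for the infinite line (for a true segment you would additionally clamp t to [0, 1], as other answers do; all names here are illustrative):

import numpy as np

def line_hits_sphere(p0, direction, centre, r):
    '''closest-approach test for the infinite line p0 + t*direction against a sphere'''
    v = np.asarray(centre, float) - np.asarray(p0, float)   # (x, y, z) above
    d = np.asarray(direction, float)                        # (i, j, k) above
    t = np.dot(v, d) / np.dot(d, d)                         # minimizer of D
    D2 = np.dot(v - t*d, v - t*d)                           # plug t back in
    return D2 <= r*r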
This page has an exact solution for this problem. Essentially, you substitute the equation for the line into the equation for the sphere, then compute the discriminant of the resulting quadratic. The sign of the discriminant indicates whether there is an intersection.
Are you still looking for an answer 13 years later? Here is a complete and simple solution
Assume the following:
the line segment is defined by endpoints as 3D vectors v1 and v2
the sphere is centered at vc with radius r
We define the three sides of a triangle ABC as vectors:
A = v1-vc
B = v2-vc
C = v1-v2
If |A| < r or |B| < r, then we're done; the line segment intersects the sphere.
After doing the check above, test whether the foot of the perpendicular from the sphere's centre to the line falls within the segment; with these vectors, it does exactly when dot(A, C) >= 0 and dot(B, C) <= 0. If it falls outside, we're done; the nearest point of the segment to the centre is an endpoint, which we already know lies outside the sphere, so the segment does not intersect.
If we are not done yet, the line segment may or may not intersect the sphere. To find out, we just need to find H, which is the height of the triangle ABC taking C as the base. First we need φ, the angle between A and C:
φ = arccos( dot(A,C) / (|A||C|) )
and then solve for H:
sin(φ) = H/|A|
===> H = |A|sin(φ) = |A| sqrt(1 - (dot(A,C) / (|A||C|))^2)
and we are done. The result is
if H < r, then the line segment intersects the sphere
if H = r, then the line segment is tangent to the sphere
if H > r, then the line segment does not intersect the sphere
Here that is in Python:
import numpy as np

def unit_projection(v1, v2):
    '''takes the dot product between v1, v2 after normalization'''
    u1 = v1 / np.linalg.norm(v1)
    u2 = v2 / np.linalg.norm(v2)
    return np.dot(u1, u2)

def check_intersects_sphere(xa, ya, za, xb, yb, zb, xc, yc, zc, radius):
    '''checks if a line segment intersects a sphere'''
    v1 = np.array([xa, ya, za])
    v2 = np.array([xb, yb, zb])
    vc = np.array([xc, yc, zc])
    A = v1 - vc
    B = v2 - vc
    C = v1 - v2
    # an endpoint inside the sphere guarantees an intersection
    if np.linalg.norm(A) < radius or np.linalg.norm(B) < radius:
        return True
    # if the foot of the perpendicular from the centre lies outside the
    # segment, the nearest point is an endpoint, already known to be outside
    if np.dot(A, C) < 0 or np.dot(B, C) > 0:
        return False
    # H: distance from the centre to the line through v1 and v2
    H = np.linalg.norm(A) * np.sqrt(1 - unit_projection(A, C)**2)
    return H < radius
Note that I have written this so that it returns False when either endpoint is on the surface of the sphere, or when the line segment is tangent to the sphere, because it serves my purposes better.
This might be essentially what user Cruachan suggested. A comment there suggests that other answers are "too elaborate". There might be a more elegant way to implement this that uses more compact linear algebra operations and identities, but I suspect that the amount of actual compute required boils down to something like this. If someone sees somewhere to save some effort please do let us know.
Here is a test of the code. The figure below shows several trial line segments originating from a position (-1, 1, 1) , with a unit sphere at (1,1,1). Blue line segments have intersected, red have not.
And here is another figure which verifies that line segments that stop just short of the sphere's surface do not intersect, even if the infinite ray that they belong to does:
Here is the code that generates the image:
import matplotlib.pyplot as plt

radius = 1
xc, yc, zc = 1, 1, 1
xa, ya, za = xc-2, yc, zc
nx, ny, nz = 4, 4, 4
xx = np.linspace(xc-2, xc+2, nx)
yy = np.linspace(yc-2, yc+2, ny)
zz = np.linspace(zc-2, zc+2, nz)
n = nx * ny * nz
XX, YY, ZZ = np.meshgrid(xx, yy, zz)
xb, yb, zb = np.ravel(XX), np.ravel(YY), np.ravel(ZZ)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for i in range(n):
    if xb[i] == xa:
        continue
    intersects = check_intersects_sphere(xa, ya, za, xb[i], yb[i], zb[i], xc, yc, zc, radius)
    color = ['r', 'b'][int(intersects)]
    s = [0.3, 0.7][int(intersects)]
    ax.plot([xa, xb[i]], [ya, yb[i]], [za, zb[i]], '-o', color=color, ms=s, lw=s, alpha=s/0.7)

u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
x = np.outer(np.cos(u), np.sin(v)) + xc
y = np.outer(np.sin(u), np.sin(v)) + yc
z = np.outer(np.ones(np.size(u)), np.cos(v)) + zc
ax.plot_surface(x, y, z, rstride=4, cstride=4, color='k', linewidth=0, alpha=0.25, zorder=0)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.tight_layout()
plt.show()
you sort of have to work out the position anyway if you want accuracy. The only way to improve speed algorithmically is to switch from ray-sphere intersection to ray-bounding-box intersection.
Or you could go deeper and try to improve sqrt and the other inner function calls.
http://wiki.cgsociety.org/index.php/Ray_Sphere_Intersection