Circle center : Cx,Cy
Circle radius : a
Point from which we need to draw a tangent line : Px,Py
I need the formula to find the two tangent points (t1x, t1y) and (t2x, t2y), given all the above.
Edit: Is there any simpler solution using vector algebra or something, rather than finding the equations of the two lines and then solving them separately? Also, this question is not off-topic, because I need to write code to find this optimally.
Here is one way using trigonometry. If you understand trig, this method is easy to understand, though it may not give an exactly correct answer even when one exists, due to the inexactness of floating-point trig functions.
The points C = (Cx, Cy) and P = (Px, Py) are given, as well as the radius a. The radius is shown twice in my diagram, as a1 and a2. You can easily calculate the distance b between points P and C, and you can see that segment b forms the hypotenuse of two right triangles with side a. The angle theta (also shown twice in my diagram) is between the hypotenuse and adjacent side a so it can be calculated with an arccosine. The direction angle of the vector from point C to point P is also easily found by an arctangent. The direction angles of the tangency points are the sum and difference of the original direction angle and the calculated triangle angle. Finally, we can use those direction angles and the distance a to find the coordinates of those tangency points.
Here is code in Python 3.
# Example values
(Px, Py) = (5, 2)
(Cx, Cy) = (1, 1)
a = 2
from math import sqrt, acos, atan2, sin, cos
b = sqrt((Px - Cx)**2 + (Py - Cy)**2) # hypot() also works here
th = acos(a / b) # angle theta
d = atan2(Py - Cy, Px - Cx) # direction angle of point P from C
d1 = d + th # direction angle of point T1 from C
d2 = d - th # direction angle of point T2 from C
T1x = Cx + a * cos(d1)
T1y = Cy + a * sin(d1)
T2x = Cx + a * cos(d2)
T2y = Cy + a * sin(d2)
There are obvious ways to combine those calculations and make them a little more optimized, but I'll leave that to you. It is also possible to use the angle addition and subtraction formulas of trigonometry with a few other identities to completely remove the trig functions from the calculations. However, the result is more complicated and difficult to understand. Without testing I do not know which approach is more "optimized" but that depends on your purposes anyway. Let me know if you need this other approach, but the other answers here give you other approaches anyway.
Note that if a > b then acos(a / b) will throw an exception, but this means that point P is inside the circle and there is no tangency point. If a == b then point P is on the circle and there is only one tangency point, namely point P itself. My code is for the case a < b. I'll leave it to you to code the other cases and to decide the needed precision to decide if a and b are equal.
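For reference, here is a hedged sketch of the trig-free variant mentioned above, obtained by expanding cos(d ± th) and sin(d ± th) with the angle addition formulas. It reuses the example values (Px, Py), (Cx, Cy) and a from the code above, and assumes a < b:

from math import sqrt

dx = Px - Cx
dy = Py - Cy
b2 = dx*dx + dy*dy            # b squared
s = sqrt(b2 - a*a)            # tangent length; requires a < b
T1x = Cx + a * (dx*a - dy*s) / b2
T1y = Cy + a * (dy*a + dx*s) / b2
T2x = Cx + a * (dx*a + dy*s) / b2
T2y = Cy + a * (dy*a - dx*s) / b2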
Here's another way using complex numbers.
If a is the direction (a complex number of length 1) from the centre c to the tangent point on the circle of radius r, and d is the (real) length along the tangent from that point to p, then (because the direction of the tangent is I*a)
p = c + r*a + d*I*a
rearranging
(r+I*d)*a = p-c
But a has length 1 so taking the length we get
|r+I*d| = |p-c|
We know everything but d, so we can solve for d:
d = +- sqrt( |p-c|*|p-c| - r*r)
and then find the a's and the points on the circle, one of each for each value of d above:
a = (p-c)/(r+I*d)
q = c + r*a
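A hedged Python sketch of this derivation (treating points as complex numbers; the function name is mine, and p is assumed to lie outside the circle):

from math import sqrt

def tangent_points(c, p, r):
    # a = (p - c) / (r + I*d), q = c + r*a, for d = +/- tangent length
    d = sqrt(abs(p - c)**2 - r*r)
    return [c + r * (p - c) / (r + 1j*s) for s in (d, -d)]

# example: circle centre 1+1j, radius 2, external point 5+2j
print(tangent_points(1 + 1j, 5 + 2j, 2))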
Hmm, not really an algorithm question (people tend to confuse algorithms and equations). If you want to write code then do so (you did not specify a language, nor what prevents you from doing this, which is the reason for the close votes)... Without this info your OP is just asking for a math equation, which is indeed off-topic here, and by answering this I risk (rightful) down-votes too (but this is/was asked a lot here with much less info, and 4 reopen votes against 1 close vote put my decision weight on reopening and answering anyway).
You can exploit the fact that you are in 2D as in 2D perpendicular vectors to vector a(x,y) are computed like this:
c = (-y, x)
d = ( y,-x)
c = -d
so you swap x,y and negate one (which one you negate determines whether the perpendicular vector is CW or CCW). It is really a rotation formula, but as we rotate by 90deg the cos,sin terms reduce to 0 and ±1.
Now, the normal n at any circumference point on the circle lies on the line going through that point and the circle's center. So, putting all this together, your tangents are:
// normal
nx = Px-Cx
ny = Py-Cy
// tangent 1
tx = -ny
ty = +nx
// tangent 2
tx = +ny
ty = -nx
If you want unit vectors then just divide by the radius a (not sure why you do not call it r like the rest of the math world), so:
// normal
nx = (Px-Cx)/a
ny = (Py-Cy)/a
// tangent 1
tx = -ny
ty = +nx
// tangent 2
tx = +ny
ty = -nx
Let's go through the derivation process:
As you can see, if the quantity inside the square root is < 0, it's because the point is interior to the circumference. When the point is outside the circumference there are two solutions, depending on the sign of the square root.
The rest is easy. Take atan(solution) and be careful with the signs here; you may better do some checks.
Use (2) and then undo the (1) transformations, and that's all.
c# implementation of dmuir's answer:
static void FindTangents(Vector2 point, Vector2 circle, float r, out Line l1, out Line l2)
{
    var p = new Complex(point.x, point.y);
    var c = new Complex(circle.x, circle.y);
    var cp = p - c;
    var d = Math.Sqrt(cp.Real * cp.Real + cp.Imaginary * cp.Imaginary - r * r);
    var q = GetQ(r, cp, d, c);
    var q2 = GetQ(r, cp, -d, c);
    l1 = new Line(point, new Vector2((float) q.Real, (float) q.Imaginary));
    l2 = new Line(point, new Vector2((float) q2.Real, (float) q2.Imaginary));
}

static Complex GetQ(float r, Complex cp, double d, Complex c)
{
    return c + r * (cp / (r + Complex.ImaginaryOne * d));
}
Move the circle to the origin, rotate to bring the point on X and downscale by R to obtain a unit circle.
Now tangency is achieved when the origin (0, 0), the (reduced) given point (d, 0) and an arbitrary point on the unit circle (cos t, sin t) form a right triangle.
cos t (cos t - d) + sin t · sin t = cos²t + sin²t - d·cos t = 1 - d·cos t = 0
From this, you draw
cos t = 1 / d
and
sin t = ±√(1-1/d²).
To get the tangency points in the initial geometry, upscale, unrotate and untranslate. (These are simple linear algebra operations.) Notice that there is no need to perform the direct transform explicitly. All you need is d, ratio of the distance center-point over the radius.
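A hedged Python sketch of this reduction (function name is mine; assumes the point is strictly outside the circle):

import math

def tangency_points(cx, cy, px, py, r):
    dx, dy = px - cx, py - cy
    dist = math.hypot(dx, dy)
    d = dist / r                       # the only quantity really needed
    ct = 1.0 / d                       # cos t
    st = math.sqrt(1.0 - ct*ct)        # sin t, taken with both signs below
    ux, uy = dx / dist, dy / dist      # the unrotation maps (1, 0) to (ux, uy)
    # upscale, unrotate and untranslate the reduced points (cos t, ±sin t)
    return [(cx + r*(ct*ux - s*uy), cy + r*(ct*uy + s*ux)) for s in (st, -st)]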
The Question
Okay so basically what I'm trying to do, is calculating the Y location on a Cubic Curve / Bezier Curve / Spline when the X location is given.
I've searched everywhere on Stack Overflow and Google and I couldn't find anything that actually worked!
The Curve Points
x1 = 50d;
y1 = 400d / 2d + 100d;
x2 = 400d;
y2 = 400d / 2d + 100d;
x3 = 600d - 400d;
y3 = 400d / 2d - 100d;
x4 = 600d - 50d;
y4 = 400d / 2d - 100d;
The reason that I calculate "600 - 400" and don't just write "200" is that in my code "600" is actually the width of the window in which the Cubic Curve is rendered, so it actually says "width - 400" in my code.
The following code can then calculate the X & Y on a Cubic Curve when T is given:
t = 0.5d;
cx = 3d * (x2 - x1);
cy = 3d * (y2 - y1);
bx = 3d * (x3 - x2) - cx;
by = 3d * (y3 - y2) - cy;
ax = x4 - x1 - cx - bx;
ay = y4 - y1 - cy - by;
point_x = ax * (t * t * t) + bx * (t * t) + cx * t + x1;
point_y = ay * (t * t * t) + by * (t * t) + cy * t + y1;
So again, what I'm trying to calculate is the Y location of a Curve when you know an X location. But the only thing that I'm able to calculate is both an X & Y location on the Curve when T is given.
This is my first post, so if something isn't written 100% correctly, I apologize!
A cubic curve can have multiple 'y' values for an 'x' value, so you're going to have to perform root finding, after rotating the cubic curve so that it's aligned with the x/y axes. http://pomax.github.io/bezierinfo/#intersections covers this concept, but the idea is this:
take your curve, defined by points {p1,p2,p3,p4}, and take your line at x=X, defined by points {p5,p6} (where p5 is some coordinate (x,...) and p6 is another coordinate (x,...); the important part is that it's a vertical line, so both points have the same x value).
translate and rotate your cubic curve and your line, together, so that your line becomes horizontal, at new height y = 0.
you can now perform cubic root finding on the curve's y function. This may generate 0, 1, 2, or 3 distinct t values.
for each of these t values: in your normal, unrotated curve, compute the (x,y) coordinate given that t value. You will now have all the y values for a given x (all points will have the same x value, too).
In the article linked, have a look at the source for the cubic curve/line example, and to see how the rotation/rootfinding works, see the BezierCurve align(Point start, Point end) {... and float[] findAllRoots(int derivative, float[] values) {... functions. (especially note that after rotation, we're only finding the roots for the y-function! the x-function has become irrelevant for what we want to do)
If your formulas are correct, theoretically one might use Formula for cubic function roots to express t dependency from x. Then from x you find t, from t you find y.
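A hedged Python sketch of that idea, reusing the coefficient convention from the question's code and using numpy.roots in place of the closed-form cubic formula (no rotation step is needed here because the line x = X is vertical; the function name is mine):

import numpy as np

def bezier_y_for_x(x, x1, y1, x2, y2, x3, y3, x4, y4):
    # coefficients exactly as in the question's code
    cx = 3*(x2 - x1); bx = 3*(x3 - x2) - cx; ax = x4 - x1 - cx - bx
    cy = 3*(y2 - y1); by = 3*(y3 - y2) - cy; ay = y4 - y1 - cy - by
    # solve ax*t^3 + bx*t^2 + cx*t + (x1 - x) = 0 for t
    roots = np.roots([ax, bx, cx, x1 - x])
    ts = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 1]
    return [ay*t**3 + by*t**2 + cy*t + y1 for t in ts]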
I am trying to find the angle of the outer line of the object in the green region of the image, as shown in the image above.
For that, I have scanned the green region and got the points (dark blue points as shown in the image).
As you can see, the points do not form a straight line, so I can't find the angle easily.
So I think I have to find a middle way, and that is to find the line such that the distance between each point and the line remains as small as possible.
So how can I find the line so that each point is at the minimum possible distance from it?
Is there any algorithm for this or is there any good way other than this?
The obvious route would be to do a least-squares linear regression through the points.
The standard least squares regression formulae for x on y or y on x assume there is no error in one coordinate and minimize the deviations in the coordinate from the line.
However, it is perfectly possible to set up a least squares calculation such that the value minimized is the sum of squares of the perpendicular distances of the points from the lines. I'm not sure whether I can locate the notebooks where I did the mathematics - it was over twenty years ago - but I did find the code I wrote at the time to implement the algorithm.
With:
n = ∑ 1
sx = ∑ x
sx2 = ∑ x²
sy = ∑ y
sy2 = ∑ y²
sxy = ∑ x·y
You can calculate the variances of x and y and the covariance:
vx = sx2 - ((sx * sx) / n)
vy = sy2 - ((sy * sy) / n)
vxy = sxy - ((sx * sy) / n)
Now, if the covariance is 0, then there is no semblance of a line. Otherwise, the slope and intercept can be found from:
slope = quad((vx - vy) / vxy, vxy)
intcpt = (sy - slope * sx) / n
Where quad() is a function that calculates the root of the quadratic equation x² + b·x − 1 = 0 that has the same sign as c. In C, that would be:
double quad(double b, double c)
{
    double b1;
    double q;

    b1 = sqrt(b * b + 4.0);
    if (c < 0.0)
        q = -(b1 + b) / 2;
    else
        q = (b1 - b) / 2;
    return (q);
}
From there, you can find the angle of your line easily enough.
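Here is a hedged Python transcription of those formulas (the function name is mine); the angle is then atan(slope):

import math

def perpendicular_fit(pts):
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sx2 = sum(x*x for x, _ in pts)
    sy2 = sum(y*y for _, y in pts)
    sxy = sum(x*y for x, y in pts)
    vx = sx2 - sx*sx/n
    vy = sy2 - sy*sy/n
    vxy = sxy - sx*sy/n
    # inline quad(): root of x^2 + b*x - 1 = 0 with the same sign as vxy
    b = (vx - vy) / vxy
    b1 = math.sqrt(b*b + 4.0)
    slope = -(b1 + b)/2 if vxy < 0 else (b1 - b)/2
    intcpt = (sy - slope*sx)/n
    return slope, intcpt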
Obviously the line will pass through the averaged point (x_average, y_average).
For the direction you may use the following algorithm (derived directly from minimizing the average square distance between the line and the points):
dx[i]=x[i]-x_average;
dy[i]=y[i]-y_average;
a=sum(dx[i]^2-dy[i]^2);
b=sum(2*dx[i]*dy[i]);
direction=atan2(b,a)/2;
(atan2(b, a) yields the doubled angle 2·theta, hence the division by 2.)
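A hedged Python sketch of this algorithm (the function name is mine):

import math

def line_direction(xs, ys):
    n = len(xs)
    x_avg = sum(xs)/n
    y_avg = sum(ys)/n
    a = sum((x - x_avg)**2 - (y - y_avg)**2 for x, y in zip(xs, ys))
    b = sum(2*(x - x_avg)*(y - y_avg) for x, y in zip(xs, ys))
    return math.atan2(b, a) / 2    # halved because atan2(b, a) is 2*theta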
Usual linear regression will not work here, because it treats the variables asymmetrically (one depends on the other), so if you swap x and y you will get a different solution.
The Hough transform might also be a good option:
http://en.wikipedia.org/wiki/Hough_transform
You might try searching for "total least squares", or "least orthogonal distance" but when I tried that I saw nothing immediately applicable.
Anyway suppose you have points x[],y[], and the line is represented by a*x+b*y+c = 0, where hypot(a,b) = 1. The least orthogonal distance line is the one that minimises Sum{ (a*x[i]+b*y[i]+c)^2}. Some algebra shows that:
c is -(a*X+b*Y) where X is the mean of the x's and Y the mean of the y's.
(a,b) is the eigenvector of C corresponding to its smaller eigenvalue, where C is the covariance matrix of the x's and y's.
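A hedged numpy sketch of that characterization (the function name is mine):

import numpy as np

def least_orthogonal_distance_line(xs, ys):
    X, Y = np.mean(xs), np.mean(ys)
    C = np.cov(np.vstack([xs, ys]))        # 2x2 covariance matrix
    w, V = np.linalg.eigh(C)               # eigenvalues in ascending order
    a, b = V[:, 0]                         # eigenvector of the smaller eigenvalue
    c = -(a*X + b*Y)
    return a, b, c                         # a*x + b*y + c = 0, hypot(a, b) = 1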
I am trying to determine whether a line segment (i.e. between two points) intersects a sphere. I am not interested in the position of the intersection, just whether or not the segment intersects the sphere surface. Does anyone have any suggestions as to what the most efficient algorithm for this would be? (I'm wondering if there are any algorithms that are simpler than the usual ray-sphere intersection algorithms, since I'm not interested in the intersection position)
If you are only interested in knowing whether it intersects or not, then your basic algorithm will look like this...
Consider you have the vector of your ray line, A -> B.
You know that the shortest distance between this vector and the centre of the sphere occurs at the intersection of your ray vector and a vector which is at 90 degrees to this which passes through the centre of the sphere.
You hence have two vectors, the equations of which are fully defined. You can work out the intersection point of the vectors using linear algebra, and hence the length of the line (or, more efficiently, the square of the length of the line) and test whether this is less than the radius (or the square of the radius) of your sphere.
I don't know what the standard way of doing it is, but if you only want to know IF it intersects, here is what I would do.
General rule ... avoid doing sqrt() or other costly operations. When possible, deal with the square of the radius.
Determine if the starting point is inside the radius of the sphere. If you know that this is never the case, then skip this step. If you are inside, your ray will intersect the sphere.
From here on, your starting point is outside the sphere.
Now, imagine the smallest box that will fit around the sphere. If you are outside that box, check the x-, y- and z-direction of the ray to see whether it will intersect the side of the box that your ray starts at. This should be a simple sign check, or comparison against zero. If you are outside the box and moving away from it, you will never intersect it.
From here on, you are in the more complicated phase. Your starting point is between the imaginary box and the sphere. You can get a simplified expression using calculus and geometry.
The gist of what you want to do is determine if the shortest distance between your ray and the sphere is less than radius of the sphere.
Let your ray be represented by (x0 + i*t, y0 + j*t, z0 + k*t), and the centre of your sphere be at (xS, yS, zS). We want to find the t that minimises the length of the vector (xS - x0 - i*t, yS - y0 - j*t, zS - z0 - k*t).
Let x = xS - x0, y = yS - y0, z = zS - z0, and let D be the squared magnitude of that vector:
D = x^2 - 2*x*i*t + (i*t)^2 + y^2 - 2*y*j*t + (j*t)^2 + z^2 - 2*z*k*t + (k*t)^2
D = (i^2 + j^2 + k^2)*t^2 - 2*(x*i + y*j + z*k)*t + (x^2 + y^2 + z^2)
dD/dt = 0 = 2*t*(i^2 + j^2 + k^2) - 2*(x*i + y*j + z*k)
t = (x*i + y*j + z*k) / (i^2 + j^2 + k^2)
Plug t back into the equation for D. If the result is less than or equal to the square of the sphere's radius, you have an intersection. If it is greater, then there is no intersection.
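A hedged Python sketch of this closest-approach test (the names are mine; the clamp line restricts the test to a segment rather than the infinite ray):

def segment_near_sphere(x0, y0, z0, i, j, k, xs, ys, zs, r):
    x, y, z = xs - x0, ys - y0, zs - z0
    denom = i*i + j*j + k*k                  # assumes a non-degenerate direction
    t = (x*i + y*j + z*k) / denom
    t = max(0.0, min(1.0, t))                # segment clamp; drop this for a ray
    D = denom*t*t - 2*(x*i + y*j + z*k)*t + (x*x + y*y + z*z)
    return D <= r*r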
This page has an exact solution for this problem. Essentially, you substitute the equation for the line into the equation for the sphere, then compute the discriminant of the resulting quadratic. The value of the discriminant indicates whether there is an intersection.
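For illustration, a hedged numpy sketch of that substitution for a segment p(t) = p0 + t*(p1 - p0) with t in [0, 1] (names are mine; assumes p0 != p1):

import numpy as np

def segment_crosses_sphere(p0, p1, c, r):
    # substitute p(t) into |p - c|^2 = r^2, giving a*t^2 + b*t + cc = 0
    d = np.asarray(p1, float) - np.asarray(p0, float)
    m = np.asarray(p0, float) - np.asarray(c, float)
    a = np.dot(d, d)
    b = 2.0 * np.dot(d, m)
    cc = np.dot(m, m) - r*r
    disc = b*b - 4.0*a*cc
    if disc < 0:
        return False                     # the infinite line misses the sphere
    t1 = (-b - np.sqrt(disc)) / (2.0*a)
    t2 = (-b + np.sqrt(disc)) / (2.0*a)
    # the segment crosses the surface iff a root lies within [0, 1]
    return (0.0 <= t1 <= 1.0) or (0.0 <= t2 <= 1.0)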
Are you still looking for an answer 13 years later? Here is a complete and simple solution
Assume the following:
the line segment is defined by endpoints as 3D vectors v1 and v2
the sphere is centered at vc with radius r
We define the three sides of the triangle ABC as vectors:
A = v1-vc
B = v2-vc
C = v1-v2
If |A| < r or |B| < r, then we're done; the line segment intersects the sphere.
After doing the check above, if the foot of the perpendicular from the sphere centre to the segment's line falls outside the segment (that is, if the angle between A and C is obtuse, or the angle between B and C is acute), then we're done; the closest point of the segment to the centre is an endpoint, which we already know is outside the sphere, so the line segment does not intersect the sphere.
If neither of these conditions is met, then the line segment may or may not intersect the sphere. To find out, we just need to find H, which is the height of the triangle ABC taking C as the base. First we need φ, the angle between A and C:
φ = arccos( dot(A,C) / (|A||C|) )
and then solve for H:
sin(φ) = H/|A|
===> H = |A|sin(φ) = |A| sqrt(1 - (dot(A,C) / (|A||C|))^2)
and we are done. The result is
if H < r, then the line segment intersects the sphere
if H = r, then the line segment is tangent to the sphere
if H > r, then the line segment does not intersect the sphere
Here that is in Python:
import numpy as np

def unit_projection(v1, v2):
    '''takes the dot product between v1, v2 after normalization'''
    u1 = v1 / np.linalg.norm(v1)
    u2 = v2 / np.linalg.norm(v2)
    return np.dot(u1, u2)

def angle_between(v1, v2):
    '''computes the angle between vectors v1 and v2'''
    return np.arccos(np.clip(unit_projection(v1, v2), -1, 1))

def check_intersects_sphere(xa, ya, za, xb, yb, zb, xc, yc, zc, radius):
    '''checks if a line segment intersects a sphere'''
    v1 = np.array([xa, ya, za])
    v2 = np.array([xb, yb, zb])
    vc = np.array([xc, yc, zc])
    A = v1 - vc
    B = v2 - vc
    C = v1 - v2
    if(np.linalg.norm(A) < radius or np.linalg.norm(B) < radius):
        return True
    # foot of the perpendicular falls outside the segment, so the closest
    # point of the segment is an endpoint, already known to be outside
    if(angle_between(A, C) > np.pi/2 or angle_between(B, C) < np.pi/2):
        return False
    H = np.linalg.norm(A) * np.sqrt(1 - unit_projection(A, C)**2)
    return H < radius
Note that I have written this so that it returns False when either endpoint is on the surface of the sphere, or when the line segment is tangent to the sphere, because it serves my purposes better.
This might be essentially what user Cruachan suggested. A comment there suggests that other answers are "too elaborate". There might be a more elegant way to implement this that uses more compact linear algebra operations and identities, but I suspect that the amount of actual compute required boils down to something like this. If someone sees somewhere to save some effort please do let us know.
Here is a test of the code. The figure below shows several trial line segments originating from a position (-1, 1, 1), with a unit sphere at (1, 1, 1). Blue line segments have intersected, red have not.
And here is another figure which verifies that line segments that stop just short of the sphere's surface do not intersect, even if the infinite ray that they belong to does:
Here is the code that generates the image:
import matplotlib.pyplot as plt

radius = 1
xc, yc, zc = 1, 1, 1
xa, ya, za = xc-2, yc, zc
nx, ny, nz = 4, 4, 4
xx = np.linspace(xc-2, xc+2, nx)
yy = np.linspace(yc-2, yc+2, ny)
zz = np.linspace(zc-2, zc+2, nz)
n = nx * ny * nz
XX, YY, ZZ = np.meshgrid(xx, yy, zz)
xb, yb, zb = np.ravel(XX), np.ravel(YY), np.ravel(ZZ)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for i in range(n):
    if(xb[i] == xa): continue
    intersects = check_intersects_sphere(xa, ya, za, xb[i], yb[i], zb[i], xc, yc, zc, radius)
    color = ['r', 'b'][int(intersects)]
    s = [0.3, 0.7][int(intersects)]
    ax.plot([xa, xb[i]], [ya, yb[i]], [za, zb[i]], '-o', color=color, ms=s, lw=s, alpha=s/0.7)

# draw the sphere surface
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
x = np.outer(np.cos(u), np.sin(v)) + xc
y = np.outer(np.sin(u), np.sin(v)) + yc
z = np.outer(np.ones(np.size(u)), np.cos(v)) + zc
ax.plot_surface(x, y, z, rstride=4, cstride=4, color='k', linewidth=0, alpha=0.25, zorder=0)

ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.tight_layout()
plt.show()
You sort of have to work out the position anyway if you want accuracy. The only way to improve speed algorithmically is to switch from ray-sphere intersection to ray-bounding-box intersection. Or you could go deeper and try to improve sqrt and other inner function calls.
http://wiki.cgsociety.org/index.php/Ray_Sphere_Intersection
What's the algorithm for computing a least squares plane in (x, y, z) space, given a set of 3D data points? In other words, if I had a bunch of points like (1, 2, 3), (4, 5, 6), (7, 8, 9), etc., how would one go about calculating the best fit plane f(x, y) = ax + by + c? What's the algorithm for getting a, b, and c out of a set of 3D points?
If you have n data points (x[i], y[i], z[i]), compute the 3x3 symmetric matrix A whose entries are:
sum_i x[i]*x[i], sum_i x[i]*y[i], sum_i x[i]
sum_i x[i]*y[i], sum_i y[i]*y[i], sum_i y[i]
sum_i x[i], sum_i y[i], n
Also compute the 3 element vector b:
{sum_i x[i]*z[i], sum_i y[i]*z[i], sum_i z[i]}
Then solve Ax = b for the given A and b. The three components of the solution vector are the coefficients to the least-square fit plane {a,b,c}.
Note that this is the "ordinary least squares" fit, which is appropriate only when z is expected to be a linear function of x and y. If you are looking more generally for a "best fit plane" in 3-space, you may want to learn about "geometric" least squares.
Note also that this will fail if your points are in a line, as your example points are.
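A hedged numpy sketch of building and solving that 3x3 system (the function name is mine; x, y, z are 1-D numpy arrays):

import numpy as np

def fit_plane_normal_eqs(x, y, z):
    # build the symmetric 3x3 matrix and vector described above,
    # then solve for (a, b, c) in z = a*x + b*y + c
    A = np.array([[np.sum(x*x), np.sum(x*y), np.sum(x)],
                  [np.sum(x*y), np.sum(y*y), np.sum(y)],
                  [np.sum(x),   np.sum(y),   len(x)]])
    b = np.array([np.sum(x*z), np.sum(y*z), np.sum(z)])
    return np.linalg.solve(A, b)  # raises if the points are collinear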
The equation for a plane is: ax + by + c = z. So set up matrices like this with all your data:
    [x_0  y_0  1]
A = [x_1  y_1  1]
    [    ...    ]
    [x_n  y_n  1]
And
    [a]
x = [b]
    [c]
And
    [z_0]
B = [z_1]
    [...]
    [z_n]
In other words: Ax = B. Now solve for x which are your coefficients. But since (I assume) you have more than 3 points, the system is over-determined so you need to use the left pseudo inverse. So the answer is:
[a]
[b] = (A^T A)^-1 A^T B
[c]
And here is some simple Python code with an example:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
N_POINTS = 10
TARGET_X_SLOPE = 2
TARGET_y_SLOPE = 3
TARGET_OFFSET = 5
EXTENTS = 5
NOISE = 5
# create random data
xs = [np.random.uniform(0, 2*EXTENTS)-EXTENTS for i in range(N_POINTS)]
ys = [np.random.uniform(0, 2*EXTENTS)-EXTENTS for i in range(N_POINTS)]
zs = []
for i in range(N_POINTS):
    zs.append(xs[i]*TARGET_X_SLOPE + \
              ys[i]*TARGET_y_SLOPE + \
              TARGET_OFFSET + np.random.normal(scale=NOISE))
# plot raw data
plt.figure()
ax = plt.subplot(111, projection='3d')
ax.scatter(xs, ys, zs, color='b')
# do fit
tmp_A = []
tmp_b = []
for i in range(len(xs)):
    tmp_A.append([xs[i], ys[i], 1])
    tmp_b.append(zs[i])
b = np.matrix(tmp_b).T
A = np.matrix(tmp_A)
fit = (A.T * A).I * A.T * b
errors = b - A * fit
residual = np.linalg.norm(errors)
print("solution:")
print("%f x + %f y + %f = z" % (fit[0], fit[1], fit[2]))
print("errors:")
print(errors)
print("residual:")
print(residual)
# plot plane
xlim = ax.get_xlim()
ylim = ax.get_ylim()
X,Y = np.meshgrid(np.arange(xlim[0], xlim[1]),
                  np.arange(ylim[0], ylim[1]))
Z = np.zeros(X.shape)
for r in range(X.shape[0]):
    for c in range(X.shape[1]):
        Z[r,c] = fit[0] * X[r,c] + fit[1] * Y[r,c] + fit[2]
ax.plot_wireframe(X,Y,Z, color='k')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
Unless someone tells me how to type equations here, let me just write down the final computations you have to do:
First, given points r_i \in \R^3, i = 1..N, calculate the center of mass of all points:
r_G = \frac{\sum_{i=1}^N r_i}{N}
Then calculate the normal vector n, which together with the base vector r_G defines the plane, by calculating the 3x3 matrix A as
A = \sum_{i=1}^N (r_i - r_G)(r_i - r_G)^T
With this matrix, the normal vector n is given by the eigenvector of A corresponding to the minimal eigenvalue of A.
To find out about eigenvector/eigenvalue pairs, use any linear algebra library of your choice.
This solution is based on the Rayleigh-Ritz Theorem for the Hermitian matrix A.
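A hedged numpy sketch of this computation (the function name is mine; points is an (N, 3) array):

import numpy as np

def fit_plane_eig(points):
    r_G = points.mean(axis=0)              # center of mass
    d = points - r_G
    A = d.T @ d                            # the 3x3 matrix from above
    w, V = np.linalg.eigh(A)               # eigenvalues in ascending order
    n = V[:, 0]                            # eigenvector of the minimal eigenvalue
    return r_G, n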
See 'Least Squares Fitting of Data' by David Eberly for how I came up with this one to minimize the geometric fit (orthogonal distance from points to the plane).
bool Geom_utils::Fit_plane_direct(const arma::mat& pts_in, Plane& plane_out)
{
    bool success(false);
    int K(pts_in.n_cols);
    if(pts_in.n_rows == 3 && K > 2)  // check for bad sizing and indeterminate case
    {
        plane_out._p_3 = (1.0/static_cast<double>(K))*arma::sum(pts_in,1);
        arma::mat A(pts_in);
        A.each_col() -= plane_out._p_3; //[x1-p, x2-p, ..., xk-p]
        arma::mat33 M(A*A.t());
        arma::vec3 D;
        arma::mat33 V;
        if(arma::eig_sym(D,V,M))
        {
            // diagonalization succeeded
            plane_out._n_3 = V.col(0); // in ascending order by default
            if(plane_out._n_3(2) < 0)
            {
                plane_out._n_3 = -plane_out._n_3; // upward pointing
            }
            success = true;
        }
    }
    return success;
}
Timed at 37 microseconds fitting a plane to 1000 points (Windows 7, i7, 32-bit program).
This reduces to the Total Least Squares problem, which can be solved using an SVD decomposition.
C++ code using OpenCV:
float fitPlaneToSetOfPoints(const std::vector<cv::Point3f> &pts, cv::Point3f &p0, cv::Vec3f &nml) {
    const int SCALAR_TYPE = CV_32F;
    typedef float ScalarType;

    // Calculate centroid
    p0 = cv::Point3f(0,0,0);
    for (int i = 0; i < pts.size(); ++i)
        p0 = p0 + conv<cv::Vec3f>(pts[i]);
    p0 *= 1.0/pts.size();

    // Compose data matrix subtracting the centroid from each point
    cv::Mat Q(pts.size(), 3, SCALAR_TYPE);
    for (int i = 0; i < pts.size(); ++i) {
        Q.at<ScalarType>(i,0) = pts[i].x - p0.x;
        Q.at<ScalarType>(i,1) = pts[i].y - p0.y;
        Q.at<ScalarType>(i,2) = pts[i].z - p0.z;
    }

    // Compute SVD decomposition and the Total Least Squares solution, which is the
    // eigenvector corresponding to the least eigenvalue
    cv::SVD svd(Q, cv::SVD::MODIFY_A|cv::SVD::FULL_UV);
    nml = svd.vt.row(2);

    // Calculate the actual RMS error
    float err = 0;
    for (int i = 0; i < pts.size(); ++i)
        err += powf(nml.dot(pts[i] - p0), 2);
    err = sqrtf(err / pts.size());

    return err;
}
As with any least-squares approach, you proceed like this:
Before you start coding
Write down an equation for a plane in some parameterization, say 0 = ax + by + z + d, in three parameters (a, b, d).
Find an expression D(\vec{v}; a, b, d) for the distance from an arbitrary point \vec{v} to the plane.
Write down the sum S = \sum_{i=0}^{n} D^2(\vec{x}_i), and simplify until it is expressed in terms of simple sums of the components of v like \sum v_x, \sum v_y^2, \sum v_x v_z ...
Write down the per-parameter minimization expressions dS/da = 0, dS/db = 0, dS/dd = 0, which gives you a set of three equations in the three parameters and the sums from the previous step.
Solve this set of equations for the parameters.
(or for simple cases, just look up the form). Using a symbolic algebra package (like Mathematica) could make your life much easier; a small SymPy sketch follows after the alternatives below.
The coding
Write code to form the needed sums and find the parameters from the last set above.
Alternatives
Note that if you actually had only three points, you'd be better off just finding the plane that goes through them.
Also, if the analytic solution is unfeasible (not the case for a plane, but possible in general) you can do steps 1 and 2, and use a Monte Carlo minimizer on the sum in step 3.
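As a hedged illustration of the recipe (SymPy standing in for Mathematica; the sample points and names are mine, and D here is the residual a*x + b*y + z + d of the chosen parameterization):

import sympy as sp

a, b, d, x, y, z = sp.symbols('a b d x y z')
D = a*x + b*y + z + d                     # step 2: residual of 0 = ax + by + z + d
pts = [(1, 2, 3.1), (4, 5, 5.9), (7, 8, 9.2), (2, 1, 2.8)]
S = sum(D.subs({x: xi, y: yi, z: zi})**2 for xi, yi, zi in pts)   # step 3
eqs = [sp.diff(S, v) for v in (a, b, d)]  # step 4: set each derivative to zero
print(sp.solve(eqs, [a, b, d]))           # step 5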
CGAL::linear_least_squares_fitting_3
Function linear_least_squares_fitting_3 computes the best fitting 3D
line or plane (in the least squares sense) of a set of 3D objects such
as points, segments, triangles, spheres, balls, cuboids or tetrahedra.
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/Principal_component_analysis_ref/Function_linear_least_squares_fitting_3.html
It sounds like all you want to do is linear regression with 2 regressors. The wikipedia page on the subject should tell you all you need to know and then some.
All you have to do is solve the system of equations.
If those are your points:
(1, 2, 3), (4, 5, 6), (7, 8, 9)
That gives you the equations:
3=a*1 + b*2 + c
6=a*4 + b*5 + c
9=a*7 + b*8 + c
So your question actually should be: How do I solve a system of equations?
Therefore I recommend reading this SO question.
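For illustration, a hedged numpy sketch of solving that system (note that these particular example points are collinear, so the matrix is singular and a plain solve would fail):

import numpy as np

# the three equations above in matrix form: M @ [a, b, c] = rhs
M = np.array([[1., 2., 1.],
              [4., 5., 1.],
              [7., 8., 1.]])
rhs = np.array([3., 6., 9.])
# M is singular here (the points are collinear), so np.linalg.solve
# would raise; lstsq still returns a least-squares solution
coeffs, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print(coeffs)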
If I've misunderstood your question let us know.
EDIT:
Ignore my answer as you probably meant something else.
We first present a linear least-squares plane fitting method that minimizes the residuals of the plane equation evaluated at the provided points.
Recall that the equation for a plane passing through origin is Ax + By + Cz = 0, where (x, y, z) can be any point on the plane and (A, B, C) is the normal vector perpendicular to this plane.
The equation for a general plane (that may or may not pass through origin) is Ax + By + Cz + D = 0, where the additional coefficient D represents how far the plane is away from the origin, along the direction of the normal vector of the plane. [Note that in this equation (A, B, C) forms a unit normal vector.]
Now, we can apply a trick here and fit the plane using only provided point coordinates. Divide both sides by D and rearrange this term to the right-hand side. This leads to A/D x + B/D y + C/D z = -1. [Note that in this equation (A/D, B/D, C/D) forms a normal vector with length 1/D.]
We can set up a system of linear equations accordingly, and then solve it by an Eigen solver in C++ as follows.
// Example for 5 points
Eigen::Matrix<double, 5, 3> matA; // row: 5 points; column: xyz coordinates
Eigen::Matrix<double, 5, 1> matB = -1 * Eigen::Matrix<double, 5, 1>::Ones();

// Find the plane normal
Eigen::Vector3d normal = matA.colPivHouseholderQr().solve(matB);

// Check if the fitting is healthy
double D = 1 / normal.norm();
normal.normalize(); // normal is a unit vector from now on
bool planeValid = true;
for (int i = 0; i < 5; ++i) { // compare Ax + By + Cz + D with 0.2 (ideally Ax + By + Cz + D = 0)
    if ( fabs( normal(0)*matA(i, 0) + normal(1)*matA(i, 1) + normal(2)*matA(i, 2) + D) > 0.2) {
        planeValid = false; // 0.2 is an experimental threshold; can be tuned
        break;
    }
}
We then discuss its equivalence to the typical SVD-based method and their comparison.
The aforementioned linear least-squares (LLS) method fits the general plane equation Ax + By + Cz + D = 0, whereas the SVD-based method replaces D with D = - (Ax0 + By0 + Cz0) and fits the plane equation A(x-x0) + B(y-y0) + C(z-z0) = 0, where (x0, y0, z0) is the mean of all points that serves as the origin of the new local coordinate frame.
Comparison between two methods:
The LLS fitting method is much faster than the SVD-based method, and is suitable for use when points are known to be roughly in a plane shape.
The SVD-based method is more numerically stable when the plane is far away from the origin, because the LLS method would need to store and process more significant digits in such cases.
The LLS method can detect outliers by checking the dot product residual between each point and the estimated normal vector, whereas the SVD-based method can detect outliers by checking if the smallest eigenvalue of the covariance matrix is significantly smaller than the two larger eigenvalues (i.e. checking the shape of the covariance matrix).
We finally provide a test case in C++ and MATLAB.
// Test case in C++ (using LLS fitting method)
matA(0,0) = 5.4637; matA(0,1) = 10.3354; matA(0,2) = 2.7203;
matA(1,0) = 5.8038; matA(1,1) = 10.2393; matA(1,2) = 2.7354;
matA(2,0) = 5.8565; matA(2,1) = 10.2520; matA(2,2) = 2.3138;
matA(3,0) = 6.0405; matA(3,1) = 10.1836; matA(3,2) = 2.3218;
matA(4,0) = 5.5537; matA(4,1) = 10.3349; matA(4,2) = 1.8796;
// With this sample data, LLS fitting method can produce the following result
// fitted normal vector = (-0.0231143, -0.0838307, -0.00266429)
// unit normal vector = (-0.265682, -0.963574, -0.0306241)
// D = 11.4943
% Test case in MATLAB (using SVD-based method)
points = [5.4637 10.3354 2.7203;
5.8038 10.2393 2.7354;
5.8565 10.2520 2.3138;
6.0405 10.1836 2.3218;
5.5537 10.3349 1.8796]
covariance = cov(points)
[V, D] = eig(covariance)
normal = V(:, 1) % pick the eigenvector that corresponds to the smallest eigenvalue
% normal = (0.2655, 0.9636, 0.0306)