I am trying to reproduce the degree-3 or degree-4 3D curves typically found in parametric CAD programs like Rhino or AutoCAD, which take any number of 3D points to create long curves. I've found that three.js has cubic (degree-3) and quadratic (degree-2) Bezier curves available, but they take exactly four and three vectors, respectively. I'd like to create curves with 10 or more inputs, not just 3 or 4. I've also found that three.js has 'Path', which allows building a 2D curve of mixed-degree segments using the .bezierCurveTo() or .quadraticCurveTo() methods.
So my question:
Is there currently a way to construct long chains of CubicBezierCurve3 curves that join smoothly? Ideally with a constructor that takes a simple array of vertices?
If I need to implement this myself, where is the best place to start? I'm thinking the .quadraticCurveTo() method could be extended to use a z component and added to SplineCurve3? I'm not 100% clear on how the array of curves works in the 'Path' object.
THREE.Path.prototype.quadraticCurveTo = function( aCPx, aCPy, aX, aY ) {
    var args = Array.prototype.slice.call( arguments );
    var lastargs = this.actions[ this.actions.length - 1 ].args;
    var x0 = lastargs[ lastargs.length - 2 ];
    var y0 = lastargs[ lastargs.length - 1 ];
    var curve = new THREE.QuadraticBezierCurve( new THREE.Vector2( x0, y0 ),
                                                new THREE.Vector2( aCPx, aCPy ),
                                                new THREE.Vector2( aX, aY ) );
    this.curves.push( curve );
    this.actions.push( { action: THREE.PathActions.QUADRATIC_CURVE_TO, args: args } );
};
Thanks for your help!
Thanks to karatedog and fang for your in-depth answers. While searching for more information about B-spline curves, I stumbled upon an extra library for three.js, NURBS, which is exactly what I needed. Upon closer inspection of the THREE.NURBSCurve() constructor in this library, it's implemented exactly as fang described: with arrays of both control points and knots. The knots are defined similarly to the method described above. I'm marking fang's answer as correct, but I wanted to add the link to this pre-existing library as well, so any n00bs like myself can use it :)
If you are fine with using a high-degree Bezier curve, then you can implement it using the De Casteljau algorithm. The link in karatedog's answer provides a good source for this algorithm. If you want to stick with a degree-3 polynomial curve with many control points, a B-spline curve will be a good choice. B-spline curves can be implemented using the Cox-de Boor algorithm; you can find plenty of references on the internet. A B-spline curve definition requires a degree, control points and a knot vector. If you want your function to simply take an array of 3D points, you can set degree = 3 and internally define the knot vector as
[0, 0, 0, 0, 1/(N-3), 2/(N-3), ..., 1, 1, 1, 1]
where N = the number of control points.
For example,
N=4, knot vector=[0, 0, 0, 0, 1, 1, 1, 1],
N=5, knot vector=[0, 0, 0, 0, 1/2, 1, 1, 1, 1],
N=6, knot vector=[0, 0, 0, 0, 1/3, 2/3, 1, 1, 1, 1].
For the N=4 case, the B-spline curve is essentially the same as a cubic Bezier curve.
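If it helps, here is a minimal sketch of that approach in plain C++ (the Vec3 type and all names are mine, for illustration only): it builds the clamped knot vector described above and evaluates the curve with De Boor's algorithm, the standard recursive form of Cox-de Boor.
#include <array>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };  // illustrative stand-in for a 3D vector type

Vec3 operator*(double s, const Vec3& v) { return { s * v.x, s * v.y, s * v.z }; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Clamped uniform knot vector for a cubic B-spline with N control points:
// [0, 0, 0, 0, 1/(N-3), 2/(N-3), ..., 1, 1, 1, 1]  (N + 4 entries)
std::vector<double> makeKnots(std::size_t N) {
    const std::size_t degree = 3;
    std::vector<double> knots(N + degree + 1);
    for (std::size_t i = 0; i < knots.size(); ++i) {
        if (i < degree + 1)  knots[i] = 0.0;
        else if (i >= N)     knots[i] = 1.0;
        else                 knots[i] = double(i - degree) / double(N - degree);
    }
    return knots;
}

// Evaluate the spline at t in [0, 1] with De Boor's algorithm.
Vec3 deBoor(const std::vector<Vec3>& P, const std::vector<double>& knots, double t) {
    const std::size_t degree = 3;
    std::size_t k = degree;                        // find span: knots[k] <= t < knots[k+1]
    while (k + 1 < P.size() && knots[k + 1] <= t) ++k;
    std::array<Vec3, 4> d;                         // control points affecting this span
    for (std::size_t j = 0; j <= degree; ++j) d[j] = P[j + k - degree];
    for (std::size_t r = 1; r <= degree; ++r)      // triangle of affine combinations
        for (std::size_t j = degree; j >= r; --j) {
            double a = (t - knots[j + k - degree]) / (knots[j + 1 + k - r] - knots[j + k - degree]);
            d[j] = (1.0 - a) * d[j - 1] + a * d[j];
        }
    return d[degree];                              // the point on the curve
}
Sampling deBoor(points, makeKnots(points.size()), t) for t from 0 to 1 then traces a single degree-3 curve through any number of control points.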
I suggest implementing your own calculation algorithm: it is fairly easy, the learning process is short, and it is worth the time invested. Check this page: http://pomax.github.io/bezierinfo/
It describes a language-agnostic method for calculating Bézier curves with any number of control points, although a calculation that is specific to a certain number of control points (like cubic or quadratic) can be highly optimized.
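For example, a bare-bones De Casteljau evaluator for a Bézier curve of any degree could look like this (a sketch in C++; the Vec3 type and names are illustrative, not from any library):
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };  // illustrative

// De Casteljau: evaluate a Bezier curve of arbitrary degree at t in [0, 1]
// by repeatedly lerping between consecutive control points until one remains.
Vec3 deCasteljau(std::vector<Vec3> pts, double t) {
    for (std::size_t level = pts.size() - 1; level > 0; --level)
        for (std::size_t i = 0; i < level; ++i) {
            pts[i].x += t * (pts[i + 1].x - pts[i].x);
            pts[i].y += t * (pts[i + 1].y - pts[i].y);
            pts[i].z += t * (pts[i + 1].z - pts[i].z);
        }
    return pts[0];  // the surviving point lies on the curve
}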
For example, in a 2D space with x in [0; 1] and y in [0; 1], for p = 4, intuitively I would place each point at a corner of the square.
But what can be the general algorithm?
Edit: The algorithm needs modification if the dimensions are not orthogonal to each other
To uniformly place the points as described in your example you could do something like this:
var combinedSize = 0
for each dimension d in d0..dn {
    combinedSize += d.length
}

val listOfDistancesBetweenPointsAlongEachDimension = new List
for each dimension d in d0..dn {
    val percentageOfWholeDimensionSize = d.length / combinedSize
    val pointsToPlaceAlongThisDimension = percentageOfWholeDimensionSize * numberOfPoints
    listOfDistancesBetweenPointsAlongEachDimension[d.index] = d.length / (pointsToPlaceAlongThisDimension - 1)
}
Run on your example it gives:
combinedSize = 2
percentageOfWholeDimensionSize = 1 / 2
pointsToPlaceAlongThisDimension = 0.5 * 4
listOfDistancesBetweenPointsAlongEachDimension[0] = 1 / (2 - 1)
listOfDistancesBetweenPointsAlongEachDimension[1] = 1 / (2 - 1)
note: The minus 1 deals with the inclusive interval, allowing points at both endpoints of the dimension
2D case
In 2D (n=2) the solution is to place your p points evenly on some circle. If you also want to define the distance d between points, then the circle should have a radius of roughly:
2*Pi*r = ~p*d
r = ~(p*d)/(2*Pi)
To be more precise you should use the circumference of a regular p-gon instead of the circle circumference (the sketch below does exactly that). Or you can compute the distance between the produced points and scale up/down as needed instead.
So each point p(i) can be defined as:
p(i).x = r*cos((i*2.0*Pi)/p)
p(i).y = r*sin((i*2.0*Pi)/p)
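A small sketch of that refinement in C++ (names are mine): using the chord length of a regular p-gon, d = 2*r*sin(pi/p), gives the exact radius directly, so no rescaling pass is needed.
#include <cmath>
#include <vector>

struct Pt2 { double x, y; };  // illustrative

// Place p points on a circle so neighbouring points are exactly d apart:
// the chord of a regular p-gon satisfies d = 2*r*sin(pi/p), which yields
// the exact radius the circumference formula above only approximates.
std::vector<Pt2> pointsOnCircle(int p, double d) {
    const double pi = 3.14159265358979323846;
    const double r = d / (2.0 * std::sin(pi / p));
    std::vector<Pt2> pts(p);
    for (int i = 0; i < p; ++i) {
        double angle = (i * 2.0 * pi) / p;
        pts[i] = { r * std::cos(angle), r * std::sin(angle) };
    }
    return pts;
}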
3D case
Just use a sphere instead of a circle.
ND case
Use an ND hypersphere instead of a circle.
So your question boils down to placing p "equidistant" points on an n-D hypersphere (either its surface or its volume). As you can see, the 2D case is simple, but in 3D this starts to be a problem. See:
Make a sphere with equidistant vertices
sphere subdivision triangulation
As you can see there are quite a few approaches to this (and many more besides, some even using a Fibonacci-sequence-generated spiral), which are more or less hard to grasp or implement.
However, if you want to generalize this to ND space you need to choose a general approach. I would try to do something like this (a rough sketch follows the steps below):
Place p uniformly distributed points inside a bounding hypersphere
each point should have position, velocity and acceleration vectors. You can also place the points randomly (just ensure none are at the same position)...
For each point compute its acceleration
each point should repel every other point (the opposite of gravity).
update positions
just do a Newton-d'Alembert physics simulation in ND. Do not forget to include some dampening of speed so the simulation stops in time. Bound the position and speed to the sphere so points will not cross its border (for example by reflecting their speed inwards).
loop from step 2 until the max speed of any point falls below some threshold
This will more or less accurately place the p points on the circumference of the ND hypersphere, giving you a minimal distance d between them. If there is some special dependency between n and p then there might be better configurations than this, but for arbitrary numbers I think this approach should be safe enough.
Now by modifying the rules in step 2 you can achieve two different outcomes: one filling the hypersphere's surface (by placing a massive negative mass at its center) and the other filling its volume. The radius also differs between the two cases; for one you need to use the surface and for the other the volume...
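A stripped-down sketch of steps 1-4 in C++ (all names and coefficients are mine and would need tuning; instead of the full damped Newton-d'Alembert integration it simply moves each point along the net repulsion force and re-projects it onto the unit hypersphere, which covers the surface-filling variant):
#include <cmath>
#include <cstdlib>
#include <vector>

// Illustrative sketch: p points repelling each other on the unit hypersphere
// in n dimensions; a fixed iteration budget stands in for the speed threshold.
std::vector<std::vector<double>> placeOnHypersphere(int p, int n) {
    auto normalize = [](std::vector<double>& v) {
        double len = 0;
        for (double c : v) len += c * c;
        len = std::sqrt(len);
        for (double& c : v) c /= len;
    };
    std::vector<std::vector<double>> pos(p, std::vector<double>(n));
    for (auto& pt : pos) {                         // random start, none coincident
        for (double& c : pt) c = std::rand() / double(RAND_MAX) - 0.5;
        normalize(pt);                             // project onto the sphere
    }
    const double step = 0.01;                      // illustrative step size
    for (int iter = 0; iter < 2000; ++iter) {
        std::vector<std::vector<double>> force(p, std::vector<double>(n, 0.0));
        for (int i = 0; i < p; ++i)
            for (int j = 0; j < p; ++j) {
                if (i == j) continue;
                double d2 = 1e-12;                 // avoid division by zero
                for (int k = 0; k < n; ++k) {
                    double d = pos[i][k] - pos[j][k];
                    d2 += d * d;
                }
                for (int k = 0; k < n; ++k)        // inverse-square repulsion
                    force[i][k] += (pos[i][k] - pos[j][k]) / (d2 * std::sqrt(d2));
            }
        for (int i = 0; i < p; ++i) {
            for (int k = 0; k < n; ++k) pos[i][k] += step * force[i][k];
            normalize(pos[i]);                     // keep the points on the sphere
        }
    }
    return pos;
}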
Here is an example of a similar simulation used to solve a geometry problem:
How to implement a constraint solver for 2-D geometry?
Here is a preview of the 3D surface case (image omitted). The number on top is the max abs speed of the particles, used to determine when the simulation has stopped, and the white-ish lines are speed vectors. You need to carefully select the acceleration and dampening coefficients so the simulation converges quickly.
I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs - which point from one set represents which one in the other. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is minimal (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and may be omitted for the sake of simplicity and speed). So my question is: is this solved and available somewhere on the internet? Personally, I would use the least squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use an existing solution if there is one. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it myself; I don't want to bother you so much.
SOLUTION: (from your answers)
For me it's the Kabsch algorithm;
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need scale. The scale values from the SVD are not understandable to me; when I need a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which is not helping at all.
Since you are already using Kabsch algorithm, just have a look at Umeyama's paper which extends it to get scale. All you need to do is to get the standard deviation of your points and calculate scale as:
(1/sigma^2)*trace(D*S)
where D is the diagonal matrix from the SVD in the rotation estimation and S is either the identity matrix or a [1 1 -1] diagonal matrix, depending on the sign of the determinant of V*U^T (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by ±1 (depending on the sign of that determinant), sum the elements and divide by sigma^2, the variance of your points, to get the scale.
You can recycle the following code, which uses the Eigen library:
typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. for other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d
/**
 * @brief rigidly aligns two sets of poses
 *
 * This calculates such a relative pose <tt>R, t</tt>, such that:
 *
 * @code
 * _TyVector v_pose = R * r_vertices[i] + t;
 * double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 * @endcode
 *
 * The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 * @param[in] r_vertices is a set of vertices to be aligned
 * @param[in] r_tar_vertices is a set of vertices to align to
 *
 * @return Returns a relative pose that rigidly aligns the two given sets of poses.
 *
 * @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
    const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
    _ASSERTE(r_tar_vertices.size() == r_vertices.size());
    const size_t n = r_vertices.size();

    Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        v_center_tar3 += r_tar_vertices[i];
        v_center3 += r_vertices[i];
    }
    v_center_tar3 /= double(n);
    v_center3 /= double(n);
    // calculate centers of positions, potentially extend to 3D

    double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
    Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
        Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
        // get both vertices

        f_sd2 += v_vert_i.squaredNorm();
        f_sd2_tar += v_vert_i_tar.squaredNorm();
        // accumulate squared standard deviation (only one of those is really needed)

        t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
        // accumulate covariance
    }
    // calculate the covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    // calculate the SVD

    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    // compute the rotation

    double f_det = R.determinant();
    Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
    // calculate determinant of V*U^T to disambiguate rotation sign

    if(f_det < 0)
        R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    // recompute the rotation part if the determinant was negative

    R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
    // renormalize the rotation (not needed but gives slightly more orthogonal transformations)

    double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
    double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
    // calculate the scale

    R *= f_inv_scale;
    // apply scale

    Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
    // want to align center with ground truth

    return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation is available in Eigen (http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60), based on the paper http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:
P * X = Q
To find proper scale/rotation we just need to solve this matrix equation, find X, then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transpose of P (to get rid of the non-square matrices), then multiply both sides by the inverse of P^T * P:
P^T * P * X = P^T * Q
X = (P^T * P)^-1 * P^T * Q
Applying singular value decomposition to the matrix X gives two rotation matrices and a matrix with scale factors:
X = U * S * V^T
Here S is a diagonal matrix with scale factors (one scale for each coordinate); U and V are rotation matrices: one properly rotates the points so that they may be scaled along the coordinate axes, the other rotates them once more to align their orientation with the second set of points.
Example (2D points are used for simplicity):
P =  1  2      Q =   7.5391    4.3455
     2  3            12.9796    5.8897
    -2  1            -4.5847    5.3159
    -1 -6           -15.9340  -15.5511
After solving the equation:
X =  3.3417  -1.2573
     2.0987   2.8014
After SVD decomposition:
U = -0.7317  -0.6816
    -0.6816   0.7317

S =  4  0
     0  3

V = -0.9689  -0.2474
    -0.2474   0.9689
Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.
If sets of points are scaled uniformly (scale factor is equal by each axis), this procedure may be significantly simplified.
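For illustration, here is the above procedure sketched in 2D with Eigen (the library other answers on this page already use); the function name is mine, and as noted the normal-equations route is simple but probably not numerically stable:
#include <iostream>
#include <Eigen/Dense>

// Sketch: rows of P and Q are corresponding points, already translated so
// that both centroids sit at the origin.
void decomposeTransform(const Eigen::MatrixX2d& P, const Eigen::MatrixX2d& Q) {
    // Least-squares solution of P * X = Q via the normal equations.
    Eigen::Matrix2d X = (P.transpose() * P).inverse() * (P.transpose() * Q);

    // X = U * S * V^T: two rotations and a diagonal matrix of per-axis scales.
    Eigen::JacobiSVD<Eigen::Matrix2d> svd(X, Eigen::ComputeFullU | Eigen::ComputeFullV);
    std::cout << "U =\n" << svd.matrixU()
              << "\nscales = " << svd.singularValues().transpose()
              << "\nV =\n" << svd.matrixV() << "\n";
}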
Just use the Kabsch algorithm to get the translation/rotation values, then apply that translation and rotation (so the centroids coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression; the linear regression coefficient is exactly the scale factor (a sketch follows).
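One way to read that, sketched in C++ with Eigen (names are mine): after centering both sets and rotating one of them, the no-intercept least-squares slope of target coordinates against source coordinates is the uniform scale.
#include <cstddef>
#include <vector>
#include <Eigen/Dense>

// Scale as a no-intercept least-squares slope, summed over all points and
// all three coordinates at once (assumes both sets are already centered and
// one of them rotated by the Kabsch step above).
double estimateScale(const std::vector<Eigen::Vector3d>& src,
                     const std::vector<Eigen::Vector3d>& dst) {
    double num = 0, den = 0;
    for (std::size_t i = 0; i < src.size(); ++i) {
        num += src[i].dot(dst[i]);
        den += src[i].dot(src[i]);
    }
    return num / den;
}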
A good explanation: Finding optimal rotation and translation between corresponding 3D points
The code is in Matlab but it's trivial to convert to OpenCV using the cv::SVD function.
You might want to try ICP (Iterative closest point).
Given two sets of 3d points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a lightweight C++ implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes Analysis. It works by superimposing the objects on top of each other and estimating the transformation from that setting. It has been used in the context of ICP many times. In fact, your preference, the Kabsch algorithm, is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, while being quite efficient. A Matlab implementation is also available.
Scale can be inferred without SVD if your points are uniformly scaled in all directions (I could not make sense of SVD's scale matrix either). Here is how I solved the same problem:
Measure distances of each point to other points in the point cloud to get a 2d table of distances, where entry at (i,j) is norm(point_i-point_j). Do the same thing for the other point cloud, so you get two tables -- one for original and the other for reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table has all values equal to each other, and that common value is the scale.
The median value of the divisions should be pretty close to the scale you are looking for. The mean value is also close, but I chose median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: If there are too many points in the point clouds to find distances between all of them, then a smaller subset of distances will work, too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g when one point cloud is directly derived from the other by just rotating it.
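A short sketch of this idea in C++ with Eigen (names are mine; consecutive pairs serve as the smaller subset the tip mentions):
#include <algorithm>
#include <cstddef>
#include <vector>
#include <Eigen/Dense>

// Scale as the median ratio of corresponding pairwise distances.
double scaleFromDistances(const std::vector<Eigen::Vector3d>& original,
                          const std::vector<Eigen::Vector3d>& reconstructed) {
    std::vector<double> ratios;
    for (std::size_t i = 0; i + 1 < original.size(); ++i) {
        double dOrig = (original[i] - original[i + 1]).norm();
        double dRec  = (reconstructed[i] - reconstructed[i + 1]).norm();
        if (dRec > 0.0) ratios.push_back(dOrig / dRec);
    }
    std::nth_element(ratios.begin(), ratios.begin() + ratios.size() / 2, ratios.end());
    return ratios[ratios.size() / 2];  // median, robust to outliers
}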
You can also use the ScaleRatio ICP proposed by BaoweiLin. The code can be found on GitHub.
Here is the deal. I have multiple points (X,Y) that form an 'ellipse-like' shape.
I would like to evaluate/fit the 'best' ellipse possible and get its properties (a,b,F1,F2), or just the center of the ellipse.
Any ideas/leads would be appreciated.
Gilad.
There's a Matlab function fit_ellipse that can do the job. There's also this paper on methods for orthogonal distance fitting of ellipses. A web search for orthogonal ellipse fit will probably turn up a lot of other resources as well.
The ellipse fitting method proposed by:
Z. L. Szpak, W. Chojnacki, and A. van den Hengel.
Guaranteed ellipse fitting with a confidence region and an uncertainty measure for centre, axes, and orientation.
J. Math. Imaging Vision, 2015.
may be of interest to you. They provide estimates of both algebraic and geometric ellipse parameters, together with covariance matrices that express the uncertainty of the parameter estimates. They also provide a means of computing a planar 95% confidence region associated with the estimate, which allows one to visualise the uncertainty in the ellipse fit.
A pre-print version of the paper is available on the authors' websites (http://cs.adelaide.edu.au/~wojtek/publicationsWC.html).
A MATLAB implementation of the method is also available for download:
https://sites.google.com/site/szpakz/source-code/guaranteed-ellipse-fitting-with-a-confidence-region-and-an-uncertainty-measure-for-centre-axes-and-orientation
I will explain how I would approach the problem. I would suggest a hill-climbing approach. First compute the center of gravity of the points as a starting point and choose two values for a and b in some way (probably arbitrary positive values will do). You need a fit function, and I would suggest one that returns the number of points lying on (or close enough to) a given ellipse:
int fit(x, y, a, b)
    int res := 0
    for point in points
        if point_almost_on_ellipse(x, y, a, b, point)
            res = res + 1
        end_if
    end_for
    return res
Now start with some step. I would choose a value big enough to be sure the best center of the ellipse will never be more than step away from the first point. Choosing such a big value is not necessary, but the slowest part of the algorithm is the time it takes to get close to the best center, so a bigger value is better, I think.
So now we have some initial point (x, y), some initial values of a and b, and an initial step. The algorithm iteratively chooses the best of the neighbours of the current position if any neighbour is better than it, or halves the step otherwise. Here by 'best' I mean using the fit function. A position is defined by four values (x, y, a, b), and it has 8 neighbours: (x±step, y, a, b), (x, y±step, a, b), (x, y, a±step, b), (x, y, a, b±step). (If the results are not good enough you can add more neighbours by also going along the diagonals, for instance (x±step, y±step, a, b) and so on.) Here is how you do that:
neighbours = [[-1, 0, 0, 0], [1, 0, 0, 0], [0, -1, 0, 0], [0, 1, 0, 0],
              [0, 0, -1, 0], [0, 0, 1, 0], [0, 0, 0, -1], [0, 0, 0, 1]]

iterate (cx, cy, ca, cb, step)
    current_fit = fit(cx, cy, ca, cb)
    best_neighbour = []
    best_fit = current_fit
    for neighbour in neighbours
        tx = cx + neighbour[0]*step
        ty = cy + neighbour[1]*step
        ta = ca + neighbour[2]*step
        tb = cb + neighbour[3]*step
        tfit = fit(tx, ty, ta, tb)
        if (tfit > best_fit)
            best_fit = tfit
            best_neighbour = [tx, ty, ta, tb]
        end_if
    end_for
    if best_neighbour.size == 4
        cx := best_neighbour[0]
        cy := best_neighbour[1]
        ca := best_neighbour[2]
        cb := best_neighbour[3]
    else
        step = step * 0.5
    end_if
And you continue iterating until the value of step is smaller than a given threshold (for instance 1e-6). I have written everything in pseudocode as I am not sure which language you want to use. It is not guaranteed that the answer found this way will be optimal, but I am pretty sure it will be a good enough approximation.
Here is an article about hill climbing.
I think the Wild Magic library contains a function for ellipse fitting. There is also an article with a description of the method.
The problem is to define "best". What is best in your case? The ellipse with the smallest area which contains n% of the points?
If you define "best" in terms of probability, you can simply use the covariance matrix of your points, and compute the error ellipse.
An error ellipse for this "multivariate Gaussian distribution" would then contain the points corresponding to whatever confidence interval you decide.
Many computing packages can compute the covariance, with its corresponding eigenvalues and eigenvectors. The angle of the ellipse is the angle between the x axis and the eigenvector corresponding to the largest eigenvalue. The semi-axes are proportional to the square roots of the eigenvalues.
If your routine returns everything normalized (which it should), then you can decide by what factor to multiply everything to obtain an alpha-confidence interval.
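A sketch of this recipe with Eigen (names are mine; the 5.991 factor is the 95% chi-square quantile with 2 degrees of freedom):
#include <cmath>
#include <iostream>
#include <vector>
#include <Eigen/Dense>

// Sketch: 95% error ellipse of a 2D point cloud from its covariance matrix.
void errorEllipse(const std::vector<Eigen::Vector2d>& pts) {
    Eigen::Vector2d mean = Eigen::Vector2d::Zero();
    for (const auto& p : pts) mean += p;
    mean /= double(pts.size());

    Eigen::Matrix2d cov = Eigen::Matrix2d::Zero();
    for (const auto& p : pts) cov += (p - mean) * (p - mean).transpose();
    cov /= double(pts.size() - 1);

    // Eigen sorts eigenvalues of self-adjoint matrices in increasing order.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> es(cov);
    double a = std::sqrt(5.991 * es.eigenvalues()(1));  // semi-major axis
    double b = std::sqrt(5.991 * es.eigenvalues()(0));  // semi-minor axis
    double angle = std::atan2(es.eigenvectors()(1, 1), es.eigenvectors()(0, 1));
    std::cout << "center " << mean.transpose()
              << "  a " << a << "  b " << b << "  angle " << angle << "\n";
}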
I know that transforming a square into a trapezoid is a projective transformation (linear in homogeneous coordinates), and can be done using a projective matrix, but I'm having a little trouble figuring out how to construct the matrix.
Using the projective matrix to translate, scale, rotate, and shear is straightforward. Is there a simple projective matrix which will transform a square to a trapezoid?
a, b, c, d are the four corners of your 2D square. They are expressed in homogeneous coordinates, so they are 3x1 matrices.
alpha, beta, gamma, delta are the four corners of your 2D trapezoid, likewise expressed in homogeneous coordinates as 3x1 matrices.
H is the 3x3 matrix you are looking for; it is also called a homography:
    h1 h2 h3
H = h4 h5 h6
    h7 h8 h9
H maps a,b,c,d into alpha, beta, gamma, delta and so you have the following four equations
alpha=H*a
beta=H*b
gamma=H*c
delta=H*d
Assuming you know a, b, c, d and alpha, beta, gamma, delta, you can solve the previous system of four equations for the nine unknowns h1, h2, h3, h4, h5, h6, h7, h8, h9.
Here I have just described a "raw" solution to the problem that, in principle, can work. For a detailed explanation of the above-mentioned method you can see, for example, this page http://www.corrmap.com/features/homography_transformation.php where they put h9=1 (since H can be expressed with just 8 parameters) and then solve a linear system of eight equations in eight unknowns. You can find a similar explanation in section 2 of the thesis Homography Estimation by Elan Dubrofsky.
Another explanation is Using Projective Geometry to Correct a Camera by David Austin in the 2013 March issue of Feature Column from the AMS.
The above-mentioned method, with its disadvantages, is described in chapter 4, "Estimation - 2D Projective Transformation", of the second edition of Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman, where they also describe different and better algorithms; you can check this link http://www.cse.iitd.ac.in/~suban/vision/geometry/node24.html which seems to follow the same book.
You can find another explanation of the homography in section 15.1.4, "Projective transformation model", of the book Computer Vision: Models, Learning, and Inference by Simon J.D. Prince. Algorithm 15.4, "maximum likelihood learning of projective transformation (homography)", is outlined in his Algorithms booklet; the problem is solved by means of a non-linear minimization.
Maybe you could use a quadrilateral? See my answer here:
https://stackoverflow.com/a/12820877/202451
Then you will have full control over each point and can easily make any four-cornered shape. :)
Java implementation with minimal dependencies
For those with limited knowledge and time looking for a quick and dirty solution there is a working and quite reliable Java implementation in the Wii-interact project.
The transformation is in the Homography source file. It boils down to constructing and solving the matrix:
/**
* Please note that Dr. John Zelle assisted us in developing the code to
* handle the matrices involved in solving for the homography mapping.
*
**/
Matrix A = new Matrix(new double[][]{
    {x1, y1, 1, 0, 0, 0, -xp1*x1, -xp1*y1},
    {0, 0, 0, x1, y1, 1, -yp1*x1, -yp1*y1},
    {x2, y2, 1, 0, 0, 0, -xp2*x2, -xp2*y2},
    {0, 0, 0, x2, y2, 1, -yp2*x2, -yp2*y2},
    {x3, y3, 1, 0, 0, 0, -xp3*x3, -xp3*y3},
    {0, 0, 0, x3, y3, 1, -yp3*x3, -yp3*y3},
    {x4, y4, 1, 0, 0, 0, -xp4*x4, -xp4*y4},
    {0, 0, 0, x4, y4, 1, -yp4*x4, -yp4*y4}
});

Matrix XP = new Matrix(new double[][]{
    {xp1}, {yp1}, {xp2}, {yp2}, {xp3}, {yp3}, {xp4}, {yp4}
});

Matrix P = A.solve(XP);

transformation = new Matrix(new double[][]{
    {P.get(0, 0), P.get(1, 0), P.get(2, 0)},
    {P.get(3, 0), P.get(4, 0), P.get(5, 0)},
    {P.get(6, 0), P.get(7, 0), 1}
});
Usage: the following method does the final transformation:
public Point2D.Double transform(Point2D.Double point) {
    Matrix p = new Matrix(new double[][]{{point.getX()}, {point.getY()}, {1}});
    Matrix result = transformation.times(p);
    double z = result.get(2, 0);
    return new Point2D.Double(result.get(0, 0) / z, result.get(1, 0) / z);
}
The Matrix class dependency comes from JAMA: Java Matrix Package
License
Wii-interact GNU GPL v3
JAMA public domain
I wish to determine when a point (the mouse position) is on, or near, a curve defined by a series of B-spline control points.
The information I will have for the B-spline is the list of n control points (in x,y coordinates). The list of control points can be of any length (>= 4) and defines a B-spline consisting of (n−1)/3 cubic Bezier curves. I wish to set a parameter k (in pixels) for the distance defined to be "near" the curve. If the mouse position is within k pixels of the curve then I need to return true, otherwise false.
Is there an algorithm that gives me this information? The solution does not need to be precise - I am working to a tolerance of 1 pixel (or coordinate).
I have found the following questions seem to offer some help, but do not answer my exact question. In particular the first reference seems to be a solution only for 4 control points, and does not take into account the nearness factor I wish to define.
Position of a point relative to a Bezier curve
Intersection between bezier curve and a line segment
EDIT:
An example curve:
e, 63.068, 127.26
29.124, 284.61
25.066, 258.56
20.926, 212.47
34, 176
38.706, 162.87
46.556, 149.82
54.393, 138.78
The description of the format is: "Every edge is assigned a pos attribute, which consists of a list of 3n + 1 locations. These are B-spline control points: points p0, p1, p2, p3 are the first Bezier spline, p3, p4, p5, p6 are the second, etc. Points are represented by two integers separated by a comma, representing the X and Y coordinates of the location specified in points (1/72 of an inch). In the pos attribute, the list of control points might be preceded by a start point ps and/or an end point pe. These have the usual position representation with a "s," or "e," prefix, respectively."
EDIT2: Further explanation of the "e" point (and s if present).
In the pos attribute, the list of control points might be preceded by a start
point ps and/or an end point pe. These have the usual position representation with a
"s," or "e," prefix, respectively. A start point is present if there is an arrow at p0.
In this case, the arrow is from p0 to ps, where ps is actually on the node’s boundary.
The length and direction of the arrowhead is given by the vector (ps −p0). If there
is no arrow, p0 is on the node’s boundary. Similarly, the point pe designates an
arrow at the other end of the edge, connecting to the last spline point.
You can do this analytically, but a little math is needed.
A Bezier curve can be expressed in terms of the Bernstein basis. Here I'll use Mathematica, which provides good support for the math involved.
So if you have the points:
pts = {{0, -1}, {1, 1}, {2, -1}, {3, 1}};
The eq. for the Bezier curve is:
f[t_] := Sum[pts[[i + 1]] BernsteinBasis[3, i, t], {i, 0, 3}];
Keep in mind that I am using the Bernstein basis for convenience, but ANY parametric representation of the Bezier curve would do.
Plotting f[t] for t from 0 to 1 shows the curve (plot omitted).
Now to find the minimum distance to a point (say {3,-1}, for example) you have to minimize the function:
d[t_] := Norm[{3, -1} - f[t]];
For doing that you need a minimization algorithm. I have one handy, so:
NMinimize[{d[t], 0 <= t <= 1}, t]
gives:
{1.3475, {t -> 0.771653}}
And that is it.
HTH!
Edit Regarding your edit "B-spline consisting of (n−1)/3 cubic Bezier curves":
If you constructed a piecewise B-spline representation you should iterate over all segments to find the minima. If you joined the pieces on a continuous parameter, then this same approach will do.
Edit
Solving your curve. I disregard the first point because I really didn't understand what it is.
I solved it using standard B-splines instead of the Mathematica features, for the sake of clarity.
Clear["Global`*"];
(*first define the points *)
pts = {{29.124, 284.61}, {25.066, 258.56}, {20.926, 212.47}, {34, 176},
       {38.706, 162.87}, {46.556, 149.82}, {54.393, 138.78}};
(*define a bspline template function *)
b[t_, p0_, p1_, p2_, p3_] :=
(1-t)^3 p0 + 3 (1-t)^2 t p1 + 3 (1-t) t^2 p2 + t^3 p3;
(* define two bsplines *)
b1[t_] := b[t, pts[[1]], pts[[2]], pts[[3]], pts[[4]]];
b2[t_] := b[t, pts[[4]], pts[[5]], pts[[6]], pts[[7]]];
(* Lets see the curve *)
Show[Graphics[{Red, Point[pts], Green, Line[pts]}, Axes -> True],
ParametricPlot[BSplineFunction[pts][t], {t, 0, 1}]]
(plot of the curve omitted)
(*Now define the distance from any point u to a point in our Bezier*)
d[u_, t_] := If[(0 <= t <= 1), Norm[u - b1[t]], Norm[u - b2[t - 1]]];
(*Define a function that finds the minimum distance from any point u to our curve*)
h[u_] := NMinimize[{d[u, t], 0.0001 <= t <= 1.9999}, t];
(*Lets test it ! *)
Plot3D[h[{x, y}][[1]], {x, 20, 55}, {y, 130, 300}]
This plot is the (minimum) distance from any point in space to our curve; of course the value on the curve itself is zero (plot omitted).
First, render the curve to a bitmap (black and white) with your favourite algorithm. Then, whenever you need, determine the nearest pixel to the mouse position using information from this question. You can modify the searching function so that it returns the distance, which you can easily compare with your requirements. This method gives you the distance with a tolerance of 1-2 pixels, which will do, I guess.
Definition: distance from a point to a line segment = distance from the original point to the closest point still on the segment.
Assumption: an algorithm to compute the distance from a point to a segment is known (e.g. intersect the segment with the normal to the segment passing through the original point; if the intersection falls outside the segment, pick the closest end-point of the segment).
use the De Casteljau algorithm and subdivide your cubics until you get a good enough daisy-chain of linear segments (supplementary info can be found in the "Bezier curve flattening" section of the Bezier primer).
consider the minimum of the distances between your point and the resulting segments as the distance from your point to the curve. Repeat for all the curves in your set.
Refinement of point 2: don't compute the actual distance, but the square of it; the minimum squared distance is good enough and saves a sqrt call per segment (see the sketch below).
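A sketch of the point-to-segment primitive assumed above, returning the squared distance per the refinement (plain C++; the Pt type is illustrative):
#include <algorithm>

struct Pt { double x, y; };  // illustrative

// Squared distance from p to segment ab: project p onto the segment's line,
// clamp the parameter to [0, 1] (the "closest end-point" case above), then
// measure against the clamped foot point. Compare with k*k, no sqrt needed.
double squaredDistToSegment(Pt p, Pt a, Pt b) {
    double abx = b.x - a.x, aby = b.y - a.y;
    double apx = p.x - a.x, apy = p.y - a.y;
    double len2 = abx * abx + aby * aby;
    double t = (len2 > 0.0) ? std::clamp((apx * abx + apy * aby) / len2, 0.0, 1.0) : 0.0;
    double dx = apx - t * abx, dy = apy - t * aby;
    return dx * dx + dy * dy;
}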
Computation effort: empirically a cubic curve with a maximum extent (i.e. bounding box) of 200-300 results in about 64 line segments when flattened to a maximum tolerance of 0.5 (approx good enough for the naked eye).
Each De Casteljau step requires 12 divisions by 2 and 12 additions.
Flatness evaluation - 8 multiplications + 4 additions (if using the TaxiCab distance to evaluate a distance)
the evaluation of the point-to-segment distance requires at most 12 multiplications and 11 additions - but this will be a rare case in the context of Bezier flattening; I'd expect an average of 6 multiplications and 9 additions.
So, assuming a very bad case (100 straight segments/cubic), you finish in finding your distance with a cost of approx 2600 multiplications + 2500 additions per considered cubic.
Disclaimers:
don't ask me for a demonstration of the numbers in the computational effort evaluation above; I'll answer with "Use the source code" (note: Java implementation).
Other approaches may be possible and maybe less costly.
Regards,
Adrian Colomitchi