Calculating principal curvature directions on discrete mesh without quadratic fitting - computational-geometry

I am working with 2D surfaces embedded in 3D, with a discrete triangulation, and would like to calculate the principal curvature directions (the eigenvectors of the curvature tensor). What I already know is summarised in the following post: How to get principal curvature of a given mesh?. Basically, they talk about fitting points to a quadratic polynomial and then diagonalising the obtained quadratic form.
My question is: is there any faster way of finding the eigenvectors? I have to do this over and over again, hence the need for speed. It is easy to find the eigenvalues of the curvature tensor, since they can be recovered from the Gaussian curvature (computed via angular deficits) and the mean curvature (computed via the Laplacian). Are there any existing simpler algorithms for the eigenvectors?
PS: I am working in Python, if that helps.
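For concreteness, here is how I currently recover the eigenvalues from H and K (a minimal NumPy sketch; H and K are my per-vertex estimates):
import numpy as np

def principal_curvatures(H, K):
    # k1, k2 solve k^2 - 2*H*k + K = 0, hence k = H +/- sqrt(H^2 - K).
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))  # clamp negative noise
    return H + disc, H - disc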

When you know an eigenvalue e of a 3x3 matrix M, an eigenvector is given by the cross product of two columns of M - eI, which is low cost.
To avoid degenerate cases, it is better to choose the pair of columns that yields the longest vector, but this triples the cost.
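A minimal NumPy sketch of this trick (assuming M is symmetric, as a curvature tensor is, and that e is a simple eigenvalue):
import numpy as np

def eigenvector_for(M, e):
    # For symmetric M, the columns of A = M - e*I span the orthogonal
    # complement of the eigenspace of e, so the cross product of two
    # independent columns lies along the eigenvector.  Trying all three
    # pairs and keeping the longest result avoids degenerate pairs.
    A = M - e * np.eye(3)
    candidates = [np.cross(A[:, i], A[:, j])
                  for i, j in ((0, 1), (0, 2), (1, 2))]
    v = max(candidates, key=np.linalg.norm)
    return v / np.linalg.norm(v)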

To avoid quadratic surface fitting when estimating principal curvatures and directions at a point of a triangle mesh, one can use the vertex normals in addition to the vertex coordinates, as described in the article Estimating Curvatures and Their Derivatives on Triangle Meshes by Szymon Rusinkiewicz.
See this answer for more detail.
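For reference, a condensed sketch of the paper's per-face least-squares step (my own paraphrase in NumPy, not reference code; the per-vertex averaging and coordinate-frame alignment steps from the paper are omitted):
import numpy as np

def face_second_fundamental_form(p, n):
    # p: (3, 3) vertex positions of one triangle (one vertex per row)
    # n: (3, 3) unit vertex normals
    # Returns the symmetric 2x2 tensor [[e, f], [f, g]] in an
    # orthonormal frame (u, v) of the face plane.
    u = p[1] - p[0]
    u = u / np.linalg.norm(u)
    nf = np.cross(p[1] - p[0], p[2] - p[0])
    nf = nf / np.linalg.norm(nf)
    v = np.cross(nf, u)
    # Each edge, paired with the difference of its endpoint normals,
    # gives two linear equations in (e, f, g):
    #   [e f; f g] @ (edge.u, edge.v) = (dn.u, dn.v)
    edges = (p[2] - p[1], p[0] - p[2], p[1] - p[0])
    dns = (n[2] - n[1], n[0] - n[2], n[1] - n[0])
    A, b = [], []
    for edge, dn in zip(edges, dns):
        eu, ev = edge @ u, edge @ v
        A.append([eu, ev, 0.0]); b.append(dn @ u)
        A.append([0.0, eu, ev]); b.append(dn @ v)
    (e, f, g), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.array([[e, f], [f, g]])
The principal curvatures and directions are then the eigenpairs of the returned 2x2 tensor, e.g. via numpy.linalg.eigh.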

Related

Algorithm: How to smoothly interpolate/reconstruct sparse samples with noise?

This question is not directly related to a particular programming language but is an algorithmic question.
What I have is a lot of samples of a 2D function. The samples are at random locations, they are not uniformly distributed over the domain, the sample values contain noise and each sample has a confidence-weight assigned to it.
What I'm looking for is an algorithm to reconstruct the original 2D function from the samples, i.e. a function y' = G(x0, x1) that approximates the original well and smoothly interpolates areas where samples are sparse.
It goes in the direction of what scipy.interpolate.griddata does, but with the added difficulties that:
the sample values contain noise, meaning that samples should not just be interpolated; nearby samples should also be averaged in some way to cancel out the sampling noise.
the samples are weighted, so samples with higher weight should contribute more strongly to the reconstruction than those with lower weight.
scipy.interpolate.griddata seems to do a Delaunay triangulation and then use the barycentric coordinates of the triangles to interpolate values. This doesn't seem compatible with my requirements of weighting samples and averaging out noise, though.
Can someone point me in the right direction on how to solve this?
Based on the comments, the function is defined on a sphere. That simplifies life because your region is both well-studied and nicely bounded!
First, decide how many Spherical Harmonic functions you will use in your approximation. The fewer you use, the more you smooth out noise. The more you use, the more accurate it will be. But if you use any of a particular degree, you should use all of them.
And now you just impose the condition that the sum of the squares of the weighted errors should be minimized. That will lead to a system of linear equations, which you then solve to get the coefficients of each harmonic function.
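A minimal sketch of that weighted least-squares fit with SciPy (names are mine; note that scipy's sph_harm takes the azimuthal angle before the polar one):
import numpy as np
from scipy.special import sph_harm

def fit_sph_harm(polar, azimuth, y, w, lmax):
    # Weighted least-squares fit of noisy samples y on the sphere
    # with a real spherical-harmonic basis up to degree lmax.
    # polar in [0, pi], azimuth in [0, 2*pi), w = per-sample weights.
    cols = []
    for l in range(lmax + 1):
        for m in range(l + 1):
            Y = sph_harm(m, l, azimuth, polar)  # (m, l, azimuth, polar)
            cols.append(Y.real)
            if m > 0:
                cols.append(Y.imag)             # cos/sin parts: real basis
    A = np.column_stack(cols)
    # Minimizing sum(w * err^2) == ordinary least squares on rows
    # scaled by sqrt(w).
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef, A
The smoothed reconstruction at the sample points is then A @ coef, and evaluating the same basis at new angles interpolates everywhere else.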

Calculating the similarity of 2 sets of convex polygons?

I have generated 2 sets of convex polygons with two different algorithms. Every polygon in each set is described by an array of coordinates [n_points, xy_coords], so a square is described by a [4, 2] array, but a pentagon with rounded corners by an [80, 2] array, with the extra 75 points describing the curvature.
My goal is to quantify how similar the two sets of geometries are.
Can anyone recommend any methods of doing so?
So far I've come across:
Hamming Distance
Hausdorff distance
I would like to know what other robust measures of similarity exist for 2D polygons. The method ideally needs to be robust across convex polygons and give a measure of similarity between large sets (10,000+ each).
As I understand it, you are looking for shape similarity or shape analysis algorithms. You will find more robust methods here: https://www.cs.princeton.edu/courses/archive/spr00/cs598b/lectures/polygonsimilarity/polygonsimilarity.pdf
and
https://student.cs.uwaterloo.ca/~cs763/Projects/phil.pdf
Turning function;
Graph matching;
Shape signature.
Assuming that both polygons are aligned, centered and convex, you could try to assess similarity by computing the ratio of the area of the smaller polygon to the area of the convex hull of both polygons:
Ratio = Min(Area(A), Area(B)) / Area(ConvexHull(A, B))
The ratio will be 1 if both polygons are equal, and will approach 0 if they differ severely, like a point and a square.
The area of a polygon can be computed in O(N) time. See Area of polygon.
The convex hull can be computed in O(N log N). See Convex Hull Computation. It could be sped up to O(N) by merge-sorting the already sorted vertices of both polygons and applying the second phase of the Graham scan algorithm.
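A short sketch of this ratio with SciPy's Qhull wrapper (assuming, as above, that the polygons are already aligned and centered):
import numpy as np
from scipy.spatial import ConvexHull

def shoelace_area(P):
    # Area of a simple polygon given as an (n, 2) vertex array, O(N).
    x, y = P[:, 0], P[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def similarity_ratio(A, B):
    # Hull of the pooled vertices, O(N log N) via Qhull.  Note that
    # in 2-D, ConvexHull.volume is the enclosed area (.area is the
    # perimeter).
    hull = ConvexHull(np.vstack([A, B]))
    return min(shoelace_area(A), shoelace_area(B)) / hull.volume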

How to find the intersection point of a 3D curve and a 3D surface?

I am trying to find the intersection point of a curve and a 3D surface with no luck. The surface is in the shape of a cone, and the curve is hyperbolic, as are shown in the figure.
[Figure: the cone and the hyperbolic curve]
This simulates a ray hitting a certain surface. I tried to use the bisection method, but it doesn't seem to work. Then I tried Newton's method, but the results are still not good.
Is there any other good algorithms out there which are suitable for solving this kind of problem?
With the curve given in parametric form
x = fx(t)
y = fy(t)
z = fz(t)
and the surface by one equation of the form
g(x,y,z) = 0
just plug in the curve functions and bisection should work:
g(fx(t), fy(t), fz(t)) = 0
The only problem is to find suitable starting points t1 and t2 where g has opposite sign.
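A minimal sketch with SciPy (the cone and curve below are illustrative stand-ins for the asker's data; brentq is a robust bracketing root finder that can stand in for plain bisection):
import numpy as np
from scipy.optimize import brentq

def g(x, y, z):
    return x**2 + y**2 - z**2          # implicit cone, for illustration

def curve(t):
    return np.cosh(t), np.sinh(t), 2.0 + 0.5 * t   # hyperbolic curve

def q(t):                              # q(t) = g(fx(t), fy(t), fz(t))
    x, y, z = curve(t)
    return g(x, y, z)

# Find t1, t2 with opposite signs of q, then bisect between them.
t1, t2 = 0.0, 2.0
assert q(t1) * q(t2) < 0, "endpoints must bracket a sign change"
t_star = brentq(q, t1, t2)
print("intersection at", curve(t_star))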
Problem
You are searching for a curve-surface intersection algorithm. Note that both curves and surfaces can be represented either in implicit form or in parametric form. A surface in implicit form is defined by an equation F(x, y, z) = 0, which is a quadratic polynomial in x, y, z in the case of a conical surface. A surface in parametric form is defined by a point-valued function S(u, v) of its parameters (e.g. you can use the distance along the cone axis and the polar angle as parameters of a conical surface). A curve is usually described only in parametric form, as a function C(t) of a parameter t, which could be quadratic for a hyperbolic curve.
Implicit surface
The simplest case of all is to treat your problem as the intersection of a parametric curve with an implicit surface. In this case you can write down a single equation q(t) = F(C(t)) = 0 in a single variable t. Of course, Newton's iteration is not guaranteed to find all solutions in the general case, and bisection is only guaranteed to find a solution once you have two points where q(t) has opposite signs.
In your case q(t) is a quartic polynomial (after substituting the quadratic curve parametrization into the quadratic surface equation). It can theoretically be solved with Ferrari's analytic formula, but I strongly advise against that, because it is quite unstable numerically. You can apply any popular polynomial solver here, like the Jenkins-Traub algorithm or an eigenvalue algorithm applied to the companion matrix (also see this question). You can also use methods of interval mathematics: for example, you can recursively subdivide the domain interval of the parameter t into smaller pieces, while pruning all the pieces that surely do not contain zeros (interval arithmetic helps to detect such pieces).
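For instance, numpy.roots implements exactly the companion-matrix eigenvalue approach (the coefficients below are placeholders for the actual quartic obtained from your cone and curve):
import numpy as np

coeffs = [1.0, -3.0, 2.0, 1.0, -0.5]   # t^4 - 3t^3 + 2t^2 + t - 0.5
roots = np.roots(coeffs)               # eigenvalues of companion matrix
real_t = roots[np.abs(roots.imag) < 1e-9].real   # keep real solutions
print(sorted(real_t))                  # candidate intersection parameters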
Parametric surface
Now we can move on to the case where both the curve and the surface are represented parametrically. I do not know of any solution that could benefit from the fact that your surface is conical and your curve is hyperbolic, so you have to apply a general curve-surface intersection algorithm. Alternatively, you can fit an implicitly defined cone to your parametric surface, then use the solution above for the quartic polynomial roots.
A lot of reliable general intersection algorithms are based on the subdivision method (which is actually interval mathematics again). The general idea is to repeatedly divide the curve and the surface into smaller and smaller pieces. Pairs of pieces that surely do not intersect are dropped as soon as possible. At the end you'll have a set of small piece pairs tightly bounding your intersection points. You might want to run Newton's iteration from them in order to refine the intersection points.
Here is the outline of a sample algorithm:
1. Start with a single curve piece (the whole input curve) and a single surface piece (the whole surface), and one potential intersection pair (PIP) of these pieces.
2. Subdivide each curve piece into two halves (by parameter) and each surface piece into four quadrants (by both parameters).
3. For each old PIP, check all 8 pairs of curve half vs. surface quadrant. If a pair surely does not intersect, forget it. If it can intersect, save it as a new PIP.
4. Unless all pieces are small enough, repeat from step 2 with the new pieces and PIPs.
For each pair of a curve piece and a surface piece, you have to check whether they can potentially intersect, which can be done easily by checking their axis-aligned bounding boxes. Also, you can represent your curves and surfaces as NURBS, in which case you can use the convex hulls of the control points as tighter bounding volumes. A toy rendering of this outline is sketched below.
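The sketch samples each piece and bounds it by an axis-aligned box; sampled boxes are not rigorous bounds, so treat this as an illustration of the pruning idea only (a real solver would bound NURBS pieces by control-point convex hulls, as noted above):
import numpy as np

def aabb(points):
    # Axis-aligned bounding box of an (n, 3) point array.
    return points.min(axis=0), points.max(axis=0)

def overlap(a, b):
    return np.all(a[0] <= b[1]) and np.all(b[0] <= a[1])

def intersect(curve, surf, t0, t1, u0, u1, v0, v1, tol=1e-4, k=5):
    # Prune curve-piece/surface-piece pairs whose boxes are disjoint,
    # split the rest; curve(t) and surf(u, v) return 3-D points.
    ct = np.array([curve(t) for t in np.linspace(t0, t1, k)])
    sp = np.array([surf(u, v) for u in np.linspace(u0, u1, k)
                              for v in np.linspace(v0, v1, k)])
    if not overlap(aabb(ct), aabb(sp)):
        return []                                  # surely disjoint: drop
    if (t1 - t0) < tol and (u1 - u0) < tol and (v1 - v0) < tol:
        return [((t0 + t1) / 2, (u0 + u1) / 2, (v0 + v1) / 2)]
    tm, um, vm = (t0 + t1) / 2, (u0 + u1) / 2, (v0 + v1) / 2
    hits = []
    for ta, tb in ((t0, tm), (tm, t1)):            # 2 curve halves ...
        for ua, ub in ((u0, um), (um, u1)):        # ... x 4 surface quadrants
            for va, vb in ((v0, vm), (vm, v1)):
                hits += intersect(curve, surf, ta, tb, ua, ub, va, vb,
                                  tol, k)
    return hits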
Generally, there are tons of variations and improvements of this algorithm. I recommend the following literature for deeper knowledge:
Shape interrogation for computer-aided design and manufacturing.
chapter 4: for root solvers
section 5.7: for curve-surface intersection
The PhD thesis of Michael Hohmeyer.
section 4.5: for curve-surface intersection
sections 4.1 and 4.2: for convex hulls intersection (if you are brave enough).
Bottom line
If you are looking for a simple, working solution, and you are sure that hyperbolas and cones are the only things you have to worry about, then you'd better use the implicit definition of the cone and solve the quartic equation with a standard numerical algorithm from a good library available to you.

Algorithm to check if a polygon is a projection of a polyhedron

I am trying to develop an algorithm that performs the following :
Given a 2D polygon and a 3D polyhedron, determine if the 2D polygon is a projection of the 3D polyhedron (a perspective projection, to be precise), without knowing which transformation matrix was used for the projection.
input
{2D Polygon}
{3D Polyhedron}
output
{bool} whether or not it's a perspective projection
I am not asking for code, but I would simply like to know if this is feasible in polynomial time.
Any help will be greatly appreciated.
A 3D to 2D perspective projection has 7 degrees of freedom (6 for the relative motion of the scene with respect to the camera, 1 for the focal length).
Select four vertices in the 2D projection and consider all possible correspondences with polyhedron vertices (there is a polynomial number of such associations). Then form a system of 7 equations in the 7 unknown parameters (unfortunately a nonlinear one; maybe an eighth equation can be useful to select among multiple solutions).
Knowing the parameters, you can check a solution by re-projecting the polyhedron and comparing to the polygon (with further search for correspondences with vertices and edges).
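A small sketch of the re-projection check (K, R, t and the vertex-matching rule are illustrative placeholders, not taken from this answer):
import numpy as np

def reprojects_onto(vertices, K, R, t, polygon, tol=1e-6):
    # Push the polyhedron's 3-D vertices through extrinsics (R, t)
    # and intrinsics K, then check that every 2-D polygon vertex
    # coincides with some projected vertex.
    cam = R @ vertices.T + t[:, None]        # world -> camera frame
    proj = K @ cam                           # homogeneous image coords
    pts = (proj[:2] / proj[2]).T             # perspective divide
    return all(np.linalg.norm(pts - q, axis=1).min() < tol
               for q in polygon)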
All of this will take polynomial time (quartic if I am right), if one admits that the solver takes bounded time (hence bounded precision).
If the focal length is known, then a better approach is possible. Indeed, with only 6 unknowns, you can find the projection parameters from the projection of just three points. This problem is known to have an analytical solution (actually up to 4 of them), as described at length in "New Algorithms for the Perspective-Three-Point Problem, GAO Xiaoshan & CHEN Hangfei, Vol.16 No.3 J. Comput. Sci. & Technol."
This should lead to an O(N³) exact procedure.
More generally speaking, you form putative correspondences between N pairs of points, solve the corresponding Perspective-N-Point problem, and check the hypothesis by re-projecting the polyhedron and comparing it to the known projection.
Just an idea for an algorithm:
Take a triangle of the projection made of three points next to each other that are not on the same line. Iterate through all corresponding triangles of the original. For every possible projection that solves the pair of triangles, check whether the rest matches.
I must admit I am not sure right now whether there could be infinitely many solutions for triangles (which would be hard to iterate through). If so, start with four points.
I think it is possible, but you have to do a fair amount of reverse engineering. A 2D sketch that represents a 3D object is known as an orthographic projection. The link shows you the transformation matrices you need to apply to map a 3D point onto its 2D projection. Now, how do you go the opposite way? Inverse matrices combined with some inverse transformations (translation, scaling, rotation, ...)? I think this is a good lead to follow.

Calculation of centroid & volume of a polyhedron when the vertices are given

Given the locations of the vertices of a convex polyhedron (3D), I need to calculate the centroid and volume of the polyhedron. The following code is available at the Mathworks site.
function C = centroid(P)
% Centroid of a convex polyhedron with vertices P (n-by-3).
k = convhulln(P);
if length(unique(k(:))) < size(P,1)
    error('Polyhedron is not convex.');
end
% Tetrahedralize the polyhedron, then average the tetrahedron
% centroids weighted by their volumes.
T = delaunayn(P);
n = size(T,1);
W = zeros(n,1);
C = 0;
for m = 1:n
    sp = P(T(m,:),:);           % vertices of the m-th tetrahedron
    [~, W(m)] = convhulln(sp);  % second output = tetrahedron volume
    C = C + W(m) * mean(sp);    % volume-weighted centroid sum
end
C = C ./ sum(W);
end
The code is elegant but terribly slow. I need to calculate the volume and centroid of thousands of polyhedra hundreds of times, so using this code in its current state is not feasible. Does anyone know a better approach, or can this code be made faster? There are some minor changes I can think of, such as replacing mean with an explicit expression for the mean.
There is a much simpler approach to calculating the volume with minimal effort. The first flavour uses three sets of local topological information of the polyhedron: the unit tangent vectors of the edges, the unit in-plane normals on those tangents, and the unit normals of the facets themselves (all of which are very simple to extract from the vertices). Please refer to Volume of a Polyhedron for further details.
The second flavour uses the face areas, the normal vectors and the face barycenters to calculate the polyhedron volume according to this Wikipedia Article.
Both algorithms are rather simple and very easy to implement, and their plain summing structure makes them easy to vectorize too. I suppose that both approaches will be much faster than a fully fledged tessellation of the polyhedron.
The centroid of the polyhedron can then be calculated by applying the divergence theorem, transferring the integration over the full polyhedron volume into an integration over the polyhedron surface. A detailed description can be found in Calculating the volume and centroid of a polyhedron in 3d. I did not check whether the tessellation of the polyhedron into triangles is really necessary, or whether one could work with the more complex polygonal faces of the polyhedron too, but in any case the area tessellation of the faces is much simpler than the volume tessellation.
In total such a combined approach should be much faster than the volume approach.
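A vectorized sketch of the divergence-theorem computation over a triangulated surface (NumPy; the function name is mine, and a closed, consistently outward-oriented mesh is assumed):
import numpy as np

def volume_and_centroid(verts, faces):
    # verts: (n, 3) float, faces: (m, 3) int with outward winding.
    # Each face (a, b, c) contributes the signed volume of the
    # tetrahedron (0, a, b, c); summing these gives the solid's
    # volume, and the volume-weighted tetrahedron centroids give
    # the solid's centroid.
    a, b, c = (verts[faces[:, i]] for i in range(3))
    vol6 = np.einsum('ij,ij->i', a, np.cross(b, c))   # 6 * signed volume
    volume = vol6.sum() / 6.0
    # Centroid of tetrahedron (0, a, b, c) is (a + b + c) / 4.
    centroid = ((vol6[:, None] / 6.0) * (a + b + c) / 4.0).sum(axis=0) \
               / volume
    return volume, centroid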
I think your only option, if quickhull isn't good enough, is cudahull, if you want exact solutions. Although even then you're only going to get about a 40x speedup at most (it seems).
I'm assuming that the convex hulls you make each have at least 10 vertices (if it's much fewer than that, there isn't much you can do). If you don't mind "close enough" solutions, you could create a version of quickhull that limits the number of vertices per polygon. The number of vertices you limit the calculation to will also allow you to bound the maximum error if needed.
The thing is that as the number of vertices on the convex hull approaches infinity, you end up with a sphere. This means that, due to the way quickhull works, each additional vertex you add to the convex hull has less of an effect* than the ones before it.
*Depending on how quickhull is coded, this may only be true in a general sense. Making it true in practice would require modifying quickhull's recursion so that, while the "next vertex" is always calculated (except after the last vertex is added, or when no points remain for that section), vertices are actually added to the convex hull in the order that maximizes the increase in the polyhedron's volume (likely in order of most distant to least distant). You'll incur some performance cost for keeping track of the order in which to add vertices, but as long as the ratio of pending convex hull points to pending points is high enough, it should be worth it. As for error, the best option is probably to stop the algorithm either when the actual convex hull is reached, or when the maximum increase in volume becomes smaller than a certain fraction of the current total volume. If performance is more important, then simply limit the number of convex hull points per polygon.
You could also look at the various approximate convex hull algorithms, but the method I outlined above should work well for volume/centroid approximation with ability to determine error.
How much you can speed up your code depends a lot on how you want to calculate your centroid. See this answer about centroid calculation for your options. It turns out that if you need the centroid of the solid polyhedron, you're basically out of luck. If, however, only the vertices of the polyhedron have weight, then you could simply write
[k,volume] = convhulln(P);
centroid = mean(P(k,:));
