In my project, I represent geometry using splines. For physics and rendering I preprocess the splines and convert them into lines, and later polygons, by sampling the splines at a regular interval. However, I want to reduce the number of vertices/lines by ignoring samples that are already well enough represented by a line.
My searches have come up short, so I was wondering: are there any established techniques for converting a curve to a set of vertices while keeping the resulting error small?
EDIT: To clarify, the result I want to end up with is a set of vertices/line segments that best represents the spline with the fewest vertices/line segments. I'm not sure how to define what "best represent the spline" really means, but the goal is to make it as hard as possible to distinguish the spline from the approximation.
This can be done by recursively refining any part of the curve that is not close enough to the segment between that part's endpoints.
Suppose we have a curve (spline) C: [0,1] -> R^n. The first approximation is the segment S between the curve's endpoints, [C(0), C(1)]. Take the point C(0.5) and check how far it is from segment S. If it is far, we have to include it in the discretization; if not, S is a good approximation. If C(0.5) is far, the next approximation is the polyline [C(0), C(0.5), C(1)], and we apply the same procedure to the parts [C(0), C(0.5)] and [C(0.5), C(1)].
If you are using a polynomial spline of order >= 3 (e.g. a cubic spline), it can have inflection points. In that case it is possible for the curve's midpoint to 'fall' right on the segment while the curve around it is far from the segment, so it is good to check one more level of sub-parts.
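A minimal sketch of this recursive refinement (my own code, not from the answer above): it assumes the spline is available as a function curve(t) returning a 2D point for t in [0, 1], and it uses the perpendicular distance of the curve's midpoint from the chord as the "how far" test. The extra level of checking for inflection points mentioned above is left out for brevity.

    import math

    def flatten(curve, t0, t1, tol, out):
        """Recursively refine [t0, t1] until the curve's midpoint lies
        within `tol` of the chord between the two endpoints."""
        tm = 0.5 * (t0 + t1)
        p0, p1, pm = curve(t0), curve(t1), curve(tm)
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        chord = math.hypot(dx, dy)
        if chord == 0.0:
            dist = math.hypot(pm[0] - p0[0], pm[1] - p0[1])
        else:
            # perpendicular distance of pm from the chord p0-p1
            dist = abs((pm[0] - p0[0]) * dy - (pm[1] - p0[1]) * dx) / chord
        if dist > tol:
            flatten(curve, t0, tm, tol, out)   # left half ends by appending pm
            flatten(curve, tm, t1, tol, out)   # right half appends up to p1
        else:
            out.append(p1)                     # chord is a good enough approximation

    def curve_to_polyline(curve, tol=0.01):
        out = [curve(0.0)]
        flatten(curve, 0.0, 1.0, tol, out)
        return out

For example, curve_to_polyline(lambda t: (t, math.sin(2 * math.pi * t)), tol=0.005) places vertices densely where the curve bends and sparsely where it is nearly straight.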
This is entirely based on my own intuition, so I'm not sure if it coincides AT ALL with best practices. I do have a mathematics degree, so hopefully it's not too far off. Note that the computation involved may outweigh the performance gained from using fewer vertices if the spline needs to be recalculated frequently.
Let's say the vertices are in an array like [v(0), v(1), v(2),..., v(n)] where each v(i) is something like (x, y). By iterating over the vertices starting at v(1) and ending at v(n-1), we can compare a point with its neighbors in order to tell whether or not to discard it. Note that we ignore v(0) and v(n) for two reasons: (I assume) we don't want to remove our endpoints, and also v(0) and v(n) are missing a neighbor that we would need in order to set up our calculation. I can think of a couple possibilities here that might warrant examination, but one in particular seems (in my head) to be the best answer...
Consider the case where we're deciding whether or not to remove v(i) from the vertex array. We could examine the Cartesian distance between v(i) and its neighbors, and remove the point if both distances are below some threshold value T. For example, if v(i-1) = (x1, y1), v(i) = (x2, y2) and v(i+1) = (x3, y3), then we evaluate sqrt((x2-x1)^2 + (y2-y1)^2) < T && sqrt((x3-x2)^2 + (y3-y2)^2) < T, removing v(i) if the evaluation returns true.
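A direct transcription of that rule as a sketch (function and variable names are mine):

    import math

    def prune_close_vertices(verts, T):
        """Drop interior v[i] when it lies within T of both of its
        original neighbours; the endpoints are always kept."""
        kept = [verts[0]]
        for i in range(1, len(verts) - 1):
            (x1, y1), (x2, y2), (x3, y3) = verts[i - 1], verts[i], verts[i + 1]
            close_prev = math.hypot(x2 - x1, y2 - y1) < T
            close_next = math.hypot(x3 - x2, y3 - y2) < T
            if not (close_prev and close_next):
                kept.append(verts[i])
        kept.append(verts[-1])
        return kept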
In 3+ dimensions, this would become more complicated - the calculation would be similar, but you would require a method of determining a point's neighbors since they might not lie directly next to the examined point in the vertex array.
I have a set of distinct 2D vectors (over real numbers), pointing in different directions. We are allowed to pick a pair of vectors and construct their linear combination, such that the coefficients are positive and their sum is 1.
In simple words we are allowed to take a "weighted average" of any two vectors.
My goal is for an arbitrary direction to pick a pair of vectors whose "weighted average" is in this direction and is maximized.
Speaking algebraically: given vectors a and b and a direction vector n, we are interested in maximizing this value:
[ a cross b ] / [ (a - b) cross n ]
i.e. pick a and b which maximize this value.
To be concrete the application of this problem is for sailing boats. For every apparent wind direction the boat will have a velocity given by a polar diagram. Here's an example of such a diagram:
(Each line in this diagram corresponds to a specific wind magnitude). Note the "impossible" front sector of about 30 degrees in each direction.
So in some directions the velocity will be high, in some it will be low, and in some directions it's impossible to sail directly (for instance, in the direction strictly opposite to the wind).
If we need to advance in a direction in which we can't sail directly (or the velocity isn't optimal) - it's possible to advance in zigzags. This is called tacking.
Now, my goal is to recalculate a new diagram which denotes the average advance velocity in any direction, either directly or indirectly. For instance for the above diagram the corrected diagram would be this:
Note that there are no more "impossible" directions. For some directions the diagram resembles the original one; there it's best to advance directly and no maneuver is required. For others it shows the maximum average advance velocity in that direction, assuming the most optimal maneuver is performed periodically.
What would be the most optimal algorithm to calculate this? Assume the diagram is given as a discrete set of azimuth-velocity pairs, from which we can calculate the vectors.
So far I just check all the vector pairs and select the best. There are cut-off criteria, such as picking only vectors with a positive projection on the advance direction and with opposite perpendicular projections, but the complexity is still O(N^2).
I wonder if there's a more efficient algorithm.
EDIT
Many thanks to @mcdowella for both the computer-science and the sailor answers!
I too thought in terms of a convex polygon and figured out that it's only worth probing vectors on that hull (i.e. if you take a combination of two vectors on the hull and try to replace one of them with a vector that isn't on the hull, the result is worse, since the new vector's projection onto the needed direction is worse than that of either source vector).
However, I didn't realize that the set of "weighted averages" of 2 vectors is actually the straight line segment connecting those vectors, hence the final diagram is indeed this convex hull! And, as we can see, this also agrees with what I calculated with the "brute-force" algorithm.
The computer science answer
A tacking strategy gives you the convex combination of the vectors from the legs that make up the tacks.
So consider the outline made by just one contour in your diagram. The set of all possible best speeds and directions is the convex polygon formed by taking all convex combinations of the vectors to the contour. So what you want to do is form the convex hull of your contour (https://en.wikipedia.org/wiki/Convex_hull). To find out how to go fast in any particular direction, intersect that vector with the convex hull, and use tacks with legs that correspond to the corners on either side of the edge of the convex hull that you intersect with.
Looking at your diagram, the contour is concave straight upwind and straight downwind, which is what you would expect. However there is also another concave section, somewhere between 4 and 5 O'Clock and also symmetrically between 7 and 8 O'Clock, which appears as a straight line in your corrected diagram - so I guess there is a third direction to tack in, using two reaches on the same side of the wind which I don't recognise from traditional sailing.
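To make the recipe concrete, here is a sketch (my code, assuming SciPy is available and the contour is given as a list of 2D velocity vectors): it builds the convex hull, intersects the desired course with the hull boundary, and returns the achievable average speed plus the two hull corners to tack between.

    import numpy as np
    from scipy.spatial import ConvexHull

    def best_tack(contour_vectors, direction):
        """contour_vectors: (N, 2) velocity vectors from one polar contour.
        direction: desired course as a 2D vector (need not be unit length).
        Returns (speed, leg_a, leg_b): the best average speed along
        `direction` and the two hull corners whose convex combination
        achieves it."""
        pts = np.asarray(contour_vectors, dtype=float)
        n = np.asarray(direction, dtype=float)
        n = n / np.linalg.norm(n)
        hull_pts = pts[ConvexHull(pts).vertices]       # corners in CCW order
        best = (0.0, None, None)
        for i in range(len(hull_pts)):
            p, q = hull_pts[i], hull_pts[(i + 1) % len(hull_pts)]
            d = q - p
            det = d[0] * n[1] - d[1] * n[0]            # cross(d, n)
            if abs(det) < 1e-12:
                continue                               # edge parallel to the course
            t = (d[0] * p[1] - d[1] * p[0]) / det      # distance along n to the edge
            s = (n[0] * p[1] - n[1] * p[0]) / det      # position along that edge
            if 0.0 <= s <= 1.0 and t > best[0]:
                best = (t, p, q)                       # tack between corners p and q
        return best

The intersection point is (1 - s) * p + s * q, so s also tells you the proportion of time to spend on each leg; if s comes out as 0 or 1 the ray hits a corner and no tacking is needed.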
The ex-laser sailor answer
At least for going straight upwind or downwind, the obvious guess is to tack so that each leg is of the same length and of the same bearing to the wind. If the polar diagram is symmetric around the upwind-downwind axis this is correct. Suppose upwind is the Y axis and possible legs are (A, B), (-A, B), (a, b) and (-a, b). Symmetrical tacking moves (A, B)/2 + (-A, B)/2 = (0, B) and the other symmetrical tack gives you (0, b). Asymmetrical tacking is (-A, B)a/(a+A) + (a, b)A/(a+A) = (0, (a/(a+A))B + (A/(a+A))b) and if b!=B lies between b and B and so is not as good as whichever of b or B is best.
For any direction which lies between the port and starboard tacks that you would take to work your way upwind, the obvious strategy is to change the length of those legs, but not their direction, so that the average vector traveled is in the required direction. Is this the best strategy? If not, the better strategy would make progress upwind faster than the port and starboard tacks you would take to work your way upwind, which I think is a contradiction - so for any direction which lies between the port and starboard tacks made to go upwind, I think the best strategy is indeed to make those tacks but alter the leg lengths to go in the required direction. The same thing should apply to tacking downwind, if you have a boat that makes that a good idea.
I'm working with a really slow renderer, and I need to approximate polygons so that they look almost the same when confined to a screen area containing very few pixels. That is, I'd need an algorithm to go through a polygon and remove/move a bunch of vertices until the resulting polygon has a good combination of shape preservation and economy of vertex usage.
I don't know if there's a formal name for this kind of problem, but if anyone knows what it is, it would help me get started with my research.
My untested plan is to remove the vertices that change the polygon area the least, and protect the vertices that touch the bounding box from removal, until the difference in area from the original polygon to the proposed approximate one exceeds a tolerance I specify.
This would all be done only once, not in real time.
Any other ideas?
Thanks!
You're thinking about the problem in a slightly off way. If your goal is to reduce the number of vertices with a minimum of distortion, you should be defining your distortion in terms of those same vertices, which define the shape. There's a very simple solution here, which I believe would solve your problem:
Calculate distance between adjacent vertices
Choose a tolerance between vertices, below which the vertices are resolved into a single vertex
Replace all pairs of vertices with distances lower than your cutoff with a single vertex halfway between the two.
Repeat until no vertices are removed.
Since your area is ultimately decided by the vertex placement, this method preserves shape and minimizes shape distortion. The one drawback is that distance between vertices might be slightly less intuitive than polygon area, but the two are proportional. If you really wish, you could run through the change in area that would result from vertex removal, but that's a lot more work for questionable benefit imo.
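A sketch of that loop, assuming a closed polygon given as a list of (x, y) tuples (names are mine):

    import math

    def merge_close_vertices(poly, tol):
        """Repeatedly collapse adjacent vertex pairs closer than `tol`
        into their midpoint until no pair qualifies."""
        poly = list(poly)
        changed = True
        while changed and len(poly) > 3:
            changed = False
            out, i = [], 0
            while i < len(poly):
                a = poly[i]
                b = poly[(i + 1) % len(poly)]
                if i + 1 < len(poly) and math.hypot(b[0] - a[0], b[1] - a[1]) < tol:
                    # replace the pair with its midpoint
                    out.append(((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0))
                    i += 2
                    changed = True
                else:
                    out.append(a)   # keep this vertex (the closing last-first
                    i += 1          # pair is skipped to keep the indexing simple)
            poly = out
        return poly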
As mentioned by Angus, if you want a direct solution for the change in area, it's not actually super difficult. I was originally going to leave this as an exercise to the reader, but it's totally possible to solve this exactly, though you need to include the vertices on either side.
1. Assume you're looking at a window of vertices [A, B, C, D] that are connected in that order. In this example we're determining the "cost" of combining B and C.
2. Calculate the angle offset from collinearity from A toward C. Basically you just want to see how far from collinear the points are. This is |sin(|arctan(B - A)| - |arctan(C - A)|)|, where the pipes are absolute value and the differences are the usual vector differences.
3. Calculate the total distance over which the angle change will effectively be applied: this is just the Euclidean distance from A to B times the Euclidean distance from B to C.
4. Multiply the terms from 2 and 3 to get your first term.
5. To get your second term, repeat steps 2-4, replacing A with D, B with C, and C with B (just going in the opposite direction).
6. Calculate the geometric mean of the two terms obtained.
The number that results from step 6 gives the full picture, minus a couple of constants.
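Here is my reading of steps 1-6 as code. I interpret "arctan(B - A)" as the direction angle of the vector from A to B (atan2), which is an assumption on my part:

    import math

    def _angle(p, q):
        """Direction angle of the vector from p to q."""
        return math.atan2(q[1] - p[1], q[0] - p[0])

    def _dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    def merge_cost(A, B, C, D):
        """'Cost' of combining B and C, following steps 1-6 above."""
        # step 2: deviation from collinearity, looking from A
        ang1 = abs(math.sin(abs(_angle(A, B)) - abs(_angle(A, C))))
        # step 3: distance over which that deviation applies
        len1 = _dist(A, B) * _dist(B, C)
        term1 = ang1 * len1                              # step 4
        # step 5: the same thing looking from D, going the other way
        ang2 = abs(math.sin(abs(_angle(D, C)) - abs(_angle(D, B))))
        term2 = ang2 * _dist(D, C) * _dist(C, B)
        return math.sqrt(term1 * term2)                  # step 6: geometric mean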
I tried my own plan first: protect the vertices touching the bounding box, then remove the rest in the order that changes the resulting area the least, until you can't find a vertex to remove that keeps the new polygon's area within X% of the original one. This is the result with X = 5%:
When the user zooms out really far these shapes fit the bill well enough for me. I haven't tried any of the other suggestions. The savings are quite astonishing, sometimes from 80-100 vertices down to 4 or 5.
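For reference, here is a sketch of that plan (my code, not the exact implementation used for the figures): it protects bounding-box vertices and greedily removes whichever remaining vertex changes the area the least, stopping once the area drifts more than X% from the original. Since this runs once as preprocessing, the naive cost is acceptable.

    def polygon_area(poly):
        """Shoelace formula: absolute area of a simple polygon."""
        s = 0.0
        for i in range(len(poly)):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % len(poly)]
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    def simplify_by_area(poly, max_change=0.05):
        """Greedily drop the vertex whose removal changes the total area
        the least, never touching vertices on the bounding box, until the
        area differs from the original by more than `max_change`."""
        poly = [tuple(p) for p in poly]
        original = polygon_area(poly)
        xs = [p[0] for p in poly]
        ys = [p[1] for p in poly]
        protected = {p for p in poly
                     if p[0] in (min(xs), max(xs)) or p[1] in (min(ys), max(ys))}
        while len(poly) > 3:
            best = None                       # (area difference, candidate polygon)
            for i, p in enumerate(poly):
                if p in protected:
                    continue
                candidate = poly[:i] + poly[i + 1:]
                diff = abs(polygon_area(candidate) - original)
                if best is None or diff < best[0]:
                    best = (diff, candidate)
            if best is None or best[0] > max_change * original:
                break                         # no removable vertex stays within X%
            poly = best[1]
        return poly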
Given a 3d heightmap (from a laser scanner), how do I find the saddle points?
I.e. given something like this:
I am looking for all points where the curvature is positive in one direction and negative in the other.
(These directions need not be aligned with the X and Y axes. I know how to check whether the curvature in the X direction has the opposite sign to the curvature in the Y direction, but that does not cover all cases. To make matters worse, the resolution in X is different from the resolution in Y.)
Ideally I am looking for an algorithm that can tolerate some amount of noise and only mark "significant" saddle points.
I've been exploring a similar problem for a computational topology class and have had some success with the method outlined below.
First you will need a comparison function that will evaluate the height at two input points and will return < or > (never equal) for any input. One way to do this: if the points are of equal height, use some position-based or random index to decide which point is greater. You can think of this as adding an infinitesimal perturbation to the height.
Now, for each point, you will compare the height at all the surrounding neighbors (there will be 8 neighbors on a 2D rectangular grid). The lower link for a point will be the set of all neighbors for which the height is less than the point.
If all the neighboring values are in the lower link, you are at a local maximum. If none of the points are in the lower link you are at a local minimum. Otherwise, if the lower link is a single connected set, you are at a regular point on a slope. But if the lower link is two unconnected sets, you are at a saddle.
In 2D you can construct a list of the 8 neighboring points in cyclic order around the point you are checking. You assign a value of +/-1 to each neighbor depending on your comparison function. You can then step through that list (remembering to compare the two end points) and count how many times the sign changes to determine the number of connected components in the lower link.
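A sketch of that classification for one interior grid point (my code; it assumes a NumPy heightmap and uses a lexicographic tie-break on the indices as the "infinitesimal perturbation"):

    import numpy as np

    def classify_point(h, i, j):
        """Classify interior grid point (i, j) of heightmap h by counting
        sign changes around its 8 neighbours (in cyclic order).
        Returns 'max', 'min', 'regular' or 'saddle'."""
        centre = (h[i, j], i, j)
        cycle = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]     # 8 neighbours, in order
        signs = []
        for di, dj in cycle:
            ni, nj = i + di, j + dj
            # tuple comparison: height first, then index as tie-break
            signs.append(-1 if (h[ni, nj], ni, nj) < centre else +1)
        if all(s < 0 for s in signs):
            return 'max'                               # every neighbour is lower
        if all(s > 0 for s in signs):
            return 'min'                               # every neighbour is higher
        # signs[k] != signs[k - 1] also compares the two ends (k = 0 wraps)
        changes = sum(1 for k in range(8) if signs[k] != signs[k - 1])
        # 2 changes: one connected lower-link piece -> regular slope point
        # 4 or more: the lower link splits in two (or more) -> saddle
        return 'regular' if changes == 2 else 'saddle'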
Determining which saddles are "important" is a more difficult analysis. You may wish to look at this: http://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Gyulassy08.pdf for some guidance.
-Michael
(From a guess at the maths rather than practical experience)
Fit a quadratic to the surface in a small patch around each candidate point, e.g. with least squares. How big the patch is is one way of controlling noise, and you might gain by weighting points depending on their distance from the candidate point. In matrix notation, you can represent the quadratic as x'Ax + b'x + c, where A is symmetric.
The quadratic will have zero gradient at x = -(A^-1)b/2. If this is not within the patch, discard it.
If A has both +ve and -ve eigenvalues you have a saddle point at x. Since A is only 2x2 and so has at most two eigenvalues, you can ignore the case where it has a zero eigenvalue, because then you couldn't have inverted it at the previous stage.
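A sketch of that test (my code; it assumes the patch has already been gathered into coordinate arrays relative to the candidate point, and it leaves out the optional distance weighting):

    import numpy as np

    def saddle_test(xs, ys, zs):
        """Least-squares fit z ~ x'Ax + b'x + c over a patch (xs, ys, zs
        are 1-D arrays of coordinates relative to the candidate point).
        Returns (is_saddle, stationary_point); the caller should still
        check that the stationary point lies inside the patch."""
        xs, ys, zs = (np.asarray(v, dtype=float) for v in (xs, ys, zs))
        M = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
        coef, *_ = np.linalg.lstsq(M, zs, rcond=None)
        a, b, c, d, e, _c0 = coef
        A = np.array([[a, b / 2.0], [b / 2.0, c]])     # symmetric quadratic part
        bvec = np.array([d, e])
        try:
            crit = -0.5 * np.linalg.solve(A, bvec)     # gradient 2*A*x + b = 0
        except np.linalg.LinAlgError:
            return False, None                         # degenerate quadratic
        eig = np.linalg.eigvalsh(A)
        return bool(eig[0] < 0.0 < eig[1]), crit       # one -ve and one +ve eigenvalue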
Problem
Given a set of known cartesian points (set A), and a 2d transformation (rotation, translation, scale) of some subset of those points (set B), find the orientation of the subset (rotation, translation, scale) relative to the original set of points.
I.e. suppose I take a "picture" of a known set of 2d points on a wall. I want to know what position the camera was in, relative to "upright and centered", when the picture was taken. Some of the points may not be visible in the picture (they may be occluded). (In this analogy, assume the camera is orthographic and always pointed directly at the plane of the wall, so you don't need to take distortion or perspective into account.)
Proposed approach:
Step 1: Scale B to the same "range" as A
Don't know how; open to suggestions. Maybe take the area of a convex hull around all the points in B, and scale it to nearly that of the convex hull around A. This is tricky, because points may be missing from B.
Step 2: Match some arbitrary point in "B" to its twin in "A"
Pick some random point in set B. Call this point K. Somehow take a "fingerprint" of K relative to all the other points in B (using distances only). Find its match in A by fingerprinting all points in A and taking the point whose fingerprint is most similar to K's.
Step 3: Rotate B (around K) until all points in B are aligned with a point in A
Multiple solutions are possible, so keep rotating through 360 degrees looking for solutions.
That's just shooting from the hip, I may be way off base. Anyone have any ideas?
Assuming you don't actually know the correspondence between the points in the two clouds, you could try a statistical approach.
First, compute the mean x0 of the original cloud, then compute the mean x1 of the subset cloud. The difference of the mean vectors, x1-x0, is a good estimate of the required translation.
Now, subtract the relevant mean vector from each set to give two clouds centered at the origin. Compute the covariance matrix for each cloud and find its eigenvalues and eigenvectors. The required rotation can be found from the eigenvectors, while the scaling corresponds to the eigenvalues.
Compose all of this and you should have a good statistical estimate of the desired transform. Obviously, its quality will be a function of how well the subset spans the original set.
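A sketch of this estimate (my code; note that eigenvector signs are ambiguous, so the recovered rotation can be off by a reflection or 180 degrees, and the estimate degrades if the subset doesn't span the original set well):

    import numpy as np

    def estimate_transform(A, B):
        """A: (n, 2) original points; B: (m, 2) transformed subset.
        Returns (translation, rotation_matrix, scale) estimated from the
        cloud means and the covariance eigen-decompositions."""
        A = np.asarray(A, dtype=float)
        B = np.asarray(B, dtype=float)
        translation = B.mean(axis=0) - A.mean(axis=0)
        wa, va = np.linalg.eigh(np.cov((A - A.mean(axis=0)).T))
        wb, vb = np.linalg.eigh(np.cov((B - B.mean(axis=0)).T))
        R = vb @ va.T                          # maps A's principal axes onto B's
        scale = np.sqrt(wb.max() / wa.max())   # eigenvalues are variances
        return translation, R, scale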
"Give me a place to stand on, and I will move the Earth" Archimede
I think we should follow in the steps of Archimedes.
Arpi's algorithm:
Choose a point (X1) of set A and give it coordinates (0, 0). (This will be the place to stand on.)
Choose another point (X2) and put it on the OX axis (to simplify things).
All the other points' coordinates in set A are then calculated relative to X1(0, 0) and X2(some_Coordinate, 0).
Now choose a point (Y1) from set B; that will be the center of the B set. Choose another point (Y2) from set B and put it on the OX axis of the B set. Now we have a scale factor and a rotation angle. If this is a solution, then Y1 in the B set represents X1 from the A set and Y2 from the B set represents X2 from the A set. If we can find a map between the B set and the A set on this basis, using all the points of the B set, with Yi <> Yj if i <> j (where i and j are the indexes of the points in our representation), then we have a potential solution and we store it.
End of Arpi's algorithm
To find all the potential solutions you must do the following:
    foreach point X1 in A do
        foreach point X2 in A do
            arpis_algorithm(X1, X2)
Of course, you can optimize this, but for the sake of simplicity I described it without optimizations (complications); it's your job to optimize it, and only if you need to.
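A sketch of the inner step (my code and naming, not Arpi's exact formulation): it re-expresses both sets in the frame defined by the two chosen points and then checks whether every point of B lands on a distinct point of A. The outer loops would also have to choose (or iterate over) Y1 and Y2.

    import math

    def to_frame(points, P1, P2):
        """Express `points` in the frame where P1 is the origin and P2
        lies on the positive x axis (the 'place to stand on')."""
        ang = math.atan2(P2[1] - P1[1], P2[0] - P1[0])
        cos_a, sin_a = math.cos(-ang), math.sin(-ang)
        return [((x - P1[0]) * cos_a - (y - P1[1]) * sin_a,
                 (x - P1[0]) * sin_a + (y - P1[1]) * cos_a)
                for x, y in points]

    def is_potential_solution(A, B, X1, X2, Y1, Y2, tol=1e-6):
        """Does mapping (Y1, Y2) -> (X1, X2) send every point of B onto
        a distinct point of A (within `tol`)?"""
        base = math.dist(Y1, Y2)
        if base == 0.0:
            return False
        scale = math.dist(X1, X2) / base
        A_frame = to_frame(A, X1, X2)
        B_frame = [(x * scale, y * scale) for x, y in to_frame(B, Y1, Y2)]
        used = set()
        for b in B_frame:
            hit = next((i for i, a in enumerate(A_frame)
                        if i not in used and math.dist(a, b) < tol), None)
            if hit is None:
                return False
            used.add(hit)          # Yi <> Yj must map to distinct points
        return True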
I would attempt to minimize the deviation between the target points and the found points. Meaning I would pair each target point with a found point, and apply any transformation (rotation, scale or skew) to all the target points which decreases the sum of the deviations. I would repeat this for all potential pairs, eventually taking the match to be the set of pairs and the necessary transformations with the smallest total deviation.
The real question is how you optimize this so that the performance is better than O(n^2). I suppose some sort of heuristic matching would help, perhaps caching intermediate results, or finding a way to eliminate some pairs earlier in the process.
I have two shapes which are cross sections of a channel. I want to calculate the cross section of an intermediate point between the two defined points.
What's the simplest (relatively simple?) algorithm to use in this situation?
P.S.: I came across several algorithms like natural neighbor and poisson, which seemed complex. I'm looking for a simple solution, which could be implemented quickly.
EDIT: I removed the word "Simplest" from the title since it might be misleading
This is simple:
On each cross section draw N points at evenly spaced intervals along the boundary of the cross-section.
Draw straight lines from the n-th point on cross-section 1 to the n-th point on cross-section 2.
Take off your new cross-section at the desired distance between the old cross-sections.
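A sketch of those three steps (my code), assuming each cross-section is given as a closed polygon (an (m, 2) array of boundary vertices) and that both boundaries start at corresponding features and run in the same direction, since that correspondence is what decides the quality of the result:

    import numpy as np

    def resample_boundary(boundary, n):
        """n points evenly spaced by arc length along a closed boundary."""
        pts = np.vstack([boundary, boundary[:1]])           # close the loop
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        targets = np.linspace(0.0, s[-1], n, endpoint=False)
        return np.column_stack([np.interp(targets, s, pts[:, 0]),
                                np.interp(targets, s, pts[:, 1])])

    def intermediate_section(section1, section2, frac, n=100):
        """Cross-section at fraction `frac` (0..1) of the way from
        section1 to section2: join the k-th sample on one boundary to the
        k-th sample on the other and cut the joining lines at `frac`."""
        a = resample_boundary(np.asarray(section1, dtype=float), n)
        b = resample_boundary(np.asarray(section2, dtype=float), n)
        return (1.0 - frac) * a + frac * b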
Simpler still:
Use one of the existing cross-sections without modification.
This second suggestion might be too simple I suppose, but I bet no-one suggests a simpler one !
EDIT following OP's comment: (too much for a re-comment)
Well, you did ask for a simple method ! I'm not sure I see the same problem with the first method as you do. If the cross sections are not too weird (probably best if they are convex polygons) and you don't do anything strange such as map the left side of one cross-section to the right side of the other (thereby forcing lots of crossing lines) then the method should produce some kind of sensible cross section. In the case you suggest of a triangle and a rectangle, suppose the triangle is sitting on its base, one vertex at the top. Map that point to, say, the top left corner of the rectangle, then proceed in the same direction (clockwise or anti-clockwise) around the boundaries of both cross-sections joining corresponding points. I don't see any crossing lines, and I see a well-defined shape at any distance between the two cross-sections.
Note there are some ambiguities in High Performance Mark's answer that you will probably need to address, and they will define the quality of his method's output. The most important one is: when you draw the n points on both cross-sections, what correspondence do you establish between them? If you do it the way High Performance Mark suggested, the order in which the points are labeled becomes important.
I suggest sweeping a rotating (orthogonal) plane simultaneously through both cross-sections; the set of points that intersect that plane on one cross-section then just needs to be matched to the set of points that intersect it on the other. Hypothetically, there is no limit on the number of points in these sets, but this certainly reduces the complexity of the correspondence problem compared with the original situation.
Here is another try at the problem, which I think is a much better attempt.
Given the two cross-sections C_1, C_2
Place each C_i into a global reference frame with coordinate system (x,y) so that the way they are relatively situated makes sense. Split each C_i into an upper and lower curve U_i and L_i. The idea is going to be that you will want to continuously deform curve U_1 to U_2 and L_1 to L_2. (Note you can extend this method to split each C_i into m curves if you wish.)
The way to do this is as follows. For each T_i = U_i or L_i, sample n points and determine the interpolating polynomial P{T_i}(x). As someone duly noted below, interpolating polynomials are susceptible to oscillation, especially at the endpoints; instead of the interpolating polynomial, one may use the least-squares fit polynomial, which is much more robust. Then define the deformation of the polynomial P{U_1}(x) = a_0 + a_1 * x + ... + a_n * x^n to P{U_2}(x) = b_0 + b_1 * x + ... + b_n * x^n as Q{P{U_1},P{U_2}}(x, t) = (t * a_0 + (1 - t) * b_0) + ... + (t * a_n + (1 - t) * b_n) * x^n, where the deformation Q is defined over 0 <= t <= 1 and t says where the deformation is (i.e. at t=0 we are at U_2, at t=1 we are at U_1, and at every other t we are at some continuous deformation between the two).
The exact same follows for Q{P{L_1},P{L_2}}(x, t). These two deformations give you a continuous representation between the two cross-sections, which you can sample at any t.
Note that all this is really doing is linearly interpolating the coefficients of the interpolating (or fitted) polynomials of the two pieces of both cross-sections. Note also that when splitting the cross-sections you should probably add the constraint that they must be split at end points that match up; otherwise you may have "holes" in your deformation.
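A minimal sketch of that deformation (my code), using NumPy's least-squares polynomial fit as suggested in the edit; both pieces must be fitted with the same degree over a shared x range:

    import numpy as np

    def fit_piece(xs, ys, degree):
        """Least-squares polynomial fit for one piece (upper or lower
        curve) of a cross-section."""
        return np.polyfit(xs, ys, degree)

    def deform(coeffs_1, coeffs_2, t):
        """Q{P1,P2}(x, t): linearly interpolate the coefficients.
        t = 1 gives curve 1, t = 0 gives curve 2 (the convention above)."""
        c = t * np.asarray(coeffs_1) + (1.0 - t) * np.asarray(coeffs_2)
        return np.poly1d(c)                  # callable polynomial in x

For example, deform(fit_piece(x1, y1, 5), fit_piece(x2, y2, 5), 0.5) gives the upper (or lower) curve of the halfway cross-section, which you can then sample at any x.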
I hope that's clear.
edit: addressed the issue of oscillation in interpolating polynomials.