Consider the following algorithm for linear programming: minimize [c,x] subject to A.x <= b.
(1) Start with a feasible point x_0
(2) Given a feasible point x_k, find the greatest alpha such that x_k - alpha.c is still feasible (straightforward: look at the ratios of the components of b - A.x_k to those of A.c, with appropriate signs)
(3) Take the unit normal vector n of the hyperplane we just reached, pointing inwards. Project n onto the level set of the objective [c,.], giving r = n - [n,c]/[c,c].c, then look for the greatest beta for which x_k - alpha.c + beta.r is feasible. Set x_{k+1} = x_k - alpha.c + (1/2).beta.r
(4) If x_{k+1} is within tolerance of x_k, return it; otherwise go to (2).
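For concreteness, here is a minimal NumPy sketch of the iteration as described (the function names, the r ≈ 0 guard, and the strict-feasibility assumption are my own additions; this illustrates the steps, it is not a vetted solver):

```python
import numpy as np

def max_step(A, b, x, d):
    """Largest t >= 0 such that A @ (x + t*d) <= b, assuming x is strictly feasible."""
    Ad = A @ d
    slack = b - A @ x
    rows = Ad > 0                  # only these rows can become violated
    if not rows.any():
        return np.inf              # direction d is unbounded (the LP may be unbounded)
    return np.min(slack[rows] / Ad[rows])

def kickback_lp(A, b, c, x0, tol=1e-9, max_iter=1000):
    """Sketch of the proposed 'kick-back' iteration for min [c,x] s.t. A x <= b."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # (2) march along -c until a constraint wall is hit
        alpha = max_step(A, b, x, -c)
        x_wall = x - alpha * c
        # (3) inward unit normal of the (nearly) active constraint row
        i = np.argmin(b - A @ x_wall)
        n = -A[i] / np.linalg.norm(A[i])
        # project n onto the level set of the objective: r = n - [n,c]/[c,c].c
        r = n - (n @ c) / (c @ c) * c
        if np.linalg.norm(r) < tol:    # wall normal parallel to c: nowhere to kick back
            return x_wall
        beta = max_step(A, b, x_wall, r)
        x_new = x_wall + 0.5 * beta * r
        # (4) stop once the iterates stall
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```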
The basic idea is to follow the gradient until one hits a wall. Then, rather than following the shell of the simplex, as the simplex algorithm would do, the solution is kicked back inside the simplex, onto a plane where the solutions are no worse, in the direction of the normal vector. The solution moves halfway between the starting point and the next constraint in this direction. It's no worse than before, but now it's more "inside" the simplex, where it has a shot at taking long leaps towards the optimum.
Though the probability of hitting an intersection of more than one hyperplane is 0, if one gets close to multiple hyperplanes within a certain tolerance, the average of their normals may be taken.
This can be generalized to any convex objective function by following geodesics on the levels of the function. For quadratic programming in particular, one rotates the solution towards the inside of the simplex.
Questions:
Does this algorithm have a name or fall within a category of linear-programming algorithms?
Does it have an obvious flaw that I'm overlooking?
I am pretty sure this doesn't work, unless I'm missing something: your algorithm will not start moving in most cases.
Assume your variable x is taken in R^n.
A polyhedron of the form Ax <= b is contained in a 'maximal' affine subspace P of dimension p <= n, and usually p is much smaller than n (you will have a lot of equality constraints, which can be implicit or explicit: you cannot assume that the expression of P is simple to obtain from A and b).
Now assume you can find an initial point x_0 (which is far from obvious, by the way); there is very little chance that the direction of the gradient c is a feasible direction. You would need to consider the projection of the direction c on P, and this is very difficult to do in practice (how would you compute such a projection?).
Then, what you want in your step (3) is not the normal direction of the hyperplane you reached, but again its projection on P (visualizing the polyhedron as a 2d polyhedron embedded in a 3d space can help).
There is a very good reason why barrier functions are used in interior point methods: it is very difficult to describe in practice the geometry of high-dimensional convex sets from the constraints (even simple ones like polyhedra), and things that "seem obvious" when you draw a picture on a piece of paper will usually not work when the dimension of the polyhedron increases.
One last point is that your algorithm would not give the exact solution, whereas the simplex method does in theory (and I have read somewhere that this can be achieved in practice by working with exact rational numbers).
Read up on interior point methods: http://en.wikipedia.org/wiki/Interior_point_method
This approach can have nice theoretical properties, but the algorithm's performance tends to tail off in practice.
Related
I am trying to understand a particular approach of the DeWall algorithm to perform a 2D/3D Delaunay triangulation/tetrahedralization (DT). I am especially interested in the 3D case. Where other divide-and-conquer algorithms create partial DTs and merge them together, the DeWall algorithm directly builds the DT along hierarchical cuts:
However, I am stuck at the very beginning of constructing the first simplex. In Cignoni 1997 - DeWall: A Fast Divide & Conquer Delaunay Triangulation Algorithm in Eᵈ the authors write
MakeFirstSimplex selects the point p₁ ∈ P nearest to the plane α.
It then selects a second point p₂ such that p₂ is the nearest point to p₁ on the other side of α.
Then, it searches the point p₃ such that the circum-circle around the 1-face (p₁, p₂) and the point p₃ has the minimum radius;
(p₁ ; p₂; p₃) is therefore a 2-face of Σ.
The process continues until the required d-simplex is built.
However, if I understand correctly, this procedure should always lead to a simplex that belongs to the DT; yet when I test it with different point sets, it sometimes happens that an edge is picked which does not belong to the DT.
I've tested this by connecting (green) each point (orange) to its left and right nearest neighbour. A DT made with the triangle program (gray) and the local left-right-splitting-plane (black) are also shown.
In the zoomed-in picture above, the green edges are picked by that procedure, but one is not part of the gray DT. So in a similar case, where the plane α of the topmost point were a bit to the right of the point and α were a cut-plane of the DeWall algorithm, this would lead to a wrong simplex.
It is known that the nearest-neighbour graph is always part of a DT, but that does not help in our case, since it is not guaranteed that a cut-plane α always crosses one of its edges.
Is there an explanation why this should work in the first place, some trick or an alternative construction to obtain the very first simplex at that constrained position?
I am trying to create an algorithm that calculates the normals of a model/mesh. People have been telling me to use the cross product of two edge vectors, which at first seemed like a good idea until I discovered that it might not always work. For instance, just imagine a box with its front face sitting at the origin and its back face down the Z axis. Here is an image:
I do apologize for the bad handwriting, but that shouldn't be of any significance. As you can see, I cross v and u to get the normal pointing toward the positive z axis. However, if I use that same calculation to compute the normal for the back face, then obviously the normal will be a vector pointing inside the shape. The result is that I have inaccurate normals for calculating the brightness of a light. I want the normal to face away from the model at all times.
I know there has to be a better way to calculate the normal, but I don't know what it is. Can anyone suggest another algorithm for calculating the normal that would get rid of this problem? If not, then there must be a way to check whether or not a normal is facing inside the object/model. If so, can you suggest it in your answer, along with where I could find an explanation of it, because I would love to have an intuition for how these methodologies work.
Most software packages obey a configurable cyclic ordering for triangle indices - clockwise or anti-clockwise. Thus all meshes they export have self-consistent ordering, and as long as your program uses the same convention, you should have nothing to worry about.
Having said that, I imagine you want to know what to do in the hypothetical (?) situation where the index ordering is inconsistent.
One method we could use is ray-intersection. The important theorem is that a ray with its source outside the mesh will only intersect the mesh an even number of times, and if inside, odd.
To do this, we can do the following:
Calculate the "normal" using the cross product as above (and normalize it) => N
Take any point on the triangle (preferably the midpoint)
Increment this point along the normal by some small epsilon value (depends on your floating point format and size of model - I'd say 1e-4 for single and 1e-8 for double precision) => P
Intersect this ray [dir = N, src = P] with all triangles in the mesh (a good algorithm for this would be Möller–Trumbore)
If the number of intersections is even, then the ray started from outside of the mesh; this means that the normal points outwards from the mesh (because you incremented its source from a point on the surface). - and of course, vice versa.
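As a concrete illustration of these steps, here is a hedged NumPy sketch using the naive loop over all triangles (the function names are mine, and the epsilon handling is simplified):

```python
import numpy as np

def ray_hits_triangle(src, d, v0, v1, v2, eps=1e-12):
    """Möller–Trumbore ray/triangle test; True if the ray hits with t > eps."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle's plane
        return False
    f = 1.0 / a
    s = src - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * np.dot(e2, q) > eps   # hit must lie in front of the source

def orient_normal(tri, mesh, eps=1e-4):
    """Return the outward-facing unit normal of `tri` via the parity test."""
    v0, v1, v2 = tri
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n)
    p = (v0 + v1 + v2) / 3.0 + eps * n          # midpoint, nudged along N
    hits = sum(ray_hits_triangle(p, n, *t) for t in mesh)
    return n if hits % 2 == 0 else -n           # even => p outside => n points out
```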
Minor (-ish?) digression: a naive approach to the above, of looping through all triangles in the mesh, would be O(n) per query - and hence the whole procedure would have quadratic time complexity. This is perfectly fine for very small meshes of ~20 triangles (e.g. a box), but not ideal for anything larger!
You can use spatial sub-division techniques to lower the cost of this intersection step:
K-D trees / Octrees: These require O(n log n) (for the best algorithm, that is - see Ingo Wald's paper) to construct, but intersections are guaranteed to be O(log n) if done properly. The overall complexity would then be O(n log n), which is pretty much the best you can get
Grid: This simply partitions the search space and triangles into smaller boxes. Construction is O(n) and much more memory-efficient. Intersection time is still O(n), but the constant factor is much smaller than that of the naive approach.
Cross products are not commutative, so v x u is not the same as u x v. In fact, they are exact opposites.
For the front face, you want to take u x v (assuming you're in a right-hand coordinate system), and the back face you want to cross v x u.
See right-hand rule for more info on how crossing vectors works.
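A two-line NumPy check makes the sign flip obvious:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
print(np.cross(u, v))   # [0. 0. 1.]   -> points along +z
print(np.cross(v, u))   # [ 0.  0. -1.] -> the exact opposite
```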
In my project, I represent geometry using splines. For physics and rendering I preprocess the splines and convert them into lines, and later polygons, by sampling the splines at a regular interval. However, I want to reduce the number of vertices/lines by ignoring samples that are already well enough represented by a line.
Coming up short when searching, I was wondering if there are any traditional techniques to convert a curve to a set of vertices while reducing the resulting error.
EDIT: To clarify, the result I want to end up with is a set of vertices/line segments that best represents the spline with the fewest vertices/line segments. I'm not sure how to define what "best represent the spline" really means, but the goal is to make it as hard as possible to distinguish the spline from the approximation.
This can be done by recursively refining any part of the curve that is not close enough to the segment between that part's endpoints.
Suppose we have a curve (spline) C:[0,1]->R^n. Then the first approximation is the segment S between the curve's end points, [C(0), C(1)]. Take the point C(0.5) and check how far it is from the segment S. If it is far, we have to include it in the discretization; if not, S is a good approximation. If C(0.5) is far, the next approximation is the polyline [C(0), C(0.5), C(1)], and we apply the same procedure to the parts [C(0), C(0.5)] and [C(0.5), C(1)].
If you are using a polynomial spline of order >= 3 (e.g. a cubic spline), it can have inflection point(s). In that case it is possible that the curve point at the halfway parameter 'falls' right on the segment while the curve around it is far from the segment, so it is good to check one more level of sub-parts.
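A small Python sketch of this refinement, assuming the spline is given as a callable C(t) returning a NumPy point (the max_depth cap and the function name are my own additions):

```python
import numpy as np

def flatten(C, t0, t1, tol, depth=0, max_depth=20):
    """Return parameter values whose polyline stays within tol of the curve."""
    a, b = C(t0), C(t1)
    tm = 0.5 * (t0 + t1)
    m = C(tm)
    chord = b - a
    L2 = np.dot(chord, chord)
    if L2 == 0.0:
        dist = np.linalg.norm(m - a)      # degenerate chord: endpoints coincide
    else:
        proj = a + chord * np.dot(m - a, chord) / L2
        dist = np.linalg.norm(m - proj)   # distance from C(tm) to the chord
    if dist <= tol or depth >= max_depth:
        return [t0, t1]                   # segment is a good enough approximation
    left = flatten(C, t0, tm, tol, depth + 1, max_depth)
    right = flatten(C, tm, t1, tol, depth + 1, max_depth)
    return left[:-1] + right              # drop the duplicated tm

# usage: ts = flatten(spline, 0.0, 1.0, tol=1e-3); verts = [spline(t) for t in ts]
```

To handle the inflection case mentioned above, a production version would subdivide at least one extra level before accepting a segment.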
This is entirely based on my own intuition, so I'm not sure if it coincides AT ALL with best practices. I do have a mathematics degree, so hopefully it's not too far off. I'll have you note that the computation involved may outstrip the performance gains from using fewer vertices if the spline needs to be recalculated frequently.
Let's say the vertices are in an array like [v(0), v(1), v(2),..., v(n)] where each v(i) is something like (x, y). By iterating over the vertices starting at v(1) and ending at v(n-1), we can compare a point with its neighbors in order to tell whether or not to discard it. Note that we ignore v(0) and v(n) for two reasons: (I assume) we don't want to remove our endpoints, and also v(0) and v(n) are missing a neighbor that we would need in order to set up our calculation. I can think of a couple possibilities here that might warrant examination, but one in particular seems (in my head) to be the best answer...
Consider the case where we're deciding whether or not to remove v(i) from the vertex array. We could examine the Cartesian distance between v(i) and its neighbors, and remove the point if both distances are below some threshold value T. For example, if v(i-1) = (x1, y1), v(i) = (x2, y2), and v(i+1) = (x3, y3), then we evaluate sqrt((x2-x1)^2 + (y2-y1)^2) < T && sqrt((x3-x2)^2 + (y3-y2)^2) < T, removing v(i) if the evaluation returns true.
In 3+ dimensions, this would become more complicated - the calculation would be similar, but you would require a method of determining a point's neighbors since they might not lie directly next to the examined point in the vertex array.
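In the 2D array case, the test above is only a few lines (a sketch; the function name is mine):

```python
import numpy as np

def prune_close_vertices(verts, T):
    """Remove v(i) when both |v(i)-v(i-1)| < T and |v(i+1)-v(i)| < T.

    verts: (n+1, 2) array [v(0), ..., v(n)]; endpoints are always kept."""
    verts = np.asarray(verts, dtype=float)
    keep = [0]
    for i in range(1, len(verts) - 1):
        d_prev = np.linalg.norm(verts[i] - verts[i - 1])
        d_next = np.linalg.norm(verts[i + 1] - verts[i])
        if not (d_prev < T and d_next < T):
            keep.append(i)
    keep.append(len(verts) - 1)
    return verts[keep]
```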
I'm looking for an algorithm to find the best fit between a cloud of points and a sphere.
That is, I want to minimise

E = sum_{i=1..n} ( ||P_i - C|| - r )^2

where C is the centre of the sphere, r its radius, and each P_i a point in my set of n points. The variables are obviously Cx, Cy, Cz, and r. In my case, I can obtain a known r beforehand, leaving only the components of C as variables.
I really don't want to have to use any kind of iterative minimisation (e.g. Newton's method, Levenberg-Marquardt, etc) - I'd prefer a set of linear equations or a solution explicitly using SVD.
There are no matrix equations forthcoming. Your choice of E is badly behaved; its partial derivatives are not even continuous, let alone linear. Even with a different objective, this optimization problem seems fundamentally non-convex; with one point P and a nonzero radius r, the set of optimal solutions is the sphere about P.
You should probably re-ask on an exchange with more optimization knowledge.
You might find the following reference interesting, but I would warn you that you will need some familiarity with geometric algebra - particularly conformal geometric algebra - to understand the mathematics. However, the algorithm is straightforward to implement with standard linear algebra techniques and is not iterative.
One caveat: the algorithm, at least as presented, fits both center and radius; you may be able to work out a way to constrain the fit so that the radius is fixed.
Total Least Squares Fitting of k-Spheres in n-D Euclidean Space Using an (n+2)-D Isometric Representation. L. Dorst, Journal of Mathematical Imaging and Vision, 2014, pp. 1-21.
You can pull in a copy from Leo Dorst's ResearchGate page.
One last thing, I have no connection to the author.
A short description of how to set up the matrix equation can be found here.
I've seen that the WildMagic library uses an iterative method (at least in version 4).
You may be interested by the best fit d-dimensional sphere, i.e. minimizing the variance of the population of the squared distances to the center; it has a simple analytical solution (matrix calculus): see the appendix of the open access paper of Cerisier et al. in J. Comput. Biol. 24(11), 1134-1137 (2017), https://doi.org/10.1089/cmb.2017.0061
It works when the data points are weighted (it works even for continuous distributions; as a by-product, when d=1, a well-known inequality is retrieved: the kurtosis is always greater than the squared skewness plus 1).
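I have not reproduced the paper's derivation here, but setting the gradient of Var_i(||p_i||^2 - 2 p_i.c) to zero gives a small linear system; here is a sketch of the unweighted case (my own transcription of the stated criterion, not code from the paper):

```python
import numpy as np

def sphere_center_min_var(p):
    """Center c minimizing the variance of the squared distances ||p_i - c||^2.

    Since ||p_i - c||^2 = ||p_i||^2 - 2 p_i.c + ||c||^2 and the ||c||^2 term is
    a constant shift, minimizing the variance reduces to the linear system
    2 Cov(p, p) c = Cov(p, ||p||^2)."""
    p = np.asarray(p, dtype=float)
    q = (p ** 2).sum(axis=1)        # squared norms ||p_i||^2
    pc = p - p.mean(axis=0)         # centered points
    qc = q - q.mean()               # centered squared norms
    return np.linalg.solve(2.0 * pc.T @ pc, pc.T @ qc)

# radius from the center: r = np.sqrt(np.mean(np.sum((p - c)**2, axis=1)))
```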
Difficult to do this without iteration.
I would proceed as follows:
(1) find the overall midpoint, by averaging the (X,Y,Z) coords of all points
(2) with that result, find the average distance Ravg to the midpoint; decide whether it is OK or whether to proceed
(3) remove points from your set whose distance is too far from the Ravg found in step (2)
(4) go back to step (1) (average the points again, yielding a better midpoint)
Of course, this will require some conditions for (2) and (4) that depend on the quality of your point cloud!
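A sketch of this loop in NumPy (the relative-tolerance trimming rule in steps (2)-(3) is my own choice; as noted, it depends on the cloud):

```python
import numpy as np

def trim_fit_center(points, rel_tol=0.1, max_rounds=20):
    """Average-and-trim loop: midpoint, mean radius, drop outliers, repeat."""
    pts = np.asarray(points, dtype=float)
    center, r_avg = pts.mean(axis=0), 0.0
    for _ in range(max_rounds):
        center = pts.mean(axis=0)                      # step (1): midpoint
        d = np.linalg.norm(pts - center, axis=1)
        r_avg = d.mean()                               # step (2): average distance
        keep = np.abs(d - r_avg) <= rel_tol * r_avg    # step (3): trim outliers
        if keep.all() or keep.sum() < 4:               # converged, or too few points left
            break
        pts = pts[keep]                                # step (4): loop
    return center, r_avg
```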
Ian Coope has an interesting algorithm in which he linearized the problem using a change of variable. The fit is quite robust, and although it slightly redefines the condition of optimality, I've found it to be generally visually better, especially for noisy data.
A preprint of Coope's paper is available here: https://ir.canterbury.ac.nz/bitstream/handle/10092/11104/coope_report_no69_1992.pdf.
I found the algorithm to be very useful, so I implemented it in scikit-guess as skg.nsphere_fit. Let's say you have an (m, n) array p, consisting of m points of dimension n (here n=3):
r, c = skg.nsphere_fit(p)
The radius, r, is a scalar and c is an n-vector containing the center.
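If you prefer not to take the dependency, the linearization itself is only a few lines. Here is a minimal NumPy sketch of the change of variables (my transcription of Coope's idea, not the scikit-guess source):

```python
import numpy as np

def sphere_fit_coope(p):
    """Coope's linearized sphere fit: returns (r, c) for an (m, n) array p.

    ||p_i - c||^2 = r^2  rewrites to  2 p_i.c + t = ||p_i||^2
    with t = r^2 - ||c||^2, which is linear in (c, t)."""
    p = np.asarray(p, dtype=float)
    B = np.hstack([2.0 * p, np.ones((p.shape[0], 1))])  # unknowns (c, t)
    y = (p ** 2).sum(axis=1)
    z, *_ = np.linalg.lstsq(B, y, rcond=None)           # least-squares solve
    c = z[:-1]
    r = np.sqrt(z[-1] + c @ c)                          # undo the change of variables
    return r, c
```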
I have two shapes which are cross sections of a channel. I want to calculate the cross section at an intermediate point between the two defined sections.
What's the simplest (relatively simple?) algorithm to use in this situation?
P.S.: I came across several algorithms like natural neighbor and Poisson, which seemed complex. I'm looking for a simple solution that can be implemented quickly.
EDIT: I removed the word "Simplest" from the title since it might be misleading
This is simple:
On each cross section draw N points at evenly spaced intervals along the boundary of the cross-section.
Draw straight lines from the n-th point on cross-section 1 to the n-th point on cross-section 2.
Read off your new cross-section at the desired distance between the old cross-sections.
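If both sections are already resampled to N corresponding boundary points, the last step is a plain linear interpolation (a sketch; setting up the point correspondence is assumed done):

```python
import numpy as np

def blend_sections(sec1, sec2, s):
    """Cross-section at fractional distance s in [0, 1] from sec1 to sec2.

    sec1, sec2: (N, 2) arrays where point n of sec1 corresponds to point n
    of sec2 (evenly spaced along each boundary)."""
    return (1.0 - s) * np.asarray(sec1, float) + s * np.asarray(sec2, float)
```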
Simpler still:
Use one of the existing cross-sections without modification.
This second suggestion might be too simple I suppose, but I bet no-one suggests a simpler one!
EDIT following OP's comment: (too much for a re-comment)
Well, you did ask for a simple method! I'm not sure I see the same problem with the first method as you do. If the cross-sections are not too weird (probably best if they are convex polygons) and you don't do anything strange such as mapping the left side of one cross-section to the right side of the other (thereby forcing lots of crossing lines), then the method should produce some kind of sensible cross-section. In the case you suggest of a triangle and a rectangle, suppose the triangle is sitting on its base, one vertex at the top. Map that vertex to, say, the top left corner of the rectangle, then proceed in the same direction (clockwise or anti-clockwise) around the boundaries of both cross-sections, joining corresponding points. I don't see any crossing lines, and I see a well-defined shape at any distance between the two cross-sections.
Note there are some ambiguities in High Performance Mark's answer that you will probably need to address, and these will define the quality of the output of his method. The most important one is: when you draw the n points on both cross-sections, what sort of correspondence do you determine between them? That is, if you do it the way High Performance Mark suggested, the order in which you label the points becomes important.
I suggest sweeping a rotating (orthogonal) plane simultaneously through both cross-sections; then the set of points which intersect that plane on one cross-section just needs to be matched to the set of points that intersect it on the other cross-section. Hypothetically, there is no limit on the number of points in these sets, but it certainly reduces the complexity of the correspondence problem compared with the original situation.
Here is another try at the problem, which I think is a much better attempt.
Given the two cross-sections C_1, C_2
Place each C_i into a global reference frame with coordinate system (x,y) so that the way they are situated relative to each other makes sense. Split each C_i into an upper and a lower curve, U_i and L_i. The idea is that you will continuously deform curve U_1 to U_2 and L_1 to L_2. (Note you can extend this method to split each C_i into m curves if you wish.)
The way to do this is as follows. For each T_i = U_i or L_i, sample n points and determine the interpolating polynomial P{T_i}(x). As someone duly noted below, interpolating polynomials are susceptible to oscillation, especially at the endpoints; instead of the interpolating polynomial, one may use the least-squares fit polynomial, which is much more robust. Then define the deformation from P{U_1}(x) = a_0 + a_1 * x + ... + a_n * x^n to P{U_2}(x) = b_0 + b_1 * x + ... + b_n * x^n as Q{P{U_1},P{U_2}}(x, t) = (t * a_0 + (1-t) * b_0) + ... + (t * a_n + (1-t) * b_n) * x^n, where the deformation Q is defined over 0 <= t <= 1 and t defines how far along the deformation we are (i.e. at t=1 we are at U_1, at t=0 we are at U_2, and at every other t we are at some continuous blend of the two).
The exact same follows for Q{P{L_1},P{L_2}}(x, t). These two deformations construct you a continuous representation between the two cross-sections which you can sample at any t.
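As a sketch of this in NumPy, using the least-squares polynomial mentioned above (the degree and sampling are up to you; the function name is mine):

```python
import numpy as np

def deform_curve(x1, y1, x2, y2, t, deg):
    """Blend two sampled curve pieces by interpolating polynomial coefficients.

    Fits least-squares polynomials of degree deg to both pieces, then returns
    Q(., t) with coefficients t*a_k + (1-t)*b_k, so t=1 recovers curve 1 and
    t=0 recovers curve 2."""
    a = np.polyfit(x1, y1, deg)     # coefficients of P{T_1}
    b = np.polyfit(x2, y2, deg)     # coefficients of P{T_2}
    return np.poly1d(t * a + (1.0 - t) * b)

# usage: Q = deform_curve(xu1, yu1, xu2, yu2, t=0.5, deg=5); y = Q(x)
```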
Note all this is really doing is linearly interpolating the coefficients of the fitted polynomials of the two pieces of both cross-sections. Note also that when splitting the cross-sections you should probably add the constraint that they be split at end points that match up, otherwise you may have "holes" in your deformation.
I hope that's clear.
edit: addressed the issue of oscillation in interpolating polynomials.