geometric algebra, complex vector space, hyperplane reflection - computational-geometry

In relation to some audio processing I would like to do a hyperplane reflection in a complex vector space using Clifford geometric algebra. I have been looking on the web for a few weeks but have not really found an answer. For a long time I have used hyperplane reflections in n-dimensional real vector spaces done with geometric algebra (every basis element e1, e2, ..., en squares to +1). Vector 'a' is normalized to length 1. Vector 'x' reflected in the hyperplane orthogonal to vector 'a' can be computed as x -> -axa. I have no problems here; I have thoroughly tested and used my functions. I am trying to write a similar function for vectors 'a' and 'x' both being complex of dimension n. My guess is x -> -ax(a_conjugate), with ||a|| = 1. But this is not right / it does not work. Any hints towards a formula or good sources on the subject are most welcome. Thanks very much in advance.
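For concreteness, in the real case the sandwich -axa acting on a vector x with unit a agrees with the classical Householder form x - 2(a.x)a; for the complex case, the standard complex Householder reflection from numerical linear algebra (x -> x - 2<a,x>a with the Hermitian inner product <a,x> = conj(a).x) is one well-behaved candidate, though it is not necessarily the GA sandwich being asked about. A minimal numpy sketch of both:

import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal(n)
a /= np.linalg.norm(a)                       # ||a|| = 1
x = rng.standard_normal(n)

# Real case: -axa on a vector equals the Householder form x - 2(a.x)a.
reflected = x - 2.0 * np.dot(a, x) * a
assert np.isclose(np.dot(a, reflected), -np.dot(a, x))   # a-component flips

# Complex case (hedged guess, not confirmed by the question): the standard
# complex Householder reflection uses the Hermitian inner product
# <a, x> = conj(a).x; H = I - 2*a*a^H is unitary, so lengths are preserved.
ac = rng.standard_normal(n) + 1j * rng.standard_normal(n)
ac /= np.linalg.norm(ac)
xc = rng.standard_normal(n) + 1j * rng.standard_normal(n)
reflected_c = xc - 2.0 * np.vdot(ac, xc) * ac
assert np.isclose(np.vdot(ac, reflected_c), -np.vdot(ac, xc))
assert np.isclose(np.linalg.norm(reflected_c), np.linalg.norm(xc))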

Related

How to find the intersection point of a 3D curve and a 3D surface?

I am trying to find the intersection point of a curve and a 3D surface with no luck. The surface is in the shape of a cone, and the curve is hyperbolic, as are shown in the figure.
[Figure: the cone and the curve]
This simulates a ray hitting a certain surface. I tried to use the bisection method, but it doesn't seem to work. Then I tried Newton's method, but the results are still not good.
Is there any other good algorithms out there which are suitable for solving this kind of problem?
With the curve given in parametric form
x = fx(t)
y = fy(t)
z = fz(t)
and the surface by one equation of the form
g(x,y,z) = 0
just plug in the curve functions and bisection should work:
g(fx(t), fy(t), fz(t)) = 0
The only problem is to find suitable starting points t1 and t2 where g has opposite sign.
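A minimal sketch of this approach, with made-up parametrizations for the curve and an implicit cone (substitute your own fx, fy, fz and g); the scan over t is just one way to find sign-change brackets:

import numpy as np
from scipy.optimize import brentq

def fx(t): return np.cosh(t)
def fy(t): return 0.5 * np.sinh(t)
def fz(t): return 2.0 + 0.3 * t

def g(x, y, z): return x**2 + y**2 - z**2     # cone about the z-axis

def q(t): return g(fx(t), fy(t), fz(t))       # single equation in t

# Scan for sign changes to get brackets, then bisect each bracket.
ts = np.linspace(-5.0, 5.0, 200)
qs = q(ts)
for i in np.where(np.sign(qs[:-1]) != np.sign(qs[1:]))[0]:
    t = brentq(q, ts[i], ts[i + 1])           # robust bisection-type solver
    print(t, (fx(t), fy(t), fz(t)))           # intersection point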
Problem
You are searching for a curve-surface intersection algorithm. Note that both curves and surfaces can be represented either in implicit form or in parametric form. A surface in implicit form is defined by an equation F(x, y, z) = 0, which is a quadratic polynomial in x, y, z in the case of a conic surface. A surface in parametric form is defined by a point-valued function S(u, v) of its parameters (e.g. you can use the distance along the cone axis and the polar angle as parameters of a conical surface). A curve is usually described only in parametric form, as a function C(t) of a parameter t, which could be quadratic for a hyperbolic curve.
Implicit surface
The simplest case of all is to treat your problem as the intersection of a parametric curve with an implicit surface. In this case you can write down a single equation q(t) = F(C(t)) = 0 in the single variable t. Of course, Newton's iteration is not guaranteed to find all solutions in the general case, and bisection can only surely find a solution if you find two points where q(t) has opposite signs.
In your case q(t) is a quartic polynomial (after putting the quadratic curve parametrization into the quadratic surface equation). It can in theory be solved with Ferrari's analytic formula, but I strongly advise against it, because it is quite unstable numerically. You can apply any popular polynomial solver here, like the Jenkins-Traub algorithm or the eigenvalue algorithm for the companion matrix (also see this question). You can also use methods of interval mathematics: for example, you can recursively subdivide the domain interval of the parameter t into smaller pieces, while pruning all the pieces that surely do not contain zeros (interval arithmetic would help you detect such pieces).
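For instance, numpy's np.roots already implements the companion-matrix eigenvalue approach; a small sketch with placeholder coefficients standing in for the actual quartic q(t):

import numpy as np

# Placeholder quartic coefficients; in practice they come from
# substituting the quadratic curve into the quadratic surface equation.
coeffs = [1.0, -2.0, -5.0, 6.0, 1.0]       # q(t), highest degree first
roots = np.roots(coeffs)                   # companion-matrix eigenvalues
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(real_roots)                          # keep those inside the curve's domain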
Parametric surface
Now we can move on to the case where both the curve and the surface are represented parametrically. I do not know of any solutions that could benefit from the fact that your surface is conical and your curve is hyperbolic, so you have to apply a general curve-surface intersection algorithm. Alternatively, you can fit an implicitly-defined cone to your parametric surface, then use the quartic-root solution above.
A lot of reliable general intersection algorithms are based on the subdivision method (which is actually interval mathematics again). The general idea is to repeatedly divide the curve and the surface into smaller and smaller pieces. Pairs of pieces which surely do not intersect are dropped as soon as possible. At the end you'll have a set of small piece pairs, tightly bounding your intersection points. You might want to run Newton's iteration from them in order to make the intersection points precise.
Here is the outline of a sample algorithm:
1. Start with a single curve piece (the whole input curve) and a single surface piece (the whole surface), and one potential intersection pair (PIP) of these pieces.
2. Subdivide each curve piece into two halves (by parameter); subdivide each surface piece into four quadrants (by both parameters).
3. For each old PIP, check all 8 pairs of curve half vs. surface quadrant. If a pair surely does not intersect, forget it. If it can intersect, save it as a new PIP.
4. Unless all pieces are small enough, repeat from step 2 with the new pieces and PIPs.
For each pair of curve piece and surface piece, you have to check whether they can potentially intersect, which can be easily done by checking their axis-aligned bounding boxes. Also, you can represent your curves and surfaces as NURBS, in which case you can use convex hulls as tighter bounding volumes.
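A rough sketch of this subdivision loop, with the caveat that the bounding boxes below are estimated by sampling the pieces, which is not rigorous (a real implementation would use interval arithmetic or the control-point hulls mentioned above); it only shows the structure of the algorithm:

import numpy as np

def curve_box(C, t0, t1, k=8):
    # crude AABB of a curve piece, estimated from k sample points
    pts = np.array([C(t) for t in np.linspace(t0, t1, k)])
    return pts.min(axis=0), pts.max(axis=0)

def surface_box(S, u0, u1, v0, v1, k=5):
    # crude AABB of a surface piece, estimated on a k x k sample grid
    pts = np.array([S(u, v) for u in np.linspace(u0, u1, k)
                            for v in np.linspace(v0, v1, k)])
    return pts.min(axis=0), pts.max(axis=0)

def overlap(b1, b2, pad=1e-9):
    return bool(np.all(b1[0] <= b2[1] + pad) and np.all(b2[0] <= b1[1] + pad))

def subdivide(C, S, t_rng, u_rng, v_rng, eps=1e-3, max_depth=24):
    pips = [(t_rng, u_rng, v_rng)]            # potential intersection pairs
    for _ in range(max_depth):
        new_pips = []
        for (t0, t1), (u0, u1), (v0, v1) in pips:
            tm, um, vm = 0.5*(t0+t1), 0.5*(u0+u1), 0.5*(v0+v1)
            for ta, tb in ((t0, tm), (tm, t1)):          # 2 curve halves
                cb = curve_box(C, ta, tb)
                for ua, ub in ((u0, um), (um, u1)):      # 4 surface quadrants
                    for va, vb in ((v0, vm), (vm, v1)):
                        if overlap(cb, surface_box(S, ua, ub, va, vb)):
                            new_pips.append(((ta, tb), (ua, ub), (va, vb)))
        pips = new_pips
        if all(tb - ta < eps for (ta, tb), _, _ in pips):
            break
    return pips    # tight piece pairs; polish with Newton from their midpoints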
Generally, there are tons of variations and improvements of this algorithm. I advise the following literature for deeper knowledge:
Shape interrogation for computer-aided design and manufacturing.
chapter 4: for root solvers
section 5.7: for curve-surface intersection
The PhD thesis of Michael Hohmeyer:
section 4.5: for curve-surface intersection
sections 4.1 and 4.2: for convex hulls intersection (if you are brave enough).
Bottom line
If you are looking for a simple and working solution, and you are sure that hyperbolas and cones are the only things you have to worry about, then you'd better use the implicit definition of the cone and solve the quartic equation with some standard numerical algorithm from a good library available to you.

Linear programming algorithm

Consider the following algorithm for linear programming, minimizing [c,x] with A.x <= b.
(1) Start with a feasible point x_0
(2) Given a feasible point x_k, find the greatest alpha such that x_k - alpha.c is admissible (straightforward: look at the ratios of the components of b - A.x_k to those of A.c)
(3) Take the normal unit vector n to the hyperplane we just reached, pointing inwards. Project n onto the plane [c,.] = const, giving r = n - [n,c]/[c,c].c, then look for the greatest beta for which x_k - alpha.c + beta.r is admissible. Set x_{k+1} = x_k - alpha.c + 1/2*beta.r.
(4) If x_{k+1} is close enough to x_k within tolerance, return it; otherwise go to (2).
The basic idea is to follow the gradient until one hits a wall. Then, rather than following the shell of the simplex, as the simplex algorithm would do, the solution is kicked back inside the simplex, onto a plane where the solutions are no worse, in the direction of the normal vector. The solution moves halfway between the starting point and the next constraint in this direction. It's no worse than before, but now it's more "inside" the simplex, where it has a shot at taking long leaps towards the optimum.
Though the probability of hitting an intersection of more than one hyperplane is 0, if one gets close enough to multiple hyperplanes within a certain tolerance, the average of their normals may be taken.
This can be generalized to any convex objective function by following geodesics on the levels of the function. For quadratic programming in particular, one rotates the solution towards the inside of the simplex.
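In code, one iteration of steps (2)-(3) might look like this (a sketch only; helper names are mine, and degenerate cases such as unbounded directions or ties between constraints are ignored):

import numpy as np

def max_step(A, b, x, d, eps=1e-12):
    # Largest alpha >= 0 with A @ (x + alpha*d) <= b, plus the blocking row.
    Ad, slack = A @ d, b - A @ x
    rows = np.where(Ad > eps)[0]          # rows that can become violated
    if rows.size == 0:
        return np.inf, None               # unbounded in this direction
    ratios = slack[rows] / Ad[rows]
    j = np.argmin(ratios)
    return ratios[j], rows[j]

def step(A, b, c, x):
    alpha, i = max_step(A, b, x, -c)      # (2) follow -c until we hit a wall
    x1 = x - alpha * c
    n = -A[i] / np.linalg.norm(A[i])      # (3) inward unit normal of that wall
    r = n - (n @ c) / (c @ c) * c         #     project n onto the plane [c,.]
    beta, _ = max_step(A, b, x1, r)
    return x1 + 0.5 * beta * r            # kick back halfway inside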
Questions:
Does this algorithm have a name or fall within a category of linear-programming algorithms?
Does it have an obvious flaw that I'm overlooking?
I am pretty sure this doesn't work, unless I'm missing something: your algorithm will not start moving in most cases.
Assume your variable x is taken in R^n.
A polyhedron of the form Ax <= b is contained in a 'maximal' affine subspace P of dimension p <= n, and usually p is much smaller than n (you will have a lot of equality constraints, which can be implicit or explicit: you cannot assume that the expression of P is simple to obtain from A and b).
Now assume you can find an initial point x_0 (which is far from obvious, by the way); there is very little chance that the direction of the gradient c is a feasible direction. You would need to consider the projection of the direction c onto P, and this is very difficult to do in practice (how would you compute such a projection?).
Then, what you want in your step (3) is not the normal direction of the hyperplane you reached, but again its projection onto P (visualizing the polyhedron as a 2D polyhedron embedded in 3D space can help).
There is a very good reason why barrier functions are used in interior point methods: it is very difficult in practice to describe the geometry of high-dimensional convex sets from the constraints (even simple ones like polyhedra), and things that "seem obvious" when you draw a picture on a piece of paper usually will not work when the dimension of the polyhedron increases.
One last point is that your algorithm would not give the exact solution, whereas the simplex method does in theory (and I have read somewhere that it can be done in practice by working with exact rational numbers).
Read up on interior point methods: http://en.wikipedia.org/wiki/Interior_point_method
This approach can have nice theoretical properties, but the algorithm's performance tends to tail off in practice.

Linear Least Squares Fit of Sphere to Points

I'm looking for an algorithm to find the best fit between a cloud of points and a sphere.
That is, I want to minimise
E = sum_i (||P_i - C|| - r)^2
where C is the centre of the sphere, r its radius, and each P_i a point in my set of n points. The variables are obviously Cx, Cy, Cz, and r. In my case, I can obtain a known r beforehand, leaving only the components of C as variables.
I really don't want to have to use any kind of iterative minimisation (e.g. Newton's method, Levenberg-Marquardt, etc) - I'd prefer a set of linear equations or a solution explicitly using SVD.
There are no matrix equations forthcoming. Your choice of E is badly behaved; its partial derivatives are not even continuous, let alone linear. Even with a different objective, this optimization problem seems fundamentally non-convex; with one point P and a nonzero radius r, the set of optimal solutions is the sphere about P.
You should probably reask on an exchange with more optimization knowledge.
You might find the following reference interesting, but I would warn you that you will need some familiarity with geometric algebra, particularly conformal geometric algebra, to understand the mathematics. However, the algorithm is straightforward to implement with standard linear algebra techniques and is not iterative.
One caveat: the algorithm, at least as presented, fits both center and radius; you may be able to work out a way to constrain the fit so the radius is fixed.
Total Least Squares Fitting of k-Spheres in n-D Euclidean Space Using an (n+2)-D Isometric Representation. L. Dorst, Journal of Mathematical Imaging and Vision, 2014, pp. 1-21.
You can pull a copy from Leo Dorst's ResearchGate page.
One last thing, I have no connection to the author.
A short description of how to set up the matrix equation can be found here.
I've seen that the WildMagic library uses an iterative method (at least in version 4).
You may be interested in the best-fit d-dimensional sphere, i.e. minimizing the variance of the population of squared distances to the center; it has a simple analytical solution (matrix calculus): see the appendix of the open-access paper of Cerisier et al. in J. Comput. Biol. 24(11), 1134-1137 (2017), https://doi.org/10.1089/cmb.2017.0061
It works when the data points are weighted (it works even for continuous distributions); as a by-product, when d=1, a well-known inequality is recovered: the kurtosis is always greater than the squared skewness plus 1.
Difficult to do this without iteration.
I would proceed as follows (a rough numpy sketch follows the list):
1. Find the overall midpoint, by averaging the (X,Y,Z) coords of all points.
2. With that result, find the average distance Ravg to the midpoint; decide whether it is OK or whether to proceed.
3. Remove points from your set whose distance is too far from the Ravg found in step 2.
4. Go back to step 1 (average the points again, yielding a better midpoint).
Of course, this will require some conditions for (2) and (4) that depend on the quality of your point cloud!
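Here is that rough sketch; the stopping rule and the outlier threshold k are placeholder choices you would tune to your data:

import numpy as np

def fit_sphere_rough(points, k=2.0, tol=1e-9, max_iter=50):
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        center = pts.mean(axis=0)                   # step 1: midpoint
        d = np.linalg.norm(pts - center, axis=1)
        r_avg = d.mean()                            # step 2: average distance
        keep = np.abs(d - r_avg) < k * d.std()      # step 3: drop outliers
        if keep.all() or d.std() < tol:             # "decide ok" condition
            break
        pts = pts[keep]                             # step 4: repeat
    return center, r_avg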
Ian Coope has an interesting algorithm in which he linearized the problem using a change of variable. The fit is quite robust, and although it slightly redefines the condition of optimality, I've found it to be generally visually better, especially for noisy data.
A preprint of Coope's paper is available here: https://ir.canterbury.ac.nz/bitstream/handle/10092/11104/coope_report_no69_1992.pdf.
I found the algorithm to be very useful, so I implemented it in scikit-guess as skg.nsphere_fit. Let's say you have an (m, n) array p, consisting of m points of dimension n (here n=3):
r, c = skg.nsphere_fit(p)
The radius, r, is a scalar and c is an n-vector containing the center.
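If you would rather not pull in the dependency, the same linearization fits in a few lines of numpy (a sketch: ||P - C||^2 = r^2 rearranges to 2 P.C + (r^2 - ||C||^2) = ||P||^2, which is linear in C and t = r^2 - ||C||^2):

import numpy as np

def sphere_fit_linear(p):
    p = np.asarray(p, dtype=float)                 # (m, n) array of points
    A = np.hstack([2.0 * p, np.ones((len(p), 1))]) # unknowns: C and t
    y = (p ** 2).sum(axis=1)                       # ||P||^2 per point
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    c, t = sol[:-1], sol[-1]
    return np.sqrt(t + c @ c), c                   # radius, center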

Algorithm for 2D Interpolation

I have two shapes which are cross sections of a channel. I want to calculate the cross section of an intermediate point between the two defined points.
What's the simplest (relatively simple?) algorithm to use in this situation?
P.S.: I came across several algorithms like natural neighbor and Poisson, which seemed complex. I'm looking for a simple solution which could be implemented quickly.
EDIT: I removed the word "Simplest" from the title since it might be misleading
This is simple:
On each cross section draw N points at evenly spaced intervals along the boundary of the cross-section.
Draw straight lines from the n-th point on cross-section 1 to the n-th point on cross-section 2.
Read off your new cross-section at the desired distance between the old cross-sections (a rough code sketch of this method follows the edit below).
Simpler still:
Use one of the existing cross-sections without modification.
This second suggestion might be too simple I suppose, but I bet no-one suggests a simpler one !
EDIT following OP's comment: (too much for a re-comment)
Well, you did ask for a simple method ! I'm not sure I see the same problem with the first method as you do. If the cross sections are not too weird (probably best if they are convex polygons) and you don't do anything strange such as map the left side of one cross-section to the right side of the other (thereby forcing lots of crossing lines) then the method should produce some kind of sensible cross section. In the case you suggest of a triangle and a rectangle, suppose the triangle is sitting on its base, one vertex at the top. Map that point to, say, the top left corner of the rectangle, then proceed in the same direction (clockwise or anti-clockwise) around the boundaries of both cross-sections joining corresponding points. I don't see any crossing lines, and I see a well-defined shape at any distance between the two cross-sections.
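Here is the rough sketch of the first method, for closed polygonal cross-sections; picking a sensible starting point and orientation for each boundary (the correspondence issue raised in the next answer) is left to the caller:

import numpy as np

def resample_closed(poly, n):
    # Resample a closed polygon (m, 2) at n evenly spaced arc-length positions.
    closed = np.vstack([poly, poly[:1]])               # repeat first vertex
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    si = np.linspace(0.0, s[-1], n, endpoint=False)
    return np.column_stack([np.interp(si, s, closed[:, j]) for j in range(2)])

def intermediate_section(c1, c2, frac, n=100):
    # Cross-section at fraction frac (0..1) of the way from c1 to c2.
    a = resample_closed(np.asarray(c1, dtype=float), n)
    b = resample_closed(np.asarray(c2, dtype=float), n)
    return (1.0 - frac) * a + frac * b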
Note there are some ambiguities in High Performance Mark's answer that you will probably need to address, and they will define the quality of its output. The most important one is: when you draw the n points on both cross-sections, what sort of correspondence do you determine between them? That is, if you do it the way High Performance Mark suggested, the order in which you label the points becomes important.
I suggest rotating an (orthogonal) plane simultaneously through both cross-sections; then the set of points which intersect that plane on one cross-section just needs to be matched to the set of points that intersect that plane on the other cross-section. Hypothetically, there is no limit on the number of points in these sets, but it certainly reduces the complexity of the correspondence problem compared with the original situation.
Here is another try at the problem, which I think is a much better attempt.
Given the two cross-sections C_1, C_2
Place each C_i into a global reference frame with coordinate system (x,y) so that the way they are relatively situated makes sense. Split each C_i into an upper and lower curve U_i and L_i. The idea is going to be that you will want to continuously deform curve U_1 to U_2 and L_1 to L_2. (Note you can extend this method to split each C_i into m curves if you wish.)
The way to do this is as follows. For each T_i = U_i or L_i, sample n points and determine the interpolating polynomial P{T_i}(x). As someone duly noted below, interpolating polynomials are susceptible to oscillation, especially at the endpoints; instead of the interpolating polynomial, one may use the least-squares fit polynomial, which is much more robust. Then define the deformation of the polynomial P{U_1}(x) = a_0 + a_1 * x + ... + a_n * x^n to P{U_2}(x) = b_0 + b_1 * x + ... + b_n * x^n as Q{P{U_1},P{U_2}}(x, t) = (t * a_0 + (1 - t) * b_0) + (t * a_1 + (1 - t) * b_1) * x + ... + (t * a_n + (1 - t) * b_n) * x^n, where the deformation Q is defined over 0 <= t <= 1, with t giving the stage of the deformation (i.e. at t=0 we are at U_2, at t=1 we are at U_1, and at every other t we are at some continuous deformation between the two).
The exact same follows for Q{P{L_1},P{L_2}}(x, t). These two deformations construct you a continuous representation between the two cross-sections which you can sample at any t.
Note all this is really doing is linearly interpolating the coefficients of the fitted polynomials of the two pieces of both cross-sections. Note also that when splitting the cross-sections you should probably add the constraint that they must be split at endpoints that match up; otherwise you may have "holes" in your deformation.
I hope that's clear.
edit: addressed the issue of oscillation in interpolating polynomials.
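A small sketch of the deformation for one pair of curves, using least-squares fits per the edit (array names and the degree are placeholder choices):

import numpy as np

def deform(u1, u2, t, degree=5, num=200):
    # Blend the least-squares polynomial fits of two curves at stage t
    # (t=1 gives the fit of u1, t=0 the fit of u2, matching the text above).
    # u1, u2 are (m, 2) arrays of (x, y) samples; each curve must be a
    # function of x, which the upper/lower split is meant to guarantee.
    a = np.polyfit(u1[:, 0], u1[:, 1], degree)     # P{U_1} coefficients
    b = np.polyfit(u2[:, 0], u2[:, 1], degree)     # P{U_2} coefficients
    coeff = t * a + (1.0 - t) * b                  # Q(x, t): blended coefficients
    x = np.linspace(min(u1[:, 0].min(), u2[:, 0].min()),
                    max(u1[:, 0].max(), u2[:, 0].max()), num)
    return x, np.polyval(coeff, x)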

What are eigenvalues and expansions?

What are eigenvalues, eigenvectors, and eigen expansions, and as an algorithm designer how can I use them?
EDIT: I want to know how YOU have used it in your program so that I get an idea. Thanks.
They're used for a lot more than matrix algebra. Examples include:
The asymptotic state distribution of a hidden Markov model is given by the left eigenvector of the state transition matrix associated with the eigenvalue of unity (a small sketch follows this list).
One of the best & fastest methods of finding community structure in a network is to construct what's called the modularity matrix (which basically measures how "surprising" a connection between two nodes is); the signs of the elements of the eigenvector associated with the largest eigenvalue then tell you how to partition the network into two communities.
In principal component analysis you essentially select the eigenvectors associated with the k largest eigenvalues of the n >= k dimensional covariance matrix of your data and project your data down to the k-dimensional subspace. Using the largest eigenvalues ensures that you're retaining the dimensions that are most significant to the data, since they are the ones with the greatest variance.
Many methods of image recognition (e.g. facial recognition) rely on building an eigenbasis from known data (a large set of faces) and seeing how difficult it is to reconstruct a target image using the eigenbasis: if it's easy, then the target image is likely to be from the set the eigenbasis describes (i.e. eigenfaces easily reconstruct faces, but not cars).
If you're into scientific computing, the eigenvectors of a quantum Hamiltonian are the states that are stable, in that if a system is in an eigenstate at time t1, then at time t2 > t1, if it hasn't been disturbed, it will still be in that eigenstate. Also, the eigenvector associated with the smallest eigenvalue of a Hamiltonian is the ground state of the system.
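Here is the small sketch promised in the first bullet, with a made-up 2-state transition matrix:

import numpy as np

T = np.array([[0.9, 0.1],          # made-up row-stochastic transition matrix
              [0.5, 0.5]])
vals, vecs = np.linalg.eig(T.T)    # left eigenvectors of T
i = np.argmin(np.abs(vals - 1.0))  # pick the eigenvalue closest to 1
pi = np.real(vecs[:, i])
pi /= pi.sum()                     # normalize into a distribution
print(pi)                          # -> approx [0.833, 0.167]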
Eigenvectors and their corresponding eigenvalues are mainly used to switch between different coordinate systems. This can simplify problems and computations enormously by moving the problem from one coordinate system to another.
This new coordinate system has the eigenvectors as its basis vectors, i.e. they "span" it. For a symmetric operator the eigenvectors are perpendicular to each other, and since they can be normalized, the transformation matrix from the first coordinate system is "orthonormal": the eigenvectors have magnitude 1 and are mutually perpendicular.
In the transformed coordinate system, the linear operator A (its matrix) is purely diagonal. See the Spectral Theorem and Eigendecomposition for more information.
A quick implication is, for example, that you can take a general quadratic curve:
ax^2 + 2bxy + cy^2 + 2dx + 2fy + g = 0
rewrite it as
AX^2 + BY^2 + C = 0
where X and Y are counted along the direction of the eigen vectors.
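A small numpy sketch of this conic example (the coefficients are arbitrary):

import numpy as np

a, b, c = 3.0, 1.0, 2.0                  # arbitrary coefficients
M = np.array([[a, b],
              [b, c]])                   # matrix of the quadratic part
eigvals, eigvecs = np.linalg.eigh(M)     # symmetric, so use eigh
A, B = eigvals                           # coefficients of X^2 and Y^2
# In rotated coordinates (X, Y) = eigvecs.T @ (x, y) the cross term
# 2bxy vanishes, leaving A*X^2 + B*Y^2 (plus the transformed linear part).
print(A, B)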
Cheers !
check out http://mathworld.wolfram.com/Eigenvalue.html
Using eigenvalues in algorithms requires you to be proficient in the math involved.
I'm absolutely the wrong person to be talking about math: I puke on it.
cheers, jrh.
Eigenvalues and eigenvectors are used in matrix computations such as finding the inverse of a matrix, so if you need to write math code, precomputing them can speed up some operations.
In short, you need them if you do matrix algebra, linear algebra etc.
Using the notation favored by physicists, if we have an operator H, then |x> is an eigenstate of H if and only if
H|x> = h|x>
where we call h the eigenvalue associated with the eigenvector |x> under H.
(Here the state of the system can be represented by a matrix, making this math isomorphic with all the other expressions already linked.)
Which brings us to the uses of these things once they have been discovered:
The full set of eigenvectors of a system under a given operator forms an orthogonal spanning set for the system. This set may be a basis if there is no degeneracy. This is very useful because it allows extremely compact expressions of arbitrary (non-eigen) states of the system.
