I am trying to write a little unwrapper for meshes. It uses a finite-element method to solve for minimal linear stress between the flattened and the original surface. At the moment some vertices are pinned to get a result; without this, the triangles end up rotated and translated arbitrarily...
But since this pinning isn't really part of the problem, the better solution would be to solve the singular system directly. PETSc provides some methods for solving a singular system by supplying information about its null space: http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf#section.4.6 I wonder whether there is any equivalent in Eigen. If not, are there other ways to solve this problem without fixing/pinning vertices?
Thanks and best regards
See also this link for further information:
dev history
Eigen provides an algorithm for the SVD decomposition: JacobiSVD.
The SVD decomposition gives you the null space. Following the notation of the Wikipedia article, let $M = U D V^T$ be the SVD decomposition of M, where D is a diagonal matrix of the singular values. Then, from the section Range, null space and rank:
The right-singular vectors [V] corresponding to vanishing singular values of M span the null space of M
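As a concrete illustration (using NumPy here rather than Eigen, but the idea carries over directly; the tolerance is an assumption you would tune to your problem):

import numpy as np

def null_space(M, tol=1e-12):
    """Return an orthonormal basis of the (right) null space of M."""
    # Full SVD: M = U @ np.diag(s) @ Vt
    U, s, Vt = np.linalg.svd(M)
    # Rows of Vt whose singular value is (numerically) zero span the null space.
    rank = int(np.sum(s > tol * s[0])) if s.size else 0
    return Vt[rank:].T

# Example: a rank-deficient matrix (second row is twice the first)
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
N = null_space(M)
print(N)                      # one basis vector, since rank(M) == 2
print(np.linalg.norm(M @ N))  # ~0

With Eigen's JacobiSVD you would request the full V matrix (Eigen::ComputeFullV) and make the same selection of columns of matrixV() by inspecting singularValues().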
When I try to compute the eigenvalues of the adjacency matrix of a very large graph, I get what can charitably be described as garbage. In particular, since the graph is four-regular, the eigenvalues should be in $[-4, 4]$, but they visibly are not. I used MATLAB (via MATLink) and got the same problems, so this is clearly an issue that transcends Mathematica. The question is: what is the best way to deal with it? I am sure MATLAB and Mathematica use the venerable EISPACK code, so there may be something newer/better.
Eigenvalue methods for dense matrices usually proceed by first transforming the matrix into Hessenberg form; since the matrix here is symmetric, that results in a tridiagonal matrix. After that some variant of the shifted QR algorithm, like bulge-chasing, is applied to iteratively reduce the off-diagonal elements, splitting the matrix at positions where these become small enough.
But what I would like to draw attention to is that first step and its structure-destroying consequences. It is, for instance, not guaranteed that the tridiagonal matrix is still numerically symmetric. The same applies to all further steps if they are not explicitly tailored to symmetric matrices.
But what is much more relevant here is that this step ignores all connectivity or non-connectivity of the graph and potentially connects all nodes, albeit with very small weights, when the transformation is reversed.
Each of the m connected components of the graph gives one eigenvalue 4, with an eigenvector that is 1 at the nodes of that component and 0 elsewhere. These eigenspaces each have dimension 1. Any small perturbation of the matrix first removes that separation, joining them into an eigenspace of dimension m, and then perturbs this as a multiple eigenvalue. This can result in an approximately regular m-pointed star in the complex plane, of radius roughly $4\cdot(10^{-15})^{1/m}$, around the original value 4. Even for medium-sized m this gives a substantial deviation from the true eigenvalue.
So in summary, use a sparse method, as these usually first re-order the matrix to be as close to diagonal as possible, which should reveal the block-diagonal structure corresponding to the components. The eigenvalue method then automatically works on each block separately, avoiding the mixing described above. And if possible, use a method for symmetric matrices, or set the corresponding option/flag if one exists.
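To make that concrete, here is a minimal sketch with SciPy rather than MATLAB/Mathematica (the toy graph and the number of requested eigenvalues are placeholders); eigsh is a sparse solver for symmetric matrices, so it respects both the sparsity pattern and the symmetry:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy 4-regular graph with two disconnected components:
# two copies of the complete graph K5 (every vertex has degree 4).
K5 = np.ones((5, 5)) - np.eye(5)
A = sp.block_diag([K5, K5], format="csr")

# Largest eigenvalues of the symmetric adjacency matrix.
# Each connected component contributes one eigenvalue exactly 4.
vals = eigsh(A, k=4, which="LA", return_eigenvectors=False)
print(np.sort(vals))   # ..., 4.0, 4.0 (one per component)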
Hi people, I have two questions related to the De Casteljau algorithm. They are more general questions, but if I'm right, the answers could help with many problems. Here they are:
We have some surface: $\sum_{i=0}^{n} \sum_{j=0}^{m} B_{i,n}(u)\, B_{j,m}(v)\, P_{i,j}$. The analysis I have found says that first we take some value for one parameter, $u = u_0$, then we iterate the other parameter $v$ towards 1 and get a set of points, then increment $u$, and so on... a for loop inside a for loop, in code terms. My question is: can we fix one parameter $u = u_0$ at whatever value, and then just compute points over the parameter $v$? Because all points on such a curve are also on the surface, and if the distance between the curves approaches zero (which is what the iteration really does), we can apply the De Casteljau algorithm to only one family of curves, iterating only one parameter. Or did I get something wrong? :)
My second question is that I haven't really figured out what we actually need the De Casteljau algorithm for, unless we are drawing curves by hand. If we know the order of the curve, we can easily form the Bernstein polynomials for that order and compute the point for a given parameter value. Because when you unwrap De Casteljau, what you get is a Bernstein polynomial?
So, like I said, please help: did I get it wrong?
Yes you can fix one parameter (say U) and change the other (V) to generate an iso-U curve.
You can see things as if you had an NxM array of control points. If you perform a first interpolation on U (actually M interpolations involving N control points), you get M new control points that define a Bezier curve. And by varying U, that curve moves in space.
The De Casteljau's algorithm is used for convenience: it computes the interpolant by using a cascade of linear interpolations between the control points. Direct evaluation of the Bernstein polynomials would require the precomputation of the coefficients, and would not be faster, even when implemented by Horner's scheme, and can be numerically less stable.
The De Casteljau algorithm is also appreciated for its geometrical interpretation, and for its connection with the subdivision process: if you want to build the control points for just a part of a Bezier curve, De Casteljau's algorithm provides them.
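For reference, here is a minimal Python sketch of the cascade of linear interpolations; the surface evaluation simply applies the curve routine twice, first along u and then along v (the function names are mine, not from any particular library):

import numpy as np

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_surface(ctrl, u, v):
    """ctrl is an (N+1) x (M+1) x dim array of control points P[i, j]."""
    # Collapse the u-direction first: one curve evaluation per column of control
    # points, giving M+1 intermediate points that define the iso-u curve in v.
    iso_u_ctrl = np.array([de_casteljau(col, u) for col in np.transpose(ctrl, (1, 0, 2))])
    return de_casteljau(iso_u_ctrl, v)

# Fix u = 0.3 and sweep v to trace an iso-u curve on the surface.
ctrl = np.random.rand(4, 5, 3)           # cubic in u, quartic in v
curve = [bezier_surface(ctrl, 0.3, v) for v in np.linspace(0.0, 1.0, 20)]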
Here I have a matrix, e.g. A = [1 0.9 0.5; 0.9 1 0.9; 0.5 0.9 1]. How do I calculate its closest positive semi-definite matrix? Is there any command or algorithm?
The closest positive semi-definite matrix is obtained using the polar decomposition. The jury is still out on whether computing this decomposition via the SVD or via direct iterative methods is faster.
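For a symmetric input like the A above, the nearest PSD matrix in the Frobenius norm can be obtained by clipping the negative eigenvalues to zero, which for the symmetric part is equivalent to the polar-decomposition construction; a short NumPy sketch:

import numpy as np

def nearest_psd(A):
    """Frobenius-norm-nearest positive semi-definite matrix to A."""
    B = (A + A.T) / 2.0                 # symmetric part (A itself if already symmetric)
    w, Q = np.linalg.eigh(B)            # eigendecomposition of the symmetric part
    w_clipped = np.clip(w, 0.0, None)   # drop the negative eigenvalues
    return Q @ np.diag(w_clipped) @ Q.T

A = np.array([[1.0, 0.9, 0.5],
              [0.9, 1.0, 0.9],
              [0.5, 0.9, 1.0]])
print(np.linalg.eigvalsh(A))            # one eigenvalue is slightly negative
print(np.linalg.eigvalsh(nearest_psd(A)))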
closest in what sense?
Typically, the best way to think about this is in terms of the eigenvalues. If you don't have any constraints on the eigenvalues, I am not sure there is any way to make sense of your question. Sure, there exist other matrices which are positive semi-definite; but in what sense are they still related to your original matrix?
But in case you have all real eigenvalues, things become a little more tangible. You can translate the eigenvalues along the real axis by adding a constant to the diagonal, for instance.
Also, in practice one often deals with matrices which are SPD up to a scaling of rows/columns; finding that scaling shouldn't be too hard, if it exists, but it should typically already be available from the surrounding code (a mass matrix of sorts).
I'm looking for an algorithm to find the best fit between a cloud of points and a sphere.
That is, I want to minimise an error function E(C, r) measuring how far each point lies from the sphere surface, where C is the centre of the sphere, r its radius, and each P a point in my set of n points. The variables are obviously Cx, Cy, Cz, and r. In my case, I can obtain a known r beforehand, leaving only the components of C as variables.
I really don't want to have to use any kind of iterative minimisation (e.g. Newton's method, Levenberg-Marquardt, etc) - I'd prefer a set of linear equations or a solution explicitly using SVD.
There are no matrix equations forthcoming. Your choice of E is badly behaved; its partial derivatives are not even continuous, let alone linear. Even with a different objective, this optimization problem seems fundamentally non-convex; with one point P and a nonzero radius r, the set of optimal solutions is the sphere about P.
You should probably ask again on a site with more optimization expertise.
You might find the following reference interesting, but I would warn you that you will need some familiarity with geometric algebra, particularly conformal geometric algebra, to understand the mathematics. However, the algorithm is straightforward to implement with standard linear algebra techniques and is not iterative.
One caveat: the algorithm, at least as presented, fits both centre and radius; you may be able to work out a way to constrain the fit so that the radius is fixed.
L. Dorst, "Total Least Squares Fitting of k-Spheres in n-D Euclidean Space Using an (n+2)-D Isometric Representation", Journal of Mathematical Imaging and Vision, 2014, pp. 1-21.
You can pull a copy from Leo Dorst's ResearchGate page.
One last thing, I have no connection to the author.
A short description of how to set up the matrix equation can be found here.
I've seen that the WildMagic library uses an iterative method (at least in version 4).
You may be interested in the best-fit d-dimensional sphere, i.e. minimizing the variance of the population of squared distances to the centre; it has a simple analytical solution (matrix calculus): see the appendix of the open-access paper of Cerisier et al. in J. Comput. Biol. 24(11), 1134-1137 (2017), https://doi.org/10.1089/cmb.2017.0061
It works when the data points are weighted (it works even for continuous distributions; as a by-product, when d=1, a well-known inequality is retrieved: the kurtosis is always greater than the squared skewness plus 1).
Difficult to do this without iteration.
I would proceed as follows:
1. Find the overall midpoint by averaging the (X, Y, Z) coordinates of all points.
2. With that result, find the average distance Ravg to the midpoint; decide whether that is good enough, or proceed.
3. Remove from your set the points whose distance is too far from the Ravg found in step 2.
4. Go back to step 1 (average the points again, yielding a better midpoint).
Of course, this will require some criteria for steps (2) and (4) that depend on the quality of your point cloud!
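A rough sketch of that loop in Python (the 20% tolerance and the iteration cap are arbitrary choices, not part of the recipe):

import numpy as np

def fit_center(points, r_tol=0.2, max_iter=10):
    """Estimate a sphere centre by repeated averaging and outlier rejection."""
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        center = pts.mean(axis=0)                       # step 1: midpoint
        dist = np.linalg.norm(pts - center, axis=1)
        r_avg = dist.mean()                             # step 2: average radius
        keep = np.abs(dist - r_avg) < r_tol * r_avg     # step 3: reject far points
        if keep.all():
            break
        pts = pts[keep]                                 # step 4: repeat on the survivors
    return center, r_avg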
Ian Coope has an interesting algorithm in which he linearized the problem using a change of variable. The fit is quite robust, and although it slightly redefines the condition of optimality, I've found it to be generally visually better, especially for noisy data.
A preprint of Coope's paper is available here: https://ir.canterbury.ac.nz/bitstream/handle/10092/11104/coope_report_no69_1992.pdf.
I found the algorithm to be very useful, so I implemented it in scikit-guess as skg.nsphere_fit. Let's say you have an (m, n) array p, consisting of m points of dimension n (here n = 3):
import skg
r, c = skg.nsphere_fit(p)
The radius, r, is a scalar, and c is an n-vector containing the centre.
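If you would rather not pull in the dependency, the linearization behind Coope's method takes only a few lines of NumPy: writing $|P_i - C|^2 = r^2$ as $2 P_i \cdot C + (r^2 - |C|^2) = |P_i|^2$ turns the fit into an ordinary linear least-squares problem. A minimal sketch (function and variable names are mine, not from scikit-guess):

import numpy as np

def sphere_fit(p):
    """Linear least-squares sphere fit (Coope-style linearization).

    p is an (m, n) array of m points in n dimensions.
    Returns (radius, centre).
    """
    p = np.asarray(p, dtype=float)
    # |p_i - c|^2 = r^2  <=>  2 p_i . c + (r^2 - |c|^2) = |p_i|^2,
    # which is linear in the unknowns (c, r^2 - |c|^2).
    A = np.hstack([2.0 * p, np.ones((p.shape[0], 1))])
    b = (p ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = x[:-1]
    r = np.sqrt(x[-1] + c @ c)
    return r, c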
What are eigenvalues, eigenvectors, and eigen-expansions, and how can I use them as an algorithm designer?
EDIT: I want to know how YOU have used it in your program so that I get an idea. Thanks.
they're used for a lot more than matrix algebra. examples include:
the asymptotic state distribution of a hidden markov model is given by the left eigenvector associated with the eigenvalue of unity from the state transition matrix.
one of the best & fastest methods of finding community structure in a network is to construct what's called the modularity matrix (which basically is how "surprising" is a connection between two nodes), and then the signs of the elements of the eigenvector associated with the largest eigenvalue tell you how to partition the network into two communities
in principal component analysis you essentially select the eigenvectors associated with the k largest eigenvalues from the n>=k dimensional covariance matrix of your data and project your data down to the k dimensional subspace. the use of the largest eigenvalues ensures that you're retaining the dimensions that are most significant to the data, since they are the ones that have the greatest variance. (a small sketch follows after this list)
many methods of image recognition (e.g. facial recognition) rely on building an eigenbasis from known data (a large set of faces) and seeing how difficult it is to reconstruct a target image using the eigenbasis -- if it's easy, then the target image is likely to be from the set the eigenbasis describes (i.e. eigenfaces easily reconstruct faces, but not cars).
if you're into scientific computing, the eigenvectors of a quantum Hamiltonian are those states that are stable, in that if a system is in an eigenstate at time t1, then at time t2 > t1, if it hasn't been disturbed, it will still be in that eigenstate. also, the eigenvector associated with the smallest eigenvalue of a Hamiltonian is the ground state of the system.
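To illustrate the PCA item above, a small NumPy sketch on synthetic data (k is whatever number of components you decide to keep):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # 500 samples, 10 features (toy data)
X -= X.mean(axis=0)                      # centre the data

cov = np.cov(X, rowvar=False)            # 10 x 10 covariance matrix
evals, evecs = np.linalg.eigh(cov)       # eigh: the covariance matrix is symmetric

k = 3
top = evecs[:, np.argsort(evals)[::-1][:k]]   # eigenvectors of the k largest eigenvalues
X_reduced = X @ top                           # project onto the k-dimensional subspace
print(X_reduced.shape)                        # (500, 3)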
Eigenvectors and their corresponding eigenvalues are mainly used to switch between different coordinate systems. This can simplify problems and computations enormously by moving the problem from one coordinate system to another.
The new coordinate system has the eigenvectors as its basis vectors, i.e. they "span" this coordinate system. For a symmetric matrix (see the Spectral Theorem below) they can be chosen normalized and mutually perpendicular, so the transformation matrix from the first coordinate system is orthogonal: the eigenvectors have magnitude 1 and are perpendicular to each other.
In the transformed coordinate system, a linear operation A (matrix) is pure diagonal. See Spectral Theorem, and Eigendecomposition for more information.
A quick implication, for example, is that you can take a general quadratic curve:
ax^2 + 2bxy + cy^2 + 2dx + 2fy + g = 0
and rewrite it as
AX^2 + BY^2 + C = 0
where X and Y are measured along the directions of the eigenvectors (after translating the origin to the centre of the conic).
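For instance, the cross term of the conic above is removed by diagonalising the symmetric matrix of its quadratic part; a small NumPy sketch (the coefficients are arbitrary):

import numpy as np

# Quadratic part of  a x^2 + 2 b x y + c y^2 + ... = 0
a, b, c = 3.0, 1.0, 2.0
Q = np.array([[a, b],
              [b, c]])

evals, evecs = np.linalg.eigh(Q)   # orthonormal eigenvectors of a symmetric matrix
A_coef, B_coef = evals             # A X^2 + B Y^2 after rotating into the eigenbasis
R = evecs                          # columns are the new (X, Y) axis directions

# Check: in the rotated coordinates the cross term vanishes.
print(R.T @ Q @ R)                 # diagonal, with A_coef and B_coef on the diagonal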
Cheers !
check out http://mathworld.wolfram.com/Eigenvalue.html
Using eigenvalues in algorithms will require you to be proficient with the math involved.
I'm absolutely the wrong person to be talking about math: I puke on it.
cheers, jrh.
Eigenvalues and eigenvectors are used in matrix computations such as finding the inverse of a matrix. So if you need to write math code, precomputing them can speed up some operations.
In short, you need them if you do matrix algebra, linear algebra, etc.
Using the notation favored by physicists, if we have an operator H, then |x> is an eigenstate of H if and only if
H|x> = h|x>
where we call h the eigenvalue associated with the eigenvector |x> under H.
(Here the operator can be represented by a matrix and the state by a vector, making this math isomorphic to all the other expressions already linked.)
Which brings us to the uses of these things once they have been discovered:
The full set of eigenvectors of a system under a given (Hermitian) operator forms an orthogonal spanning set for the system. That set may fail to be unique if there is degeneracy. This is very useful because it allows extremely compact expressions of arbitrary (non-eigen) states of the system.
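As a small numerical illustration of that expansion (a random symmetric matrix stands in for a Hermitian H; this is just a sketch):

import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
H = (M + M.T) / 2                    # symmetric, so its eigenvectors are orthogonal

evals, evecs = np.linalg.eigh(H)     # columns of evecs are the eigenvectors |x_i>
psi = rng.normal(size=4)             # an arbitrary state of the system

coeffs = evecs.T @ psi               # c_i = <x_i|psi>: the compact representation
psi_back = evecs @ coeffs            # reconstruct the state from its expansion
print(np.allclose(psi, psi_back))    # True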