Eigenvectors with dgeev in Maxima - wolfram-mathematica

I am finding eigenvectors with the function dgeev in Maxima and comparing them with the eigenvectors I get from the same matrix in Mathematica.
The odd-column right eigenvectors in Maxima match Mathematica's, but the even columns do not; conversely, the even-column left eigenvectors match Mathematica's, but the odd columns do not. If I take the odd-column right eigenvectors together with the even-column left eigenvectors, I get exactly what Mathematica prints out.
I do not fully understand what is going on here; does anyone have an explanation?
Thanks,
Ben

Eigenvectors aren't unique: each one is determined only up to a scalar factor, so the actual output depends on the library's conventions for normalization and ordering. A better way to check the result is to verify that the original matrix times the eigenvector matrix equals the eigenvector matrix times the diagonal matrix of eigenvalues (i.e. that A V − V Λ is numerically zero), rather than comparing the vectors entry by entry.
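
That check can be scripted in any system with LAPACK bindings; the following is a minimal NumPy sketch (the matrix is made up for illustration, since the question does not give one), using numpy.linalg.eig, which calls the same LAPACK routine dgeev for real non-symmetric matrices:

    import numpy as np

    # Hypothetical non-symmetric test matrix (the question does not give one).
    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 4.0, 5.0],
                  [0.0, 0.0, 6.0]])

    # Right eigenvectors: one per column of V.
    w, V = np.linalg.eig(A)
    # Left eigenvectors of A are the right eigenvectors of A^T
    # (the ordering of wl may differ from that of w).
    wl, U = np.linalg.eig(A.T)

    # The meaningful checks: A V = V diag(w) and A^T U = U diag(wl),
    # both up to rounding error.
    print(np.allclose(A @ V, V @ np.diag(w)))      # True
    print(np.allclose(A.T @ U, U @ np.diag(wl)))   # True

    # Individual columns may still differ from another system's output by a
    # scalar factor (including sign), so an entry-by-entry comparison between
    # Maxima and Mathematica is not a valid test.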

Related

Can 2D transpose convolution be represented as a Toeplitz matrix multiplication?

Can a 2D transpose convolution operation be represented as a matrix multiplication with the Toeplitz matrix, as can be done for a normal convolution?
I want to generalise some ideas from a dense network to a convolutional network. For normal convolutions, this is not a problem, as they can be represented as matrix multiplications with the Toeplitz matrix. But I couldn't find a clear mathematical formulation of transposed convolution, so I am not sure about this case.
I was looking for a mathematical answer, so I probably should have asked this somewhere else; anyway, I think my LaTeX write-up is correct and answers the question:
[image: LaTeX write-up of the transposed-convolution formula]
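
For the matrix picture without the write-up, here is a small 1-D NumPy sketch (made-up sizes, 'valid' output, deep-learning cross-correlation convention; the 2-D case flattens the image and uses a doubly block-Toeplitz matrix, but the argument is the same): the forward convolution is multiplication by a Toeplitz matrix T, and the corresponding transposed convolution is simply multiplication by T transposed.

    import numpy as np

    kernel = np.array([1.0, 2.0, 3.0])
    n_in = 5                              # input length
    n_out = n_in - len(kernel) + 1        # 'valid' output length

    # Toeplitz matrix T with conv(x) == T @ x.
    T = np.zeros((n_out, n_in))
    for i in range(n_out):
        T[i, i:i + len(kernel)] = kernel

    x = np.arange(1.0, n_in + 1)
    y = T @ x                             # forward convolution, length 3

    # The transposed convolution of y is multiplication by T^T: it maps the
    # length-3 output grid back to the length-5 input grid.
    x_up = T.T @ y
    print(x_up.shape)                     # (5,)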

How to solve the SVD problem with the constraint that the subspaces spanned by the first K left and right vectors are the same?

I'm trying to solve the problem of finding the K leading 'singular vectors' of a matrix. Different from the standard SVD problem, I need the subspace spanned by the left and right 'singular vectors' to be the same. The objective function I consider is
[image: the objective function]
which is equivalent to
[image: the equivalent objective function]
Here $M$ is a matrix with real-valued entries that is not necessarily symmetric, $K$ is smaller than $n$, and $\|\cdot\|_*$ denotes the nuclear norm.

Solving a singular matrix

I am trying to write a little unwrapper for meshes. It uses a finite-element method to solve for minimal linear stress between the flattened and the raw surface. At the moment some vertices are pinned to get a result; without this, the triangles come out randomly rotated and translated...
But as this pinning isn't necessary for the problem, the better solution would be to solve the singular system directly. PETSc provides some methods for solving a singular system by supplying information about the null space: http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf#section.4.6 I wonder if there is an alternative for this in Eigen. If not, are there any other ways to solve this problem without fixing/pinning vertices?
Thanks,
kind regards
See also this link for further information:
dev history
Eigen provides an algorithm for SVD decomposition: Jacobi SVD.
The SVD gives you the null space. Following the notation of the Wikipedia article, let M = U D Vᵀ be the SVD of M, where D is the diagonal matrix of singular values. Then, from the section "Range, null space and rank":
The right-singular vectors [V] corresponding to vanishing singular values of M span the null space of M
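
The same recipe in a short NumPy sketch (not Eigen, but Eigen's JacobiSVD exposes the same V through matrixV(); the matrix below is made up so that it has a one-dimensional null space):

    import numpy as np

    # Made-up singular matrix: the third row is the sum of the first two,
    # so the rank is 2 and the null space has dimension 1.
    M = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [5.0, 7.0, 9.0]])

    U, s, Vt = np.linalg.svd(M)

    # Rows of Vt (columns of V) whose singular values are numerically zero
    # span the null space of M.
    tol = max(M.shape) * np.finfo(float).eps * s.max()
    null_space = Vt[s < tol].T            # shape (3, 1) here

    print(np.allclose(M @ null_space, 0.0))   # True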

Eigenvalues of large symmetric matrices

When I try to compute the eigenvalues of the adjacency matrix of a very large graph, I get what can charitably be described as garbage. In particular, since the graph is four-regular, the eigenvalues should lie in $[-4, 4]$, but they visibly do not. I used MATLAB (via MATLink) and got the same problems, so this is clearly an issue that transcends Mathematica. The question is: what is the best way to deal with it? I am sure MATLAB and Mathematica use the venerable EISPACK code, so there may be something newer/better.
Eigenvalue methods for dense matrices usually proceed by first transforming the matrix into Hessenberg form, which here would result in a tridiagonal matrix. After that, some variant of the shifted QR algorithm, such as bulge chasing, is applied to iteratively reduce the off-diagonal elements, splitting the matrix at positions where these become small enough.
What I would like to draw attention to, though, is that first step and its structure-destroying consequences. It is, for instance, not guaranteed that the tridiagonal matrix is still symmetric. The same applies to all further steps if they are not explicitly tailored to symmetric matrices.
What is much more relevant here is that this step ignores the connectivity (or non-connectivity) of the graph and, when the transformation is reversed, potentially connects all nodes, albeit with very small weights.
Each of the m connected components of the graph contributes one eigenvalue 4, with an eigenvector that is 1 at the nodes of that component and 0 elsewhere. These eigenspaces each have dimension 1. Any small perturbation of the matrix first removes that separation, joining them into an eigenspace of dimension m, and then perturbs this as a multiple eigenvalue. The result can be an approximately regular m-pointed star in the complex plane of radius $4 \cdot (10^{-15})^{1/m}$ around the original value 4. Even for medium-sized m this gives a substantial deviation from the true eigenvalue.
So, in summary: use a sparse method, as these usually first re-order the matrix to be as close to diagonal as possible, which should give a block-diagonal structure according to the components. The eigenvalue method will then automatically work on each block separately, avoiding the mixing described above. And if possible, use a method for symmetric matrices, or set the corresponding option/flag if it exists.
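
To make that advice concrete, here is a SciPy sketch; the graph is a made-up stand-in (the circulant graph C_n(1, 2), which is 4-regular), since the question's graph isn't given. The adjacency matrix is kept sparse, and eigsh, a Lanczos solver for symmetric matrices, is used, so the computed eigenvalues stay real and within $[-4, 4]$.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    # Stand-in 4-regular graph: the circulant graph C_n(1, 2), where node i is
    # joined to i +/- 1 and i +/- 2 (mod n).
    n = 10_000
    offsets = [1, 2, n - 2, n - 1, -1, -2, -(n - 2), -(n - 1)]
    A = sp.diags([np.ones(n - abs(o)) for o in offsets], offsets,
                 shape=(n, n), format="csr")

    # eigsh works on the sparse matrix directly (no dense Hessenberg reduction)
    # and, being a symmetric Lanczos method, returns real eigenvalues.
    largest = eigsh(A, k=5, which="LA", return_eigenvectors=False)
    smallest = eigsh(A, k=5, which="SA", return_eigenvectors=False)

    print(largest)    # top value is 4 (this stand-in graph is connected)
    print(smallest)   # bounded below by -4, as for any 4-regular graph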

How to find the closest positive semi-definite matrix for an non-positive semi-definite matrix?

Here I have a matrix, e.g. A = [1 0.9 0.5; 0.9 1 0.9; 0.5 0.9 1]. How do I calculate its closest positive semi-definite matrix? Is there a command or algorithm for this?
The closest positive semi-definite matrix is obtained via the polar decomposition. The jury is still out on whether computing this decomposition with the SVD or with direct iterative methods is faster.
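
For a symmetric matrix like the A in the question, the polar-decomposition answer coincides with simply clipping the negative eigenvalues to zero (this is the Frobenius-norm nearest symmetric PSD matrix, per Higham); a NumPy sketch:

    import numpy as np

    # The matrix from the question; it is symmetric but not PSD.
    A = np.array([[1.0, 0.9, 0.5],
                  [0.9, 1.0, 0.9],
                  [0.5, 0.9, 1.0]])

    # Eigendecompose and clip the negative eigenvalues to zero.  For a
    # symmetric A this yields the same matrix as (A + H) / 2, where H is the
    # symmetric polar factor of A.
    w, V = np.linalg.eigh(A)
    A_psd = V @ np.diag(np.clip(w, 0.0, None)) @ V.T

    print(np.linalg.eigvalsh(A_psd))   # all eigenvalues now >= 0, up to rounding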
Closest in what sense?
Typically, the best way to think about this is in eigenspace. If you don't have any constraints on the eigenvalues, I am not sure there is any way to make sense of your question. Sure, other positive semi-definite matrices exist; but in what sense are they still related to your original matrix?
In case all your eigenvalues are real, though, things become a little more tangible. You can translate the eigenvalues along the real axis by adding a constant to the diagonal, for instance.
Also, in practice one often deals with matrices which are SPD up to a scaling of rows/columns; finding that scaling shouldn't be too hard, if it exists, but it should then typically be available from the surrounding code (a mass matrix of sorts).
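
To make the diagonal-shift suggestion concrete, here is a small NumPy sketch using the same matrix as above; note that shifting moves every eigenvalue and therefore changes the matrix more than the nearest-PSD projection does.

    import numpy as np

    A = np.array([[1.0, 0.9, 0.5],
                  [0.9, 1.0, 0.9],
                  [0.5, 0.9, 1.0]])

    # Adding c*I shifts every eigenvalue up by c, so choosing
    # c = -lambda_min (when lambda_min < 0) makes the matrix PSD.
    c = max(0.0, -np.linalg.eigvalsh(A).min())
    A_shifted = A + c * np.eye(A.shape[0])

    print(np.linalg.eigvalsh(A_shifted))   # smallest eigenvalue is now ~0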
