Eigenvectors: implementing the Jacobi algorithm

I am implementing the Jacobi algorithm to get the eigenvectors of a symmetric matrix. I don't understand why I get different eigenvectors from my application (this online tool gives the same result as mine: http://fptchlx02.tu-graz.ac.at/cgi-bin/access.com?c1=0000&c2=0000&c3=0000&file=0638) than from Wolfram Alpha: http://www.wolframalpha.com/input/?i=eigenvector%7B%7B1%2C2%2C3%7D%2C%7B2%2C2%2C1%7D%2C%7B3%2C1%2C1%7D%7D
Example matrix:
1 2 3
2 2 1
3 1 1
My Result:
0.7400944496522529, 0.6305371413491765, 0.23384421945632447
-0.20230251371232585, 0.5403584533063043, -0.8167535949636785
-0.6413531776951003, 0.5571668060588798, 0.5274763043839444
Result from WA:
1.13168, 0.969831, 1
-1.15396, 0.315431, 1
0.443327, -1.54842, 1
I expect that the solution is trivial, but I can't find it. I asked this question on MathOverflow and they pointed me to this site.

Eigenvectors of a matrix are not unique, and there are multiple possible decompositions; in fact, only the eigenspaces are defined uniquely. Both results you are receiving are valid. You can easily see this by asking Wolfram Alpha to orthogonalize the second set of vectors (its own result). Run the following query:
Orthogonalize[{{1.13168, 0.969831, 1.}, {-1.15396, 0.315431, 1.}, {0.443327, -1.54842, 1.}}]
to obtain
0.630537 0.540358 0.557168
-0.740094 0.202306 0.641353
0.233844 -0.816754 0.527475
Now you can see that your algorithm returns a correct result. First, the matrix is transposed: WA gave you row vectors, while your algorithm returns the eigenvectors as columns. The eigenvectors may also come out in a different order, and one of them is multiplied by -1; but multiplying an eigenvector by any non-zero constant still gives a valid eigenvector. Up to these harmless differences, the results match perfectly.
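If you want to convince yourself numerically, here is a small sketch (using NumPy, which is not part of the question, purely for illustration): normalize Wolfram Alpha's row vectors and check that each one matches one of your Jacobi columns up to sign, and that both are genuine eigenvectors of A.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 2., 1.],
              [3., 1., 1.]])

# Wolfram Alpha's (unnormalized) eigenvectors, one per row.
wa = np.array([[ 1.13168,   0.969831, 1.],
               [-1.15396,   0.315431, 1.],
               [ 0.443327, -1.54842,  1.]])

# The Jacobi output from the question; each *column* is an eigenvector.
jac = np.array([[ 0.7400944496522529,   0.6305371413491765,  0.23384421945632447],
                [-0.20230251371232585,  0.5403584533063043, -0.8167535949636785],
                [-0.6413531776951003,   0.5571668060588798,  0.5274763043839444]])

# Normalize WA's rows to unit length.
wa_unit = wa / np.linalg.norm(wa, axis=1, keepdims=True)

# Every normalized WA vector equals some Jacobi column up to sign
# (WA only prints about 6 digits, hence the loose tolerance).
for v in wa_unit:
    print(any(np.allclose(v, s * jac[:, j], atol=1e-5)
              for j in range(3) for s in (1, -1)))       # True, True, True

# Both sets really are eigenvectors: A @ v is a scalar multiple of v.
for j in range(3):
    v = jac[:, j]
    print(np.allclose(A @ v, (v @ A @ v) * v, atol=1e-5))  # True (v has unit norm)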
You may also find the following Mathematics StackExchange answer helpful: Are the eigenvectors of a real symmetric matrix always an orthonormal basis without change?

Related

Matrix Chain Multiplication Dynamic Programming

Assume that multiplying a matrix G1 of dimension p×q with another matrix G2 of dimension q×r requires pqr scalar multiplications. Computing the product of n matrices G1G2G3…Gn can be done by parenthesizing it in different ways. Define GiGi+1 as an explicitly computed pair for a given parenthesization if they are directly multiplied. For example, in the matrix multiplication chain G1G2G3G4G5G6 using the parenthesization (G1(G2G3))(G4(G5G6)), G2G3 and G5G6 are the only explicitly computed pairs.
Consider a matrix multiplication chain F1F2F3F4F5, where matrices F1,F2,F3,F4 and F5 are of dimensions 2×25,25×3,3×16,16×1 and 1×1000, respectively. In the parenthesization of F1F2F3F4F5 that minimizes the total number of scalar multiplications, the explicitly computed pairs is/are
F1F2 and F3F4 only
F2F3 only
F3F4 only
F2F3 and F4F5 only
=======================================================================
My approach - I want to solve this in under one minute, but the only way I know is the bottom-up dynamic programming approach of filling in a table. The other thing I can conclude is that we should multiply by F5 last, because it has 1000 in its dimensions. So please, how do I develop fast intuition for this kind of question?
======================================================================
The correct answer is F3F4 only.
The most important thing to note is the dimension 1×1000: you want to avoid ever multiplying anything large by it. In other words, what we are really looking for is to multiply the smallest possible numbers with 1000.
Examining F4F5 first: computing it directly costs 16×1×1000 scalar multiplications. Computing F3F4 first instead gives a result matrix of dimension 3×1, so by going with F3F4 we keep the small numbers 3 and 1 in play. There is no way I am going with F4F5.
By similar logic I would not go with F2F3: that loses the small 3 and leaves the bigger 25 and 16 to be used later with 1000.
As for F1F2, you can quickly check that (F1F2)(F3F4) is not better than (F1(F2(F3F4))). So the answer is F3F4 only.
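Not part of the original answer, but if you ever want to double-check this kind of question, the standard O(n^3) matrix-chain DP settles it in a few lines. A sketch in Python with the dimensions from the question:
dims = [2, 25, 3, 16, 1, 1000]   # F_i has dimensions dims[i-1] x dims[i]
n = len(dims) - 1                # number of matrices in the chain

INF = float("inf")
cost = [[0] * (n + 1) for _ in range(n + 1)]
split = [[0] * (n + 1) for _ in range(n + 1)]

for length in range(2, n + 1):              # length of the sub-chain F_i..F_j
    for i in range(1, n - length + 2):
        j = i + length - 1
        cost[i][j] = INF
        for k in range(i, j):               # split point: (F_i..F_k)(F_k+1..F_j)
            c = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
            if c < cost[i][j]:
                cost[i][j] = c
                split[i][j] = k

def paren(i, j):
    if i == j:
        return "F%d" % i
    k = split[i][j]
    return "(" + paren(i, k) + paren(k + 1, j) + ")"

print(cost[1][n])   # 2173 scalar multiplications
print(paren(1, n))  # ((F1(F2(F3F4)))F5): F3F4 is the only explicitly computed pair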

fast way to invert or dot kxnxn matrix

Is there a fast way to calculate the inverse of a kxnxn matrix using numpy (the inverse being calculated at each k-slice)? In other words, is there a way to vectorize the following code:
>>> import numpy as np
>>> from numpy.linalg import inv
>>> a = np.random.random(4*2*2).reshape(4, 2, 2)
>>> b = a.copy()
>>> for k in range(len(a)):
...     b[k, :, :] = inv(a[k, :, :])
First about getting the inverse. I have looked into both np.linalg.tensorinv and np.linalg.tensorsolve.
I think unfortunately tensorinv will not give you what you want. It needs the array to be "square". This excludes what you want to do, because its definition of "square" is that np.prod(a.shape[:i]) == np.prod(a.shape[i:]), where i is 1 or 2 (one of the axes of the array in general); this can be given as the ind argument of tensorinv. This means that if you have a general array of NxN matrices of length M, you need to have e.g. (for i = 1) M == NxN, which is not true in general (it actually is true in your example, but it does not give the correct answer anyway).
Now, maybe something is possible with tensorsolve. This would however involve some heavy construction work on the a matrix-array before it is passed as the first argument to tensorsolve. Because we would want b to be the solution of the "matrix-array equation" a*b = 1 (where 1 is an array of identity matrices) and 1 would have the same shape as a and b, we cannot simply supply the a you defined above as the first argument to tensorsolve. Rather, it needs to be an array with shape (M,N,N,M,N,N) or (M,N,N,N,M,N) or (M,N,N,N,N,M). This is necessary, because tensorsolve would multiply with b over these last three axes and also sum over them so that the result (the second argument to the function) is again of shape (M,N,N).
Then secondly, about dot products (your title suggests that's also part of your question). This is very doable. Two options.
First: this blog post by James Hensman gives some good suggestions.
Second: I personally like using np.einsum better for clarity. E.g.:
import numpy as np
a = np.random.random((7, 2, 2))
b = np.random.random((7, 2, 2))
np.einsum('ijk,ikl->ijl', a, b)
This will matrix-multiply all 7 "matrices" in arrays a and b. It seems to be about 2 times slower than the array-method from the blog post above, but it's still about 70 times faster than using a for loop as in your example. In fact, with larger arrays (e.g. 10000 5x5 matrices) the einsum method seems to be slightly faster (not sure why).
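As a quick sanity check (a minimal sketch, nothing assumed beyond plain NumPy), you can verify that the einsum call computes the same thing as the explicit loop:
import numpy as np

a = np.random.random((7, 2, 2))
b = np.random.random((7, 2, 2))

fast = np.einsum('ijk,ikl->ijl', a, b)
slow = np.array([np.dot(a[k], b[k]) for k in range(len(a))])

print(np.allclose(fast, slow))   # True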
Hope that helps.

Is there a fast way to invert a matrix in Matlab?

I have lots of large (around 5000 x 5000) matrices that I need to invert in Matlab. I actually need the inverse, so I can't use mldivide instead, which is a lot faster for solving Ax=b for just one b.
My matrices come from a problem that means they have some nice properties. First off, their determinant is 1, so they're definitely invertible. They aren't diagonalizable, though, or I would try to diagonalize them, invert them, and then transform back. Their entries are all real numbers (actually rational).
I'm using Matlab for generating these matrices and for the stuff I need to do with their inverses, so I would prefer a way to speed Matlab up. But if there is another language I can use that'll be faster, then please let me know. I don't know a lot of other languages (a little bit of C and a little bit of Java), so if it's really complicated in some other language, I might not be able to use it. Please go ahead and suggest it anyway, though, just in case.
I actually need the inverse, so I can't use mldivide instead,...
That's not true, because you can still use mldivide to get the inverse. Note that A^{-1} = A^{-1} * I, i.e. the inverse is the solution X of A*X = I. In MATLAB, this is equivalent to
invA = A\speye(size(A));
On my machine, this takes about 10.5 seconds for a 5000x5000 matrix. Note that MATLAB does have an inv function to compute the inverse of a matrix. Although this will take about the same amount of time, it is less efficient in terms of numerical accuracy (more info in the link).
First off, their determinant is 1 so they're definitely invertible
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) = λ1 * λ2 * … * λn, the product of the eigenvalues. So simply setting λ1 = M, λn = 1/M and all the other λi = 1 gives you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λn → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
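To make that concrete, here is a tiny illustration (sketched in NumPy rather than MATLAB, just to keep it self-contained) of a matrix whose determinant is 1 but whose condition number is enormous:
import numpy as np

# Eigenvalues M, 1, 1/M: det(A) = M * 1 * (1/M) = 1, but cond(A) = M / (1/M) = M^2.
M = 1e8
A = np.diag([M, 1.0, 1.0 / M])

print(np.linalg.det(A))    # ~1.0
print(np.linalg.cond(A))   # ~1e16, at the limit of double precision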
My matrices are coming from a problem that means they have some nice properties.
Of course, there are other more efficient algorithms that can be employed if your matrix is sparse or has other favorable properties. But without any additional info on your specific problem, there is nothing more that can be said.
I would prefer a way to speed Matlab up
MATLAB uses Gaussian elimination to compute the inverse of a general matrix (full rank, non-sparse, without any special properties) via mldivide, and this is Θ(n^3), where n is the size of the matrix. So in your case n = 5000 and there are about 1.25 × 10^11 floating point operations. On a reasonable machine with about 10 Gflops of computational power, you're going to need at least 12.5 seconds just to compute the inverse, and there is no way around this unless you can exploit the "special properties" (if they're exploitable).
Inverting an arbitrary 5000 x 5000 matrix is not computationally easy no matter what language you are using. I would recommend looking into approximations. If your matrices are low rank, you might want to try a low-rank approximation M = USV'
Here are some more ideas from math-overflow:
https://mathoverflow.net/search?q=matrix+inversion+approximation
First suppose the eigenvalues are all 1. Let A be the Jordan canonical form of your matrix. Then you can compute A^{-1} using only matrix multiplication and addition via
A^{-1} = I + (I-A) + (I-A)^2 + ... + (I-A)^k
where k < dim(A). Why does this work? Because generating functions are awesome. Recall the expansion
(1-x)^{-1} = 1/(1-x) = 1 + x + x^2 + ...
This means that we can invert (1-x) using an infinite sum. You want to invert a matrix A, so you want to take
A = I - X
Solving for X gives X = I-A. Therefore by substitution, we have
A^{-1} = (I - (I-A))^{-1} = I + (I-A) + (I-A)^2 + ...
Here I've just used the identity matrix I in place of the number 1. Now we have the problem of convergence to deal with, but this isn't actually a problem. By the assumption that A is in Jordan form and has all eigenvalues equal to 1, we know that A is upper triangular with all 1s on the diagonal. Therefore I-A is upper triangular with all 0s on the diagonal. Therefore all eigenvalues of I-A are 0, so its characteristic polynomial is x^dim(A) and its minimal polynomial is x^{k+1} for some k < dim(A). Since a matrix satisfies its minimal (and characteristic) polynomial, this means that (I-A)^{k+1} = 0. Therefore the above series is finite, with the largest nonzero term being (I-A)^k. So it converges.
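Here is a small numerical sketch of that all-eigenvalues-1 case (in NumPy rather than MATLAB, purely for illustration); the point is just that the series terminates and reproduces the inverse:
import numpy as np

n = 5
# Upper triangular with all 1s on the diagonal, i.e. all eigenvalues equal to 1.
A = np.eye(n) + np.triu(np.random.random((n, n)), k=1)

N = np.eye(n) - A            # strictly upper triangular, hence nilpotent: N^n = 0
invA = np.eye(n)
term = np.eye(n)
for _ in range(n - 1):       # the series terminates after at most n-1 powers
    term = term @ N
    invA = invA + term       # invA = I + N + N^2 + ...

print(np.allclose(invA @ A, np.eye(n)))   # True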
Now, for the general case, put your matrix into Jordan form, so that you have a block diagonal matrix, e.g.:
A 0 0
0 B 0
0 0 C
where each block has a single value along its diagonal. If that value is a for block A, then use the above trick to invert (1/a) * A (which has all eigenvalues equal to 1) and then divide the result by a, since ((1/a) * A)^{-1} = a * A^{-1}. Since the full matrix is block diagonal, the inverse will be
A^{-1} 0 0
0 B^{-1} 0
0 0 C^{-1}
There is nothing special about having three blocks, so this works no matter how many you have.
Note that this trick works whenever you have a matrix in Jordan form. The computation of the inverse in this case will be very fast in Matlab because it only involves matrix multiplication, and you can even use tricks to speed that up since you only need powers of a single matrix. This may not help you, though, if it's really costly to get the matrix into Jordan form.

OpenCV SVD Matrix format

I currently have a set of 2D Cartesian coordinates, e.g. {(1,3), (2,2), (3,4)}.
These will be put into a 2D array. To perform the SVD properly, should the matrix be assembled so that the coordinates form the columns or the rows? E.g.
1 3
2 2
3 4
or
1 2 3
3 2 4
I have been doing a little trial and error, comparing against examples of SVD I have found online; the resulting matrix usually seems to be negated, with some of the values shuffled around.
To clarify further: if I had an M×N matrix E, as shown here http://upload.wikimedia.org/wikipedia/commons/b/bb/Matrix.svg, would I define it as a 2D array Array[M][N] or Array[N][M]?
I am assuming this actually matters, since matrix arithmetic is not commutative. Can anyone verify this?
This link describes how to create a matrix from a set of vectors
In order to create a matrix by compounding vector-like structures we need to do two things to the 'inner vector': we need to take the transpose so that it is a row rather than a column, and we need a multiplication operation which will make it a field.
However this does not clarify the standards used for OpenCV and SVD.
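Since that quote does not really settle the row-versus-column question, here is a hedged illustration in NumPy (not OpenCV, so the API details differ, but the underlying mathematics is the same): transposing the input simply swaps the roles of U and V, the singular values are unchanged, and each singular vector is only determined up to a sign flip, which is the likely source of the negated and shuffled values you have been seeing.
import numpy as np

# One point per row (the first layout in the question): 3 points x 2 coordinates.
E = np.array([[1., 3.],
              [2., 2.],
              [3., 4.]])

U, s, Vt = np.linalg.svd(E)        # E   = U  * diag(s)  * Vt
U2, s2, Vt2 = np.linalg.svd(E.T)   # E^T = U2 * diag(s2) * Vt2

print(np.allclose(s, s2))          # True: singular values don't depend on the layout

# If E = U S V^T then E^T = V S U^T, so the two factors swap roles. Each singular
# vector is only defined up to sign, hence the comparison of absolute values.
print(np.allclose(np.abs(U[:, :2]), np.abs(Vt2[:2, :].T)))   # True
print(np.allclose(np.abs(Vt),       np.abs(U2.T)))           # True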

Confirm I understand matrix determinants

Basically I have been trying to forge an understanding of matrix maths over the last few weeks, and after reading (and re-reading) many maths-heavy articles and documentation I think I have an adequate understanding, but I just wanted to make sure!
The definitions I have ended up with are:
/*
Minor
-----
-A determinant of a submatrix
-The submatrix used to calculate a minor can be obtained by removing one or more rows/columns from the original matrix
-First minors are minors where only the row and column of a single element have been removed
Cofactor
--------
-The (signed) minor of a single element of a matrix
ie. the cofactor of element (2,3) is the first minor obtained by removing row 2 and column 3, multiplied by (-1)^(2+3)
Determinant
-----------
-1. Choose any single row or column of the matrix.
2. For each element in that row/column, multiply the value of the element by the first minor of that element.
3. Multiply this result by (-1) raised to the power of (the element's row index + its column index), which gives the result of step 2 its sign.
4. Sum all of these results to get the determinant (a real number) of the matrix.
*/
Please let me know of any holes in my understanding?
Sources
http://en.wikipedia.org: /Cofactor_(linear_algebra), /Minor_(linear_algebra) and /Determinant
http://easyweb.easynet.co.uk/~mrmeanie/matrix/matrices.htm
http://www.geometrictools.com/Documentation/LaplaceExpansionTheorem.pdf (the most helpful)
Geometric tools for computer graphics (this may have missing pages, I have the full copy)
Sounds like you understand determinants -- now go forth and write code! Try writing a solver for simultaneous linear equations in 3 or more variables, using Cramer's Rule.
Since you tagged this question 3dgraphics, matrix and vector multiplication might be a good area to explore next. They come up everywhere in 3d graphics programming.
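If you do want to go forth and write code, here is a minimal recursive cofactor-expansion sketch in Python (plain lists, no libraries; illustrative only, since Laplace expansion is O(n!) and not what you would use for large matrices) that follows the four steps you listed, expanding along the first row:
def minor(m, i, j):
    # Submatrix with row i and column j removed (what a first minor is the determinant of).
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det(m):
    # Determinant by cofactor (Laplace) expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, a in enumerate(m[0]):
        sign = (-1) ** j          # (-1)^(row index + column index); the row index is 0 here
        total += sign * a * det(minor(m, 0, j))
    return total

print(det([[1, 2, 3], [2, 2, 1], [3, 1, 1]]))   # -9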
