As the subject line says: does Eigen support computing the leading eigenvector via power iteration? I cannot find any API documentation for leading-eigenvector calculation.
Thanks a lot
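For reference: as far as I can tell, Eigen does not ship a dedicated power-iteration routine (its Eigenvalues module offers dense full-spectrum solvers such as SelfAdjointEigenSolver), but one is easy to hand-roll on top of its types. Below is a minimal sketch for a symmetric matrix; the size, tolerance and iteration cap are illustrative.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    const int n = 100;
    Eigen::MatrixXd R = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd A = R + R.transpose();        // symmetric test matrix

    Eigen::VectorXd v = Eigen::VectorXd::Random(n).normalized();
    double lambda = 0.0;
    for (int it = 0; it < 1000; ++it) {
        Eigen::VectorXd w = A * v;                // one mat-vec per iteration
        double lambdaNew = v.dot(w);              // Rayleigh quotient estimate
        v = w.normalized();
        if (std::abs(lambdaNew - lambda) < 1e-10 * std::abs(lambdaNew)) {
            lambda = lambdaNew;
            break;
        }
        lambda = lambdaNew;
    }
    std::cout << "leading eigenvalue ~ " << lambda
              << ", leading eigenvector in v\n";
}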
I need to compute the following matrices:
M = XSX^T
and
V = XSy
what I'd like to know is the most efficient implementation using BLAS, given that S is a symmetric positive definite matrix of dimension n, X has m rows and n columns, and y is a vector of length n.
My implementation is the following:
I compute A = XS using dsymm, then obtain M = AX^T with dgemm, while dgemv is used to obtain V = Ay.
I think that at least M could be computed more efficiently, since I know that M is symmetric and positive definite.
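For concreteness, here is how that scheme looks through the C interface (CBLAS) with row-major storage; the function name, and the assumption that only the upper triangle of S is referenced, are illustrative rather than part of the original question.

#include <cblas.h>
#include <vector>

// X is m x n, S is n x n symmetric, y has length n.
// Computes M = X*S*X^T (m x m) and V = X*S*y (length m).
// M must be sized m*m and V sized m by the caller.
void compute_M_and_V(int m, int n,
                     const std::vector<double>& X,
                     const std::vector<double>& S,
                     const std::vector<double>& y,
                     std::vector<double>& M,
                     std::vector<double>& V) {
    std::vector<double> A(static_cast<std::size_t>(m) * n);   // A = X*S

    // A = X * S  (symmetric matrix applied from the right)
    cblas_dsymm(CblasRowMajor, CblasRight, CblasUpper, m, n,
                1.0, S.data(), n, X.data(), n, 0.0, A.data(), n);

    // M = A * X^T
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasTrans, m, m, n,
                1.0, A.data(), n, X.data(), n, 0.0, M.data(), m);

    // V = A * y
    cblas_dgemv(CblasRowMajor, CblasNoTrans, m, n,
                1.0, A.data(), n, y.data(), 1, 0.0, V.data(), 1);
}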
Your code is the best BLAS can do for you. There is no BLAS operation that can exploit the fact that M is symmetric.
You are right, though: technically you would only need to compute the upper triangular part of the gemm product and then copy the strictly upper triangular part into the lower triangle. But there is no routine for that.
May I inquire about the sizes? And may I also suggest some other sources of performance gains: a custom build of your BLAS implementation, and a comparison with MKL, ACML, OpenBLAS, ATLAS. You could obviously code your own version using AVX and FMA intrinsics; you should be able to do better than a generalised library. Also, what is the precision of your floating-point variables?
I seriously doubt you would gain much by coding it yourself anyway. But what I would definitely suggest is converting everything to floats and testing whether single precision gives you the same result with a significant speed-up in compute time. I have very seldom seen cases where it does not, and those were mostly in the ODE-solving domain and in numerical integration of nasty functions.
But you did not address my question regarding the BLAS implementation and machine type.
Again, optimisation beyond this point is not possible without more skills :(. But seriously, don't be too worried about this. There is a reason why BLAS does not offer the optimisation you ask for: it might not be worth the hassle. Go with your solution.
And don't forget to investigate the use of floats rather than doubles. In R, convert everything to float. For the LAPACK calls, use only sgemX.
Without knowing the details of your problem, it can be useful to recognize the zeros in the matrices. Partitioning the matrices to exploit this can provide significant benefits. Is M the sum of many XSX' submatrices?
For V = XSy, where y is a vector and X and S are matrices, calculating S.y first and then X.(Sy) should be better, unless X.S is a necessary calculation for M.
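In CBLAS terms, that ordering is one dsymv followed by one dgemv, which is O(n^2 + mn) work instead of the O(mn^2) needed to form X*S. A small sketch (row-major, names illustrative):

#include <cblas.h>
#include <vector>

// V = X * (S * y), with X m x n, S n x n symmetric (upper triangle referenced),
// y of length n.  V must be sized m by the caller.
void compute_V(int m, int n,
               const std::vector<double>& X,
               const std::vector<double>& S,
               const std::vector<double>& y,
               std::vector<double>& V) {
    std::vector<double> t(n);
    cblas_dsymv(CblasRowMajor, CblasUpper, n,
                1.0, S.data(), n, y.data(), 1, 0.0, t.data(), 1);  // t = S*y
    cblas_dgemv(CblasRowMajor, CblasNoTrans, m, n,
                1.0, X.data(), n, t.data(), 1, 0.0, V.data(), 1);  // V = X*t
}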
I am a student from Germany, currently working on my master's thesis. For the thesis I am writing Fortran code in Code::Blocks, and in my code I use some of the LAPACK functions.
I would like help with adding the LAPACK library to Code::Blocks. I searched a lot on the internet but couldn't find anything. It would help if you could provide source links; this is an extension of the previous question.
In my code, I need to solve the following system of equations, {K}{p} = {m}
Where
{K} = Matrix
{p} = vector
{m} = vector
I have all the elements of vector {m} and matrix {K} computed, and I know some of the values of vector {p}; it's a boundary value problem.
Now I want to find only the unknown elements of vector {p}.
Which function should I use?
I went through the LAPACK manual available online but couldn't find anything.
Well, you did not go through the documentation that thoroughly :) You are looking for solvers of linear systems.
http://www.netlib.org/lapack/lug/node38.html
Please specify a little more: complex or real, double or float? Ill-posed? Over- or underdetermined, or square? If square, symmetric or upper triangular? Banded?
There is a horde of different algorithms one could consider. Runtime, stability and convergence behaviour are very different depending on K. The most stable would be [x]gelsd. It is a divide-and-conquer algorithm via the SVD and, with proper conditioning, gives you the Moore-Penrose generalised inverse. But it is by far the slowest algorithm too.
btw, http://www.netlib.org/lapack/lug/node27.html outlines all the general solvers.
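If, after eliminating the entries of {p} you already know, you are left with a dense square system, the standard driver from that list is dgesv (LU with partial pivoting). A minimal sketch through the C interface LAPACKE is below; the Fortran call is DGESV(N, NRHS, A, LDA, IPIV, B, LDB, INFO) with the same meaning. All sizes and values here are purely illustrative.

#include <lapacke.h>
#include <vector>
#include <iostream>

int main() {
    const lapack_int n = 3, nrhs = 1;
    std::vector<double> K = { 4, 1, 0,      // K, row-major, 3 x 3
                              1, 3, 1,
                              0, 1, 2 };
    std::vector<double> m = { 1, 2, 3 };    // right-hand side; overwritten with p
    std::vector<lapack_int> ipiv(n);

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, nrhs,
                                    K.data(), n, ipiv.data(),
                                    m.data(), nrhs);
    if (info != 0) { std::cerr << "dgesv failed, info = " << info << "\n"; return 1; }
    for (double x : m) std::cout << x << "\n";   // the solution p
}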
If you already have some values of your p, you may want to go a different route than a straightforward inversion. You are a lot better off if you use an iterative method such as conjugate gradients on the regularized least squares problem. This is discussed at length in Stoer, Bulirsch, Introduction to Numerical Analysis, Chapter 8.7 (The Conjugate-Gradient Method of Hestenes and Stiefel).
There are multiple implementations online. One you will find in a library I wrote during my PhD: https://github.com/kvahed/codeare/blob/master/src/optimisation/CGLS.hpp
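That library aside, the idea fits in a few lines. Here is a minimal CGLS sketch with Tikhonov regularization, i.e. conjugate gradients applied to (K^T K + lambda I) p = K^T m, written with Eigen dense types for brevity; it is not the codeare implementation, and the tolerance and iteration cap are illustrative.

#include <Eigen/Dense>

Eigen::VectorXd cgls(const Eigen::MatrixXd& K, const Eigen::VectorXd& m,
                     double lambda = 0.0, int maxIter = 500, double tol = 1e-10) {
    Eigen::VectorXd p = Eigen::VectorXd::Zero(K.cols());
    Eigen::VectorXd r = m;                      // r = m - K*p, with p = 0
    Eigen::VectorXd s = K.transpose() * r;      // residual of the normal equations
    Eigen::VectorXd d = s;                      // search direction
    double gamma = s.squaredNorm();

    for (int it = 0; it < maxIter && gamma > tol * tol; ++it) {
        Eigen::VectorXd q = K * d;
        double alpha = gamma / (q.squaredNorm() + lambda * d.squaredNorm());
        p += alpha * d;
        r -= alpha * q;
        s = K.transpose() * r - lambda * p;     // new normal-equations residual
        double gammaNew = s.squaredNorm();
        d = s + (gammaNew / gamma) * d;
        gamma = gammaNew;
    }
    return p;
}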
I stumbled upon something which I consider very strange.
As an example consider the code
A = reshape(1:6, 3,2)
A/[1 1]
which gives
3×1 Array{Float64,2}:
2.5
3.5
4.5
As I understand, in general such division gives the weighted average of columns, where each weight is inversely proportional to the corresponding element of the vector.
So my question is, why is it defined this way?
What is the mathematical justification of this definition?
It's the minimum-error solution to |A - v*[1 1]|₂, which, being overconstrained, has no exact solution in general (i.e. no value v such that the norm is precisely zero). The behavior of / and \ is heavily overloaded, solving both under- and overconstrained systems by a variety of techniques and heuristics. Whether this kind of overloading is a good idea or not is debatable, but it's what people have come to expect from these operations in Matlab and Octave, and it's often quite convenient to have so much functionality available in a single operator.
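To make the least-squares reading concrete, the same computation can be reproduced outside Julia; here is a sketch with Eigen in C++ that solves v*[1 1] ≈ A by transposing it into an ordinary overdetermined system and using a QR-based solve. This only illustrates the definition; it is not what Julia does internally.

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A(3, 2);        // reshape(1:6, 3, 2) is column-major
    A << 1, 4,
         2, 5,
         3, 6;
    Eigen::MatrixXd onesT(2, 1);    // transpose of the row vector [1 1]
    onesT << 1, 1;
    // v*[1 1] ~= A  becomes  [1;1]*v' ~= A', solved in the least-squares sense.
    Eigen::MatrixXd vT = onesT.colPivHouseholderQr().solve(A.transpose());
    std::cout << vT << "\n";        // prints roughly 2.5 3.5 4.5
}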
Let A be an NxN matrix and b be a Nx1 column vector. Then \ solves Ax=b, and / solves xA=b.
As Stefan mentions, this is extended to non-square systems as the least-squares solution. This is done via the QR or SVD decompositions. See the details of these algorithms to see why this is the case. Hint: the closed-form OLS estimator can actually be written in terms of these matrix decompositions, so it's the same thing.
Now you might ask, how does it actually solve it? That's a complicated question. Essentially, it uses a matrix factorization. But which matrix factorization is used depends on the matrix type. The reason is that Gaussian elimination is O(n^3), so treating the problem fully generally is usually not a good idea; whenever you can specialize, you can get speedups. So essentially \ (and /, which transposes and calls \) checks for a bunch of special types and picks a factorization or other algorithm (LU, QR, SVD, Cholesky, etc.) based on the matrix type. The flow chart from MATLAB explains this very well. There are a lot of details here, and it gets even more detailed when the matrix is sparse. IterativeSolvers.jl should also be mentioned, because it's another set of algorithms for solving Ax=b.
Most applied math problems reduce down to linear algebra, with solving Ax=b being one of the most important and difficult problems, which is why there is tons of research on the subject. In fact, you can probably say that the vast majority of the field of numerical linear algebra is devoted to finding fast methods for solving Ax=b on specific matrix types. \ essentially puts all of the direct (non-iterative) methods into one convenient operator.
I am studying numerical methods from Steven C. Chapra's book. The book says "Gauss-Seidel uses less memory than Gauss elimination because it does not store '0' values in the matrix"; however, the algorithm written in the book handles the same matrix as Gauss elimination. I don't understand how Gauss-Seidel uses less memory. I searched this issue on the internet and people say the same thing, but nobody explains how.
Note: I can share the algorithm from the book, if that won't be a problem copyright-wise.
The Gauss elimination method has to store the zeros while computing. This is because, in the course of eliminating the lower triangular part, the zeros can become non-zero values (fill-in). The Gauss-Seidel method, on the other hand, if written to handle sparse matrices, only ever operates on the non-zero values.
Put simply, the Gauss-Seidel method works on one equation at a time, solving for the i-th variable through its non-zero coefficient, so it can easily skip the terms with zero coefficients.
Gauss elimination works on the complete matrix, making all the coefficients below the i-th one zero, but in the process the coefficients in the upper triangular part are changed. I think there is no easy way of writing the Gauss elimination method for sparse matrices.
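To make the memory argument concrete, here is a minimal sketch of one Gauss-Seidel sweep over a matrix stored in compressed sparse row (CSR) form; the storage layout and names are illustrative and not taken from the book.

#include <cstddef>
#include <vector>

// One Gauss-Seidel sweep for A*x = b, with A in CSR form (rowPtr, colIdx, val).
// Only the stored non-zeros are ever touched; no fill-in is created.
void gaussSeidelSweep(const std::vector<std::size_t>& rowPtr,
                      const std::vector<std::size_t>& colIdx,
                      const std::vector<double>& val,
                      const std::vector<double>& b,
                      std::vector<double>& x) {
    const std::size_t n = b.size();
    for (std::size_t i = 0; i < n; ++i) {
        double sum = b[i];
        double diag = 0.0;
        for (std::size_t k = rowPtr[i]; k < rowPtr[i + 1]; ++k) {
            if (colIdx[k] == i)
                diag = val[k];                      // remember a_ii
            else
                sum -= val[k] * x[colIdx[k]];       // zero entries are simply absent
        }
        x[i] = sum / diag;   // uses already-updated x[j] for j < i (Gauss-Seidel)
    }
}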
I have the following problem. There is a matrix A of size NxN, where N = 200 000. It is very sparse: there are exactly M non-zero elements in each row, where M = {6, 18, 40, 68, 102} (I have 5 different scenarios); the rest are zeros.
Now I would like to get all the eigenvalues and eigenvectors of matrix A.
The problem is that I cannot fit matrix A into memory, as it is around 160 GB of data. What I am looking for is software that stores the sparse matrix compactly (without the zeros my matrix is just a few MB) and can then feed this compact matrix to an algorithm that computes the eigenvalues and eigenvectors.
Can any of you recommend software for that?
EDIT: I found out I can reconfigure my matrix A so it becomes a band matrix. Then I could use LAPACK to get the eigenvalues and eigenvectors (concretely: http://software.intel.com/sites/products/documentation/doclib/iss/2013/mkl/mklman/GUID-D3C929A9-8E33-4540-8854-AA8BE61BB08F.htm). The problem is that I need all the vectors, and since my matrix is NxN, I cannot let LAPACK store the whole solution (all eigenvectors) in memory. The best option would be a function that gives me the first K eigenvectors, lets me rerun the program to get the next K eigenvectors, and so on, so that I can save the results to a file.
You may try the SLEPc library, http://www.grycap.upv.es/slepc/description/summary.htm :
"SLEPc, the Scalable Library for Eigenvalue Problem Computations, is a software library for the solution of large sparse eigenproblems on parallel computers."
Read the second chapter of their users' manual, "EPS: Eigenvalue Problem Solver". They are focused on methods that preserve sparsity... but only a limited number of eigenvalues and eigenvectors are computed.
I hope your matrices have good properties (positive definite, for instance...).
EPSIsPositive(EPS eps,PetscBool *pos);
You may be interested in "spectrum slicing" to compute all the eigenvalues in a given interval... Or you may set a target and compute the closest eigenvalues around this target.
See http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetWhichEigenpairs.html#EPSSetWhichEigenpairs
See examples http://www.grycap.upv.es/slepc/documentation/current/src/eps/examples/tutorials/index.html
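For orientation, a stripped-down SLEPc program looks roughly like the sketch below. It builds a small illustrative banded matrix in place of your real 200 000 x 200 000 operator, assumes a symmetric matrix (EPS_HEP), asks for the largest-magnitude eigenpairs in batches, and omits error checking (CHKERRQ) for brevity; all sizes and the requested number of pairs are assumptions.

#include <slepceps.h>

int main(int argc, char** argv) {
    SlepcInitialize(&argc, &argv, nullptr, nullptr);

    const PetscInt n = 1000;                 // illustrative; the real N is 200000
    Mat A;
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    PetscInt Istart, Iend;
    MatGetOwnershipRange(A, &Istart, &Iend);
    for (PetscInt i = Istart; i < Iend; ++i) {   // replace with your real non-zeros
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    EPS eps;
    EPSCreate(PETSC_COMM_WORLD, &eps);
    EPSSetOperators(eps, A, nullptr);            // standard problem A x = lambda x
    EPSSetProblemType(eps, EPS_HEP);             // assuming a symmetric matrix
    EPSSetWhichEigenpairs(eps, EPS_LARGEST_MAGNITUDE);
    EPSSetDimensions(eps, 50, PETSC_DEFAULT, PETSC_DEFAULT);  // 50 pairs per run
    EPSSetFromOptions(eps);
    EPSSolve(eps);

    PetscInt nconv;
    EPSGetConverged(eps, &nconv);
    for (PetscInt i = 0; i < nconv; ++i) {
        PetscScalar kr, ki;
        EPSGetEigenpair(eps, i, &kr, &ki, nullptr, nullptr);  // pass Vecs to get vectors
        // write each pair to disk here instead of keeping everything in memory
    }

    EPSDestroy(&eps);
    MatDestroy(&A);
    SlepcFinalize();
    return 0;
}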
Why do you need to compute all eigenvectors for such large matrices?
Bye,