I am very new to programming and Fortran in particular. I am using the LAPACK (Linear Algebra Package) library for Fortran to find the eigenvalues and eigenvectors of a large symmetric real matrix. Specifically, I calculate a scalar from each eigenvector, and I want to graph it against its associated eigenvalue.
I am using the subroutine DSYEV of LAPACK to do this. However, DSYEV outputs the eigenvalues in ascending order, and I'm not sure how it orders the eigenvectors. Is there a way to associate each eigenvector with its eigenvalue?
Edit: The official page for DSYEV is here: http://www.netlib.org/lapack/double/dsyev.f
Here is another page about it: http://www.nag.co.uk/numeric/fl/nagd...F08/f08faf.xml
They are in the same order: when DSYEV is called with JOBZ = 'V', the i-th column of the returned matrix holds the eigenvector belonging to the i-th eigenvalue in W. You can check this by matrix multiplication (compare A*v with w*v for each column), which is much easier and faster than computing the eigenvectors again.
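A minimal sketch, assuming a standard LAPACK installation, that calls DSYEV on a small symmetric matrix and verifies the pairing by comparing A*v(:,i) with w(i)*v(:,i):

program dsyev_order_check
  implicit none
  integer, parameter :: n = 3
  double precision :: a(n,n), a0(n,n), w(n), work(3*n-1)
  integer :: info, i

  ! Small symmetric test matrix; a0 keeps a copy for the check below,
  ! because DSYEV overwrites A with the eigenvectors (one per column).
  a = reshape([2d0, 1d0, 0d0, &
               1d0, 2d0, 1d0, &
               0d0, 1d0, 2d0], [n, n])
  a0 = a

  call dsyev('V', 'U', n, a, n, w, work, size(work), info)
  if (info /= 0) stop 'dsyev failed'

  ! Column i of A now holds the eigenvector of eigenvalue w(i):
  ! the residual |A0*v - w*v| should be near machine precision.
  do i = 1, n
     print '(a,i0,a,es10.2)', 'eigenpair ', i, ': residual = ', &
          maxval(abs(matmul(a0, a(:,i)) - w(i)*a(:,i)))
  end do
end program dsyev_order_check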
Related
I have a piece of code in Fortran 90 in which I have to solve both a non-linear system of equations (for which I have to invert the Jacobian matrix) and a linear system. Both are small: n unknowns in each case, with n <= 4. Unfortunately, n is not known a priori. What do you think is the best option?
I thought of writing explicit formulas for cases with n=1,2 and using other methods for n=3,4 (e.g. some functions of the Intel MKL libraries), for the sake of performance. Is this sensible or should I write explicit formulas for the inverse matrix also for n=3,4?
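A rough sketch of the strategy you describe, assuming LAPACK/MKL is linked (the wrapper name solve_small is just illustrative): explicit formulas for n = 1, 2 and the general solver DGESV for n = 3, 4. For the non-linear part, it is usually cheaper and more accurate to solve J*dx = -F with the same routine than to invert the Jacobian explicitly.

subroutine solve_small(n, a, b, x)
  implicit none
  integer, intent(in) :: n                  ! 1 <= n <= 4
  double precision, intent(in)  :: a(n,n), b(n)
  double precision, intent(out) :: x(n)
  double precision :: lu(4,4), det
  integer :: ipiv(4), info

  select case (n)
  case (1)
     x(1) = b(1) / a(1,1)
  case (2)                                  ! Cramer's rule
     det  = a(1,1)*a(2,2) - a(1,2)*a(2,1)
     x(1) = ( a(2,2)*b(1) - a(1,2)*b(2)) / det
     x(2) = (-a(2,1)*b(1) + a(1,1)*b(2)) / det
  case default                              ! n = 3 or 4: LU solve via LAPACK/MKL
     lu(1:n,1:n) = a
     x = b
     call dgesv(n, 1, lu, 4, ipiv, x, n, info)
     if (info /= 0) stop 'dgesv failed'
  end select
end subroutine solve_small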
I have the following problem. There is a matrix A of size NxN, where N = 200 000. It is very sparse, there are exactly M elements in each row, where M={6, 18, 40, 68, 102} (I have 5 different scenarios), the rest are zeros.
Now I would like to get all the eigenvalues and eigenvectors of matrix A.
The problem is that I cannot fit matrix A into memory, as it is around 160 GB of data. What I am looking for is software that stores the sparse matrix compactly (without the zeros, my matrix is just a few MB) and then feeds this compact representation to an algorithm that calculates the eigenvalues and eigenvectors.
Can any of you recommend software for that?
EDIT: I found out I can reorder my matrix A so it becomes a band matrix. Then I could use LAPACK to get the eigenvalues and eigenvectors (concretely: http://software.intel.com/sites/products/documentation/doclib/iss/2013/mkl/mklman/GUID-D3C929A9-8E33-4540-8854-AA8BE61BB08F.htm). The problem is that I need all the eigenvectors, and since my matrix is NxN, LAPACK cannot hold the whole solution (all eigenvectors) in memory. The best approach would be a routine that gives me the first K eigenvectors, after which I rerun the program to get the next K eigenvectors, and so on, so I can save the results to a file.
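A hedged sketch of that "K eigenpairs at a time" idea with LAPACK's symmetric band driver DSBEVX (RANGE = 'I' selects eigenpairs il..iu by index; the wrapper below is just illustrative). One caveat from the LAPACK documentation: with JOBZ = 'V' the routine still needs a dense N-by-N workspace Q, which at N = 200 000 is the same memory problem again; that is what the SLEPc suggestion below avoids.

subroutine band_eig_chunk(n, kd, ab, ldab, il, iu, w, z, m)
  implicit none
  integer, intent(in) :: n, kd, ldab, il, iu
  double precision, intent(inout) :: ab(ldab, n)     ! band storage; overwritten, refill before the next chunk
  double precision, intent(out) :: w(n), z(n, iu-il+1)
  integer, intent(out) :: m                          ! number of eigenpairs actually found
  double precision, allocatable :: q(:,:), work(:)
  integer, allocatable :: iwork(:), ifail(:)
  integer :: info
  double precision :: abstol

  allocate(q(n, n), work(7*n), iwork(5*n), ifail(n)) ! note: q is a full N-by-N array
  abstol = 0d0                                       ! use the default tolerance

  ! Eigenvalues go to w(1:m), eigenvectors to the columns of z, in matching order.
  call dsbevx('V', 'I', 'U', n, kd, ab, ldab, q, n, 0d0, 0d0, &
              il, iu, abstol, m, w, z, n, work, iwork, ifail, info)
  if (info /= 0) stop 'dsbevx failed'
end subroutine band_eig_chunk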
You may try the SLEPc library http://www.grycap.upv.es/slepc/description/summary.htm :
"SLEPc, the Scalable Library for Eigenvalue Problem Computations, is a software library for the solution of large sparse eigenproblems on parallel computers."
Read the second chapter of their users' manual, "EPS: Eigenvalue Problem Solver". It focuses on methods that preserve sparsity... but only a limited number of eigenvalues and eigenvectors are computed.
I hope your matrices have good properties (positive definite, for instance...).
EPSIsPositive(EPS eps,PetscBool *pos);
You may be interested in "spectrum slicing" to compute all eigenvalues in a given interval... Or you may set a target and compute the eigenvalues closest to that target.
See http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetWhichEigenpairs.html#EPSSetWhichEigenpairs
See examples http://www.grycap.upv.es/slepc/documentation/current/src/eps/examples/tutorials/index.html
Why do you need to compute all the eigenvectors for such large matrices?
Bye,
I need to compute the determinant of a complex matrix which is symmetric. The matrix size ranges from 500x500 to 2000x2000. Is there any subroutine for me to call? By the way, I use ifort to compile.
The easiest way would be to do an LU decomposition as described here. I would suggest using LAPACK for this task...
This article has some code in C doing that for a real-valued symmetric matrix, so you would need to replace dspsv with zspsv to handle double-precision complex symmetric matrices.
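As an alternative sketch (not the code from the linked article): the determinant can also be read off a general LU factorization with ZGETRF, ignoring the symmetry, which only costs roughly a factor of two in work. The function name zdet is just illustrative.

function zdet(n, a) result(det)
  implicit none
  integer, parameter :: dp = kind(1d0)
  integer, intent(in) :: n
  complex(dp), intent(inout) :: a(n,n)   ! overwritten with the LU factors
  complex(dp) :: det
  integer :: ipiv(n), info, i

  call zgetrf(n, n, a, n, ipiv, info)    ! LU factorization with partial pivoting
  if (info /= 0) then                    ! exactly singular (info > 0) or bad argument
     det = (0d0, 0d0)
     return
  end if

  det = (1d0, 0d0)
  do i = 1, n
     det = det * a(i,i)                  ! product of the diagonal of U
     if (ipiv(i) /= i) det = -det        ! each row swap flips the sign
  end do
end function zdet

For a 2000x2000 matrix that product can easily overflow or underflow, so in practice it may be safer to accumulate log|det| and the phase separately.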
Is there a library with the inverse function included?
I am currently working on a direction-finding algorithm as part of a project, using the Bartlett correlation. In the Bartlett correlation I need to divide the numerator, which is already three matrix multiplications (including Hermitian transposes), by the denominator, which is the product of a matrix and its Hermitian transpose.
I realize that in order to divide I need the inverse. I am hoping there is a library that already includes an inverse routine; so far I have only found links to source code for computing the inverse. This is only a small portion of the project, so I am trying to save time.
I do have the rest of my code written, but I have not started on the division/inverse yet, so I will not include that code.
I suggest using an implementation of BLAS; they are generally very versatile language- and platform-wise, and BLAS is also what LAPACK (which provides the actual inversion routines) is built on.
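If you end up using LAPACK (directly or through a vendor build such as MKL or OpenBLAS), the explicit inverse is a two-step call, sketched below with an illustrative wrapper name. For the Bartlett denominator it is often cheaper and more accurate to solve the linear system with ZGESV instead of forming the inverse and multiplying.

subroutine zinvert(n, a, info)
  implicit none
  integer, parameter :: dp = kind(1d0)
  integer, intent(in)  :: n
  complex(dp), intent(inout) :: a(n,n)   ! on entry: A; on exit: inverse of A
  integer, intent(out) :: info
  integer :: ipiv(n)
  complex(dp) :: work(64*n)              ! generous workspace for ZGETRI

  call zgetrf(n, n, a, n, ipiv, info)    ! LU factorization with partial pivoting
  if (info /= 0) return                  ! singular (info > 0) or bad argument (info < 0)
  call zgetri(n, a, n, ipiv, work, size(work), info)
end subroutine zinvert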
I'm in search of a free package that does most matrix/vector operations. I can write some basic functions myself, but for advanced ones like computing eigenvalues and eigenvectors I would prefer robust code, so I would like to know if such packages are freely available. If I understand correctly, Ada 2005 has more matrix-operation facilities, but it can calculate eigenvalues for symmetric and Hermitian matrices only. I need a more general package which can handle any kind of matrix.
An Ada95 matrix package (54 KB tar.gz file) from the Drexel Fusion Laboratory used to be at http://dflwww.ece.drexel.edu/research/ada/ but the page for this link no longer exists.
Thanks a lot...
I think that the Ada95 package you mean is here -- but it's only 35k, and it seems to have less functionality than the Ada2005 standard library does.
Not sure how this Ada95 binding to BLAS came to be in my browser cache! I see that for general matrix solving you need LAPACK too; I wonder whether the bindings already in GNAT will help? See package System.Generic_Real_LAPACK in file s-gerela.ad[bs]. The comments say:
-- LAPACK Computational Routines
-- gerfs Refines the solution of a system of linear equations with
-- a general matrix and estimates its error
-- getrf Computes LU factorization of a general m-by-n matrix
-- getri Computes inverse of an LU-factored general matrix
-- getrs Solves a system of linear equations with an LU-factored
-- square matrix, with multiple right-hand sides
-- orgtr Generates the Float orthogonal matrix Q determined by sytrd
-- steqr Computes all eigenvalues and eigenvectors of a symmetric or
-- Hermitian matrix reduced to tridiagonal form (QR algorithm)
-- sterf Computes all eigenvalues of a Float symmetric
-- tridiagonal matrix using QR algorithm
-- sytrd Reduces a Float symmetric matrix to tridiagonal form
which I suspect is a small subset of the full library. Still, could act as a useful springboard for more extensive bindings.
As suggested in John Barnes' Rationale for Ada 2005, Ada's Annex G: Numerics is not intended "as a substitute for professional libraries such as the renowned BLAS," but nothing precludes an implementation from using BLAS internally. As a concrete example, the GNAT compiler implements both G.3.1 Real Vectors and Matrices and G.3.2 Complex Vectors and Matrices using BLAS and LAPACK. To see the details, you can examine the relevant package bodies:
$ export ADA_INC=/your/path/to/adinclude
$ view $ADA_INC/$(gnatkr Ada.Numerics.Generic_Real_Arrays.adb)
$ view $ADA_INC/$(gnatkr Ada.Numerics.Generic_Complex_Arrays.adb)
The site at which this package was previously available has been migrated and the old content is now available at:
http://dfl.ece.drexel.edu/content/ada95-matrix-package