Best generalised eigenvalue solver for symmetric real matrices in Julia - performance

I need to solve generalised eigenvalue problems that stem from solid mechanics (mass/stiffness matrices, (K - λM)x = 0). The system dimensions range from 500k to 15M degrees of freedom. The matrices are real and symmetric.
I am currently using Arpack.jl to solve the problem, but memory consumption simply blows up with the number of degrees of freedom, even though time performance is OK. Do you know of good Julia alternatives for systems of this size (e.g. a PCG Lanczos algorithm or a supernode method)? Has anyone compared Arpack.jl with KrylovKit.jl for this type of problem?
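For anyone making that comparison, here is a minimal hedged sketch (not a benchmark) of how each package is typically called on (K - λM)x = 0. The toy matrices, nev, and the shift are illustrative assumptions; the KrylovKit call assumes its geneigsolve solver, which requires the symmetric/positive-definite flags:

    # Hedged sketch: smallest eigenpairs of (K - λM)x = 0 with both packages.
    # K, M sparse, real, symmetric; M positive definite (toy stand-ins below).
    using SparseArrays, LinearAlgebra
    using Arpack, KrylovKit

    n = 10_000   # illustrative; the real problems are 500k-15M DOFs
    K = spdiagm(0 => fill(2.0, n), 1 => fill(-1.0, n - 1), -1 => fill(-1.0, n - 1))
    M = spdiagm(0 => fill(1.0, n))

    # Arpack.jl: shift-invert about sigma = 0.0 to target the smallest modes.
    vals_a, vecs_a = eigs(K, M; nev = 10, sigma = 0.0, which = :LM)

    # KrylovKit.jl: geneigsolve (Golub-Ye, inverse-free); :SR = smallest real.
    vals_k, vecs_k, info = geneigsolve((K, M), rand(n), 10, :SR;
                                       issymmetric = true, isposdef = true)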

Related

Find Independent Vectors (High Performance)

I'm in desperate need of a high-performance algorithm to reduce a matrix to its independent vectors (row echelon form), i.e. to find the basis vectors. I've looked at the Bareiss algorithm and row reduction, but they are all too slow. If anyone could recommend a faster implementation I'd be grateful! I'm happy to use TBB parallelisation.
Thanks!
What are you trying to do with the reduced echelon form? Do you just need the basis vectors for their own sake, or are you trying to solve a system of equations? If you're solving a system of equations, you can do an LU factorization and probably get faster calculation times. Otherwise, Gaussian elimination with partial pivoting is your fastest option.
Also, do you know if your matrix has a special form, like upper or lower triangular? If it does, you can rewrite some of these algorithms to be faster based on the type of matrix that you have.
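To make both suggestions concrete, here is a hedged Julia sketch (the question mentions TBB, so a C++ port would be straightforward): pivoted QR exposes the rank and a set of independent columns without full row reduction, and LU handles the solve case.

    # Hedged sketch: basis columns via pivoted QR; LU for solving systems.
    using LinearAlgebra

    A = [1.0 2.0 3.0;
         2.0 4.0 7.0;
         3.0 6.0 10.0]           # column 2 = 2 * column 1, so rank is 2

    F = qr(A, ColumnNorm())       # QR with column pivoting
    tol = 1e-10 * abs(F.R[1, 1])
    r = count(i -> abs(F.R[i, i]) > tol, 1:minimum(size(A)))
    basis = A[:, F.p[1:r]]        # r independent columns of A

    # If the real goal is solving B x = b, skip echelon form and use LU:
    B = [2.0 1.0; 1.0 3.0]
    x = lu(B) \ [1.0, 2.0]
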

Precise matrix inversion in Q

Given an invertible matrix M over the rationals Q, the inverse matrix M^(-1) is again a matrix over Q. Are there (efficient) libraries to compute the inverse precisely?
I am aware of high-performance linear algebra libraries such as BLAS/LAPACK, but these libraries are based on floating point arithmetic and are thus not suitable for computing precise (analytical) solutions.
Motivation: I want to compute the absorption probabilities of a large absorbing Markov chain using its fundamental matrix. I would like to do so precisely.
Details: By large, I mean a 1000x1000 matrix in the best case, and a several million dimensional matrix in the worst case. The further I can scale things the better. (I realize that the worst case is likely far out of reach.)
You can use the Eigen matrix library, which with little effort works on arbitrary scalar types. There is an example in the documentation of how to use it with GMP's mpq_class: http://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
Of course, as @btilly noted, most of the time you should not calculate the inverse, but rather calculate a matrix decomposition and use that to solve equation systems. For rational numbers you can use any LU decomposition, or, if the matrix is symmetric, the LDLt decomposition. See here for a catalogue of decompositions.
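The Eigen-with-mpq_class route is C++; for comparison, a hedged sketch of the same idea in Julia (used elsewhere on this page), where Rational{BigInt} gives exact arithmetic and the generic LU factorization avoids forming the inverse, per @btilly's advice. The tiny chain here is made up:

    # Hedged sketch: exact absorption probabilities of an absorbing Markov
    # chain using Rational{BigInt}; no floating-point rounding anywhere.
    using LinearAlgebra

    Q = Rational{BigInt}[0 1//2; 1//3 0]     # transient-to-transient block
    R = Rational{BigInt}[1//2 0; 1//3 1//3]  # transient-to-absorbing block

    B = lu(I - Q) \ R    # absorption probabilities B = (I - Q)^(-1) R, exactly
    N = inv(I - Q)       # the fundamental matrix itself, if it is truly needed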

Do linear algebra packages accept matrix elements that are infinite or other number-like input?

Background
I'm doing research about stability analysis of some dynamical system. In the literature, others have used analytical methods to query the signs of the eigenvalues of the linearized system matrix. My approach is to use a numerical eigenvalue solver. In some cases, input to the stability analysis consists of some coefficients that become infinite. In the analytical approach, this is tackled by taking the limit of the resulting stability criteria to infinity. "Taking the limit" is however not possible in a numerical approach, so I have reformulated the problem to avoid infinite coefficients in my numerical implementation.
Question
Now my question should be clear. Would a linear algebra package allow the use of infinite coefficients? My direct application only needs eigenvalue solvers, but I don't want to narrow the question down to that. Any answer regarding infinite coefficients as input to linear algebra algorithms (matrix solve, eigenvalue problem, singular value decomposition, LU, etc.) is welcome.
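Not a full answer, but a quick hedged illustration of why floating-point packages generally cannot accept infinite coefficients: IEEE rules turn Inf * 0 and Inf - Inf into NaN, and the inner products at the heart of every solver hit those cases immediately:

    # Hedged sketch: an Inf entry poisons ordinary numerical kernels via NaN.
    A = [Inf 1.0;
         1.0 1.0]
    x = [0.0, 1.0]
    println(A * x)   # [NaN, 1.0]: first entry is Inf*0.0 + 1.0*1.0 = NaN

    # Eigenvalue, LU and SVD routines are built from such products, so they
    # either throw or return NaNs; reformulating the problem to remove the
    # infinity, as described above, is the standard workaround.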

Efficient algorithm for finding largest eigenpair of small general complex matrix

I am looking for an efficient algorithm to find the largest eigenpair of a small, general (non-square, non-sparse, non-symmetric), complex matrix A of size m x n. By small I mean that m and n are typically between 4 and 64, and usually around 16, but with m not equal to n.
This problem is straightforward to solve with the general LAPACK SVD algorithms, i.e. gesvd or gesdd. However, as I am solving millions of these problems and only require the largest eigenpair, I am looking for a more efficient algorithm. Additionally, in my application the eigenvectors will generally be similar for all cases. This led me to investigate Arnoldi-iteration-based methods, but I have neither found a good library nor an algorithm that applies to my small, general, complex matrix. Is there an appropriate algorithm and/or library?
Rayleigh quotient iteration has cubic convergence, but it needs an LU or QR decomposition of the shifted matrix at each step; you may want to implement the power method as well and see how the two compare.
http://en.wikipedia.org/wiki/Rayleigh_quotient_iteration
Following @rchilton's comment, you can apply this to A*A (the conjugate transpose of A times A): its largest eigenpair gives the square of the largest singular value of A and the corresponding right singular vector.
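A hedged sketch of that suggestion: power iteration applied to A*A (never formed explicitly) converges to the dominant right singular vector, and passing the previous case's vector as the start exploits the similarity across the millions of problems. The function name and defaults are illustrative:

    # Hedged sketch: largest singular triplet of a small complex m x n matrix
    # via power iteration on A'A, with an optional warm start.
    using LinearAlgebra

    function largest_singular_pair(A; v0 = rand(ComplexF64, size(A, 2)),
                                   tol = 1e-10, maxiter = 1000)
        v = normalize(v0)             # warm start: pass the last case's v here
        s2 = 0.0                      # running estimate of sigma_max^2
        for _ in 1:maxiter
            w = A' * (A * v)          # apply A'A without forming it
            s2_new = norm(w)          # ||A'A v|| -> sigma_max^2 for unit v
            v = w / s2_new
            if abs(s2_new - s2) <= tol * s2_new
                s2 = s2_new
                break
            end
            s2 = s2_new
        end
        u = normalize(A * v)          # left singular vector
        return sqrt(s2), u, v
    end

    A = rand(ComplexF64, 16, 8)
    s, u, v = largest_singular_pair(A)   # compare against svd(A).S[1]
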
The idea of looking for the largest eigenpair is analogous to computing a large power of the matrix, as the lower-frequency modes get damped out during the iteration. The Lanczos algorithm is one of a few such algorithms that rely on the so-called Ritz eigenvectors during the decomposition. From Wikipedia:
The Lanczos algorithm is an iterative algorithm ... that is an adaptation of power methods to find eigenvalues and eigenvectors of a square matrix or the singular value decomposition of a rectangular matrix. It is particularly useful for finding decompositions of very large sparse matrices. In latent semantic indexing, for instance, matrices relating millions of documents to hundreds of thousands of terms must be reduced to singular-value form.
The technique works even if the system is not sparse, but if it is large and dense it has the advantage that it doesn't all have to be stored in memory at the same time.
How does it work?
The power method for finding the largest eigenvalue of a matrix A can be summarized by noting that if x_0 is a random vector and x_{n+1} = A x_n, then in the large-n limit, x_n / ||x_n|| approaches the normalized eigenvector corresponding to the largest eigenvalue.
Non-square matrices?
Noting that your system is not a square matrix, I'm pretty sure that the SVD problem can be decomposed into separate linear algebra problems where the Lanczos algorithm would apply. A good place to ask such questions would be over at https://math.stackexchange.com/.

How is linear algebra used in algorithms?

Several of my peers have mentioned that "linear algebra" is very important when studying algorithms. I've studied a variety of algorithms and taken a few linear algebra courses and I don't see the connection. So how is linear algebra used in algorithms?
For example, what interesting things can one do with a connectivity matrix for a graph?
Three concrete examples:
Linear algebra is the foundation of modern 3D graphics. This is essentially the same thing that you learned in school: the data lives in a 3D space that is projected onto a 2D surface, which is what you see on your screen.
Most search engines are based on linear algebra. The idea is to represent each document as a vector in a high-dimensional space and see how the vectors relate to each other in this space. This is used by the Lucene project, amongst others; see VSM, and the sketch after this list.
Some modern compression algorithms, such as the one used by the Ogg Vorbis format, are based on linear algebra, or more specifically a method called vector quantization.
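A hedged toy version of that vector space model (real engines like Lucene add term weighting such as tf-idf): documents become term-count vectors and relatedness is the cosine of the angle between them.

    # Hedged sketch: document similarity as cosine of term-count vectors.
    using LinearAlgebra

    # counts over the vocabulary ["linear", "algebra", "graph", "matrix"]
    doc1 = [2.0, 1.0, 0.0, 3.0]
    doc2 = [1.0, 1.0, 0.0, 2.0]
    doc3 = [0.0, 0.0, 4.0, 1.0]

    cosine(a, b) = dot(a, b) / (norm(a) * norm(b))
    println(cosine(doc1, doc2))   # near 1: documents about the same topic
    println(cosine(doc1, doc3))   # near 0: mostly unrelated documents
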
Basically it comes down to the fact that linear algebra is a very powerful method when dealing with multiple variables, and there are enormous benefits to using it as a theoretical foundation when designing algorithms. In many cases this foundation isn't as apparent as you might think, but that doesn't mean it isn't there. It's quite possible that you've already implemented algorithms which would have been incredibly hard to derive without linear algebra.
A cryptographer would probably tell you that a grasp of number theory is very important when studying algorithms. And he'd be right, for his particular field. Statistics has its uses too: skip lists, hash tables, etc. The usefulness of graph theory is even more obvious.
There's no inherent link between linear algebra and algorithms; there's an inherent link between mathematics and algorithms.
Linear algebra is a field with many applications, and the algorithms that draw on it therefore have many applications as well. You've not wasted your time studying it.
Ha, I can't resist putting this here (even though the other answers are good):
The $25 billion dollar eigenvector.
I'm not going to lie... I never even read the whole thing... maybe I will now :-).
I don't know if I'd phrase it as "linear algebra is very important when studying algorithms". I'd almost put it the other way around: many, many real-world problems end up requiring you to solve a set of linear equations. If you end up having to tackle one of those problems, you are going to need to know about some of the many algorithms for dealing with linear equations. Many of those algorithms were developed when "computer" was a job title, not a machine; consider Gaussian elimination and the various matrix decomposition algorithms, for example. There is a lot of very sophisticated theory on how to solve those problems for very large matrices.
Most common methods in machine learning end up having an optimization step which requires solving a set of simultaneous equations. If you don't know linear algebra you'll be completely lost.
Many signal processing algorithms are based on matrix operations, e.g. Fourier transform, Laplace transform, ...
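To make that concrete, a hedged sketch: the discrete Fourier transform of a length-N signal is literally a matrix-vector product with the DFT matrix; FFT libraries only exploit the structure of that matrix to go faster.

    # Hedged sketch: the DFT as plain linear algebra.
    N = 8
    F = [exp(-2im * pi * j * k / N) for j in 0:N-1, k in 0:N-1]  # DFT matrix
    x = rand(N)
    X = F * x    # the discrete Fourier transform of x
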
Optimization problems can often be reduced to solving linear equation systems.
Linear algebra is also important in many algorithms in computer algebra, as you might have guessed. For example, if you can reduce a problem to saying that a polynomial is zero, where the coefficients of the polynomial are linear in the variables x1, …, xn, then you can solve for the values of x1, …, xn that make the polynomial equal to 0 by equating the coefficient of each power of x to 0 and solving the linear system. This is called the method of undetermined coefficients, and is used for example in computing partial fraction decompositions or in integrating rational functions.
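A hedged numerical illustration of undetermined coefficients, decomposing 1/((x-1)(x-2)) into A/(x-1) + B/(x-2): clear denominators, match the coefficient of each power of x, and solve the resulting linear system.

    # Hedged sketch: partial fractions by undetermined coefficients.
    # 1 = A*(x-2) + B*(x-1) = (A + B)*x + (-2A - B)
    M = [ 1.0  1.0;    # coefficient of x^1:  A + B = 0
         -2.0 -1.0]    # coefficient of x^0: -2A - B = 1
    A, B = M \ [0.0, 1.0]   # gives A = -1.0, B = 1.0
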
For graph theory, the coolest thing about an adjacency matrix is that if you take the nth power M^n of the adjacency matrix of an unweighted graph (each entry is either 0 or 1), then each entry (i, j) is the number of walks of length n from vertex i to vertex j. And if that isn't just cool, then I don't know what is.
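A quick hedged check of that fact on a three-vertex path graph (strictly speaking the entries count walks, which may revisit vertices):

    # Hedged sketch: entries of M^n count length-n walks in an unweighted graph.
    M = [0 1 0;
         1 0 1;
         0 1 0]          # path graph 1 - 2 - 3

    M2 = M^2
    println(M2[1, 3])    # 1: the single length-2 walk 1 -> 2 -> 3
    println(M2[2, 2])    # 2: the walks 2 -> 1 -> 2 and 2 -> 3 -> 2
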
All of the answers here are good examples of linear algebra in algorithms.
As a meta answer, I will add that you might be using linear algebra in your algorithms without knowing it. Compilers that optimize with SSE(2) typically vectorize your code by having many data values manipulated in parallel; this is essentially elementary linear algebra.
It depends on what type of "algorithms".
Some examples:
Machine-learning/statistics algorithms: linear regressions (least-squares, ridge, lasso); see the sketch after this list.
Lossy compression of signals and other processing (face recognition, etc). See Eigenfaces
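For the regression item above, a hedged sketch of how directly it reduces to linear algebra; the data here is made up:

    # Hedged sketch: ordinary least squares and ridge as linear solves.
    using LinearAlgebra

    X = [ones(5) collect(1.0:5.0)]   # design matrix: intercept + one feature
    y = [1.1, 2.0, 2.9, 4.2, 5.1]

    beta = X \ y                     # minimizes ||X*beta - y||_2 (QR inside)

    lambda = 0.1                     # ridge adds a penalty, still linear:
    beta_ridge = (X' * X + lambda * I) \ (X' * y)
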
For example, what interesting things can one do with a connectivity matrix for a graph?
A lot of algebraic properties of the matrix are invariant under permutations of vertices (for example abs(determinant)), so if two graphs are isomorphic, their values will be equal.
This is a source of good heuristics for determining whether two graphs are not isomorphic, since of course equality of these invariants does not guarantee the existence of an isomorphism.
Check algebraic graph theory for a lot of other interesting techniques.
