Precise matrix inversion in Q - algorithm

Given an invertible matrix M over the rationals Q, the inverse matrix M^(-1) is again a matrix over Q. Are there (efficient) libraries to compute the inverse precisely?
I am aware of high-performance linear algebra libraries such as BLAS/LAPACK, but these libraries are based on floating point arithmetic and are thus not suitable for computing precise (analytical) solutions.
Motivation: I want to compute the absorption probabilities of a large absorbing Markov chain using its fundamental matrix. I would like to do so precisely.
Details: By large, I mean a 1000x1000 matrix in the best case and a matrix of dimension several million in the worst case. The further I can scale things, the better. (I realize that the worst case is likely far out of reach.)

You can use the Eigen matrix library, which works with arbitrary scalar types with little effort. There is an example in the documentation of how to use it with GMP's mpq_class: http://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
Of course, as @btilly noted, most of the time you should not compute the inverse explicitly; instead, compute a matrix decomposition and use it to solve the equation systems. For rational numbers you can use any LU decomposition, or, if the matrix is symmetric, the LDLt decomposition. See here for a catalogue of decompositions.
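As an illustration (not the linked page's exact example), a minimal sketch of an exact rational solve with Eigen and GMP's gmpxx bindings could look like the following. Depending on your Eigen version, additional specializations (e.g. for pivot scoring) may be required, as described on the documentation page above.

```cpp
// Compile roughly as: g++ -I/path/to/eigen exact_solve.cpp -lgmpxx -lgmp
#include <iostream>
#include <gmpxx.h>
#include <Eigen/Dense>

// Tell Eigen how to treat mpq_class as a scalar type (exact, so epsilon is 0).
// This follows the pattern shown on the Eigen "custom scalar types" page;
// newer Eigen versions may need further specializations for pivoting.
namespace Eigen {
template<> struct NumTraits<mpq_class> : GenericNumTraits<mpq_class> {
    typedef mpq_class Real;
    typedef mpq_class NonInteger;
    typedef mpq_class Nested;
    static inline Real epsilon()         { return 0; }
    static inline Real dummy_precision() { return 0; }
    static inline int  digits10()        { return 0; }
    enum { IsInteger = 0, IsSigned = 1, IsComplex = 0,
           RequireInitialization = 1, ReadCost = 6, AddCost = 150, MulCost = 100 };
};
}

int main() {
    typedef Eigen::Matrix<mpq_class, Eigen::Dynamic, Eigen::Dynamic> MatQ;
    typedef Eigen::Matrix<mpq_class, Eigen::Dynamic, 1>              VecQ;

    // A small invertible rational matrix and a right-hand side.
    MatQ A(2, 2);
    A << mpq_class(1, 3), mpq_class(2, 5),
         mpq_class(1, 7), mpq_class(3, 4);
    VecQ b(2);
    b << mpq_class(1), mpq_class(2);

    // Prefer a decomposition-based solve over forming A^{-1} explicitly.
    VecQ x = A.fullPivLu().solve(b);
    std::cout << "x = (" << x(0) << ", " << x(1) << ")\n";  // exact rationals
}
```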

Related

Fast solution of linear equations starting from approximate solution

In a problem I'm working on, there is a need to solve Ax=b where A is an n x n square matrix (typically n = a few thousand), and b and x are vectors of size n. The trick is that it is necessary to do this many (billions of) times, where A and b change only very slightly between successive calculations.
Is there a way to reuse an existing approximate solution for x (or perhaps the inverse of A) from the previous calculation instead of solving the equations from scratch?
I'd also be interested in a way to get x to within some defined accuracy (e.g. error in any element of x < 0.001), rather than an exact solution (again, reusing the previous calculations).
You could use the Sherman–Morrison formula to incrementally update the inverse of matrix A.
To speed up the matrix multiplications, you could use a suitable matrix multiplication algorithm or a library tuned for high-performance computing. Classic matrix multiplication has complexity O(n³); Strassen-type algorithms achieve O(n^2.8) or better.
A similar question without a real answer was asked here.
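To make the Sherman–Morrison suggestion concrete, here is a rough Eigen-based sketch (my own illustration, assuming the change to A can be written as a rank-1 update u v^T); the update costs O(n²) instead of O(n³) for a fresh solve:

```cpp
#include <Eigen/Dense>
#include <iostream>

using Eigen::MatrixXd;
using Eigen::VectorXd;

// Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u).
MatrixXd shermanMorrisonUpdate(const MatrixXd& Ainv, const VectorXd& u, const VectorXd& v) {
    VectorXd Au  = Ainv * u;                 // A^{-1} u
    VectorXd Atv = Ainv.transpose() * v;     // (v^T A^{-1})^T
    double denom = 1.0 + v.dot(Au);          // must be nonzero for the update to exist
    return Ainv - (Au * Atv.transpose()) / denom;
}

int main() {
    const int n = 4;
    MatrixXd A = MatrixXd::Random(n, n) + 5.0 * MatrixXd::Identity(n, n);
    MatrixXd Ainv = A.inverse();
    VectorXd u = VectorXd::Random(n), v = VectorXd::Random(n);

    MatrixXd updated = shermanMorrisonUpdate(Ainv, u, v);
    MatrixXd exact   = (A + u * v.transpose()).inverse();
    std::cout << "max error: " << (updated - exact).cwiseAbs().maxCoeff() << '\n';
}
```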

'Squaring a polynomial' versus 'Multiplying the polynomial with itself using FFT'

The fastest known algorithm for polynomial multiplication uses the Fast Fourier Transform (FFT).
In a special case of multiplying a polynomial with itself, I am interested in knowing if any squaring algorithm performs better than FFT. I couldn't find any resource which deals with this aspect.
The Number Theoretic Transform (NTT) is faster than the FFT
Why? Because you use only integer modular arithmetic over some ring instead of floating-point complex numbers, while the properties stay the same (the NTT is a form of DFT). So if your polynomials have integer coefficients, use the NTT, which is faster; if they have floating-point coefficients, you need to use the FFT.
FFT-based squaring is faster than FFT-based multiplying by itself
Why? Because squaring needs just one NTT/FFT and one iNTT/iFFT, while multiplying needs two NTT/FFTs and one iNTT/iFFT, so you save one transformation; the rest is the same.
For small enough polynomials, squaring without FFT is fastest.
For more info see:
Fast bignum square computation
It's not the same problem, but it is very similar, as bignum data words are analogous to your polynomial coefficients, so most of it applies to your problem too.
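As a self-contained illustration of the transform-counting argument above (my own sketch, not taken from the linked answer), squaring a polynomial costs one forward FFT, a pointwise square, and one inverse FFT, whereas a general product would need a second forward transform:

```cpp
#include <complex>
#include <vector>
#include <cmath>
#include <utility>
#include <iostream>

using cd = std::complex<double>;

// In-place iterative radix-2 FFT (size must be a power of two).
// invert = true computes the inverse transform, including the 1/n scaling.
void fft(std::vector<cd>& a, bool invert) {
    const std::size_t n = a.size();
    const double PI = std::acos(-1.0);
    // bit-reversal permutation
    for (std::size_t i = 1, j = 0; i < n; ++i) {
        std::size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) std::swap(a[i], a[j]);
    }
    for (std::size_t len = 2; len <= n; len <<= 1) {
        double ang = 2 * PI / len * (invert ? -1 : 1);
        cd wlen(std::cos(ang), std::sin(ang));
        for (std::size_t i = 0; i < n; i += len) {
            cd w(1);
            for (std::size_t j = 0; j < len / 2; ++j) {
                cd u = a[i + j], v = a[i + j + len / 2] * w;
                a[i + j] = u + v;
                a[i + j + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
    if (invert)
        for (cd& x : a) x /= double(n);
}

// Square a polynomial given as a coefficient vector: 1x forward FFT,
// pointwise square, 1x inverse FFT. A general product of two different
// polynomials would need a second forward transform here.
std::vector<double> squarePoly(const std::vector<double>& p) {
    std::size_t n = 1;
    while (n < 2 * p.size()) n <<= 1;          // room for the doubled degree
    std::vector<cd> f(p.begin(), p.end());
    f.resize(n);
    fft(f, false);                              // 1x forward transform
    for (cd& x : f) x *= x;                     // pointwise squaring
    fft(f, true);                               // 1x inverse transform
    std::vector<double> result(2 * p.size() - 1);
    for (std::size_t i = 0; i < result.size(); ++i) result[i] = f[i].real();
    return result;
}

int main() {
    // (1 + 2x + 3x^2)^2 = 1 + 4x + 10x^2 + 12x^3 + 9x^4
    for (double c : squarePoly({1, 2, 3})) std::cout << c << ' ';
    std::cout << '\n';
}
```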

Find Independent Vectors (High Performance)

I'm in desperate need of a high-performance algorithm to reduce a matrix to its independent vectors (row echelon form), i.e. to find the basis vectors. I've looked at the Bareiss algorithm and row reduction, but they are all too slow; if anyone could recommend a faster implementation I'd be grateful! Happy to use TBB parallelisation.
Thanks!
What are you trying to do with the reduced echelon form? Do you just need the basis vectors, or are you trying to solve a system of equations? If you're solving a system of equations, you can do an LU factorization and probably get faster calculation times. Otherwise, Gaussian elimination with partial pivoting is your fastest option.
Also, do you know whether your matrix has a special form, for example upper or lower triangular? If it does, you can rewrite some of these algorithms to be faster based on the type of matrix that you have.
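As a rough sketch of the decomposition route (illustrative only and single-threaded, using Eigen's rank-revealing fully pivoted LU rather than a tuned TBB implementation):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    using Eigen::MatrixXd;

    // A rank-deficient example: the third row is the sum of the first two.
    MatrixXd A(3, 4);
    A << 1, 2, 3, 4,
         2, 0, 1, 1,
         3, 2, 4, 5;

    // A fully pivoted LU is rank-revealing; image() returns a basis of the
    // column space. For a basis of the row space, factor A.transpose() instead.
    Eigen::FullPivLU<MatrixXd> lu(A);
    std::cout << "rank = " << lu.rank() << "\n";
    std::cout << "column-space basis:\n" << lu.image(A) << "\n";

    Eigen::FullPivLU<MatrixXd> luT(A.transpose());
    std::cout << "row-space basis (as columns):\n" << luT.image(A.transpose()) << "\n";
}
```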

Inverting a sparse Matrix

I have a sparse, square, symmetric matrix with the following structure:
(Let's say the size of the matrix is N x N)
Here, the area under the blue stripes holds the non-zero elements. Could someone tell me if there is an algorithm to invert this kind of matrix that is simple yet more efficient than Gaussian elimination and LU decomposition? Thank you in advance.
Cholesky factorization is faster than a general LU decomposition (roughly half the work), and a banded Cholesky that exploits the stripe structure costs only about O(n b²) for bandwidth b. There are also specialized multi-band solvers, if you know the number of non-zero off-diagonals.
You can also apply iterative methods, possibly with preconditioning; it depends on your purpose.
There are a lot of sparse solvers, and this can easily be handled with libeigen. Which solver you choose is really going to depend on the properties of the sparse matrix beyond its structure. Hope this helps.
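A minimal sketch of the libeigen route, assuming the matrix is symmetric positive definite and using a tridiagonal matrix as a stand-in for the striped structure in the question:

```cpp
#include <Eigen/Sparse>
#include <vector>
#include <iostream>

int main() {
    const int N = 1000;
    // Build a symmetric banded matrix from triplets (tridiagonal here,
    // as a placeholder for the blue-stripe pattern).
    std::vector<Eigen::Triplet<double>> trip;
    for (int i = 0; i < N; ++i) {
        trip.emplace_back(i, i, 4.0);
        if (i + 1 < N) {
            trip.emplace_back(i, i + 1, -1.0);
            trip.emplace_back(i + 1, i, -1.0);
        }
    }
    Eigen::SparseMatrix<double> A(N, N);
    A.setFromTriplets(trip.begin(), trip.end());

    // Factor once, then solve A x = b instead of forming the inverse,
    // which would generally be dense.
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    if (solver.info() != Eigen::Success) { std::cerr << "factorization failed\n"; return 1; }

    Eigen::VectorXd b = Eigen::VectorXd::Ones(N);
    Eigen::VectorXd x = solver.solve(b);
    std::cout << "residual norm: " << (A * x - b).norm() << '\n';
}
```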

Efficient algorithm for finding largest eigenpair of small general complex matrix

I am looking for an efficient algorithm to find the largest eigenpair of a small, general (non-square, non-sparse, non-symmetric), complex matrix A of size m x n. By small I mean that m and n are typically between 4 and 64, and usually around 16, but with m not equal to n.
This problem is straightforward to solve with the general LAPACK SVD algorithms, i.e. gesvd or gesdd. However, as I am solving millions of these problems and only require the largest eigenpair, I am looking for a more efficient algorithm. Additionally, in my application the eigenvectors will generally be similar for all cases. This led me to investigate Arnoldi-iteration-based methods, but I have found neither a good library nor an algorithm that applies to my small, general, complex matrices. Is there an appropriate algorithm and/or library?
Rayleigh quotient iteration has cubic convergence. You may also want to implement the power method and see how the two compare, since Rayleigh quotient iteration needs an LU or QR decomposition of your matrix at each step.
http://en.wikipedia.org/wiki/Rayleigh_quotient_iteration
Following @rchilton's comment, you can apply this to A*A (with A* the conjugate transpose of A), which is square and Hermitian.
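A rough sketch of what Rayleigh quotient iteration on A*A could look like with Eigen (my own illustration; note that it converges to whichever eigenpair is nearest the starting guess, so a warm start from the previous, similar problem matters):

```cpp
#include <Eigen/Dense>
#include <complex>
#include <iostream>

using Mat  = Eigen::MatrixXcd;
using Vec  = Eigen::VectorXcd;
using Cplx = std::complex<double>;

// Rayleigh quotient iteration on the Hermitian matrix B = A^H A. Each step
// solves a shifted system, which is where the LU decomposition comes in;
// convergence near an eigenpair is cubic.
std::pair<double, Vec> rayleighIteration(const Mat& B, Vec x, int maxIter = 20) {
    x.normalize();
    Cplx mu = x.dot(B * x);                       // Rayleigh quotient x^H B x
    const Mat I = Mat::Identity(B.rows(), B.cols());
    for (int it = 0; it < maxIter; ++it) {
        // Near convergence (B - mu I) is almost singular; that is expected and
        // is exactly what amplifies the wanted eigenvector component.
        Mat shifted = B - mu * I;
        Vec y = shifted.partialPivLu().solve(x);
        x = y.normalized();
        Cplx newMu = x.dot(B * x);
        if (std::abs(newMu - mu) < 1e-12 * std::abs(newMu)) { mu = newMu; break; }
        mu = newMu;
    }
    return {mu.real(), x};                        // eigenvalue of B, i.e. sigma^2
}

int main() {
    Mat A = Mat::Random(16, 8);                   // small, rectangular, complex
    Mat B = A.adjoint() * A;
    auto [lambda, v] = rayleighIteration(B, Vec::Random(B.cols()));
    std::cout << "eigenvalue of A*A: " << lambda
              << "  residual: " << (B * v - Cplx(lambda) * v).norm() << "\n";
}
```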
The idea of looking for the largest eigenpair is analogous to computing a large power of the matrix, as the lower-frequency modes get damped out during the iteration. The Lanczos algorithm is one of a few such algorithms that rely on the so-called Ritz eigenvectors during the decomposition. From Wikipedia:
The Lanczos algorithm is an iterative algorithm ... that is an adaptation of power methods to find eigenvalues and eigenvectors of a square matrix or the singular value decomposition of a rectangular matrix. It is particularly useful for finding decompositions of very large sparse matrices. In latent semantic indexing, for instance, matrices relating millions of documents to hundreds of thousands of terms must be reduced to singular-value form.
The technique works even if the system is not sparse, but if it is large and dense it has the advantage that it doesn't all have to be stored in memory at the same time.
How does it work?
The power method for finding the largest eigenvalue of a matrix A can be summarized by noting that if x_{0} is a random vector and x_{n+1} = A x_{n}, then in the large-n limit, x_{n} / ||x_{n}|| approaches the normalized eigenvector corresponding to the largest (in magnitude) eigenvalue.
Non-square matrices?
Noting that your system is not a square matrix, I'm pretty sure that the SVD problem can be decomposed into separate linear algebra problems where the Lanczos algorithm would apply. A good place to ask such questions would be over at https://math.stackexchange.com/.
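For comparison, here is a minimal sketch (my own illustration, not from this answer) of the plain power method applied to A*A with Eigen, reusing the previous right singular vector as a warm start as the question suggests:

```cpp
#include <Eigen/Dense>
#include <complex>
#include <cmath>
#include <iostream>

using Mat = Eigen::MatrixXcd;
using Vec = Eigen::VectorXcd;

// Power iteration on A^H A: returns an estimate of the largest singular value
// of the rectangular complex matrix A and fills v with the corresponding right
// singular vector. Passing in the vector from the previous, similar problem as
// a warm start is where the saving over a full gesvd/gesdd call comes from.
double largestSingularValue(const Mat& A, Vec& v, int maxIter = 200, double tol = 1e-10) {
    if (v.size() != A.cols()) v = Vec::Random(A.cols());
    v.normalize();
    double sigma = 0.0;
    for (int it = 0; it < maxIter; ++it) {
        Vec w = A.adjoint() * (A * v);              // one application of A^H A
        double sigmaNew = std::sqrt(w.norm());      // ||A^H A v|| ~ sigma^2 near convergence
        v = w.normalized();
        if (std::abs(sigmaNew - sigma) < tol * sigmaNew) return sigmaNew;
        sigma = sigmaNew;
    }
    return sigma;
}

int main() {
    Mat A = Mat::Random(16, 8);                     // small, rectangular, complex
    Vec v;                                          // empty -> random start
    double sigma = largestSingularValue(A, v);
    Vec u = (A * v).normalized();                   // matching left singular vector
    std::cout << "sigma = " << sigma
              << ", residual ||A v - sigma u|| = "
              << (A * v - std::complex<double>(sigma) * u).norm() << "\n";
}
```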
