I apologize for the vagueness of this question, but I'm trying to find a way to perform divide-and-conquer multiplication of rectangular matrices A and B, where A is n x m and B is m x p.
I've done a bit of reading and Strassen's method seems promising, but I can't determine how I would use this algorithm on rectangular matrices. I've seen some people refer to "padding" with zeros to make both matrices square and then "unpadding" the result, but I'm not clear on what the unpadding stage would entail.
Thank you for your advice!
The result matrix is going to contain zeros in all entries that correspond to the rows and columns "added" to the operand matrices. To get back to your rectangular result, you just crop the result, i.e. take the upper-left n x p corner of the padded result matrix, based on the dimensions of the original operands.
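As a minimal sketch of the pad-multiply-crop idea (assuming some strassen_square routine for square matrices is available; a sketch of one appears further down, and the helper names here are just placeholders):

    import numpy as np

    def pad_multiply_crop(A, B, strassen_square):
        # A is n x m, B is m x p; pad both to s x s, where s is the
        # smallest power of two >= max(n, m, p), multiply, then crop.
        n, m = A.shape
        m2, p = B.shape
        assert m == m2
        s = 1
        while s < max(n, m, p):
            s *= 2
        A_pad = np.zeros((s, s)); A_pad[:n, :m] = A
        B_pad = np.zeros((s, s)); B_pad[:m, :p] = B
        C_pad = strassen_square(A_pad, B_pad)
        return C_pad[:n, :p]   # the "unpadding" step is just this crop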
However, padding by itself seems to be wise only in cases where n, m and p are very close. When they are disproportionate, you are going to do a lot of multiplications by zero matrices.
For example, if n = 2m = p, padding makes both operands 2m x 2m and Strassen's algorithm divides the multiplication into 7 multiplications of m x m matrices. However, 3 of these multiplications have a zero matrix as an operand and are not necessary.
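To see which products vanish, write the padded operands in 2 x 2 block form. With n = 2m = p the padded A has zero right half (A12 = A22 = 0) and the padded B has zero bottom half (B21 = B22 = 0), so Strassen's seven products reduce to:

    M1 = (A11 + A22)(B11 + B22) = A11 B11
    M2 = (A21 + A22) B11        = A21 B11
    M3 = A11 (B12 - B22)        = A11 B12
    M4 = A22 (B21 - B11)        = 0
    M5 = (A11 + A12) B22        = 0
    M6 = (A21 - A11)(B11 + B12)
    M7 = (A12 - A22)(B21 + B22) = 0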
I think there are two ways to improve the performance:
Use padding and remember which part of the matrix is padded. Then, at each multiplication step, check whether one of the operands is a zero matrix. If it is, the result is also a zero matrix and there is no need to compute it. This removes most of the cost introduced by padding.
Do not use padding. NonSquare_Strassen: divide the rectangular matrices into square regions and remainders. Run vanilla Strassen on the square regions, run NonSquare_Strassen again on the remainders, and afterwards combine the results. This algorithm will most likely be faster than the first, but it is not entirely easy to implement. The logic, however, is quite similar to Strassen's algorithm for square matrices.
For the sake of simplicity I would choose the first option.
Note:
Remember that you can use Strassen's approach for rectangular matrices as well, and that below a certain matrix size the O(n^2) cost of the additional matrix additions becomes significant, so it is better to finish small sub-problems with normal cubic multiplication. With that cutoff in place, Strassen's approach is still quite easy to implement for non-square matrices. The above expects that you already have the algorithm for square matrices implemented.
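For reference, here is a minimal sketch of square Strassen with a naive-multiplication cutoff; it assumes the dimension is a power of two, and the cutoff value 64 is just a placeholder to tune. This is the strassen_square assumed in the padding sketch above.

    import numpy as np

    def strassen_square(A, B, cutoff=64):
        # Square Strassen for power-of-two sizes, falling back to
        # ordinary multiplication below the cutoff size.
        n = A.shape[0]
        if n <= cutoff:
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen_square(A11 + A22, B11 + B22, cutoff)
        M2 = strassen_square(A21 + A22, B11, cutoff)
        M3 = strassen_square(A11, B12 - B22, cutoff)
        M4 = strassen_square(A22, B21 - B11, cutoff)
        M5 = strassen_square(A11 + A12, B22, cutoff)
        M6 = strassen_square(A21 - A11, B11 + B12, cutoff)
        M7 = strassen_square(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C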
Related
In a problem I'm working on, there is a need to solve Ax=b, where A is an n x n square matrix (typically n = a few thousand) and b and x are vectors of size n. The trick is that it is necessary to do this many times (billions of them), where A and b change only very slightly between successive calculations.
Is there a way to reuse an existing approximate solution for x (or perhaps inverse of A) from the previous calculation instead of solving the equations from scratch?
I'd also be interested in a way to get x to within some defined accuracy (e.g. error in any element of x < 0.001), rather than an exact solution (again, reusing the previous calculations).
You could use the Sherman–Morrison formula to incrementally update the inverse of matrix A.
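As a sketch, assuming the change to A can be written as a rank-1 update A' = A + u v^T (the variable names here are just illustrative):

    import numpy as np

    def sherman_morrison_update(A_inv, u, v):
        # Given A_inv = inverse(A), return inverse(A + outer(u, v))
        # using the Sherman-Morrison formula in O(n^2) instead of O(n^3).
        Au = A_inv @ u                  # A^{-1} u
        vA = v @ A_inv                  # v^T A^{-1}
        denom = 1.0 + v @ Au            # 1 + v^T A^{-1} u
        return A_inv - np.outer(Au, vA) / denom

With the updated inverse, each new solve is just a matrix-vector product x = A_inv @ b, which costs O(n^2) rather than O(n^3).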
To speed up the matrix multiplications, you could use a suitable matrix multiplication algorithm or a library tuned for high-performance computing. The classic matrix multiplication has complexity O(n^3); Strassen-type algorithms achieve O(n^2.8) and better.
A similar question without a real answer was asked here.
Given an invertible matrix M over the rationals Q, the inverse matrix M^(-1) is again a matrix over Q. Are there (efficient) libraries to compute the inverse precisely?
I am aware of high-performance linear algebra libraries such as BLAS/LAPACK, but these libraries are based on floating point arithmetic and are thus not suitable for computing precise (analytical) solutions.
Motivation: I want to compute the absorption probabilities of a large absorbing Markov chain using its fundamental matrix. I would like to do so precisely.
Details: By large, I mean a 1000x1000 matrix in the best case, and a several million dimensional matrix in the worst case. The further I can scale things the better. (I realize that the worst case is likely far out of reach.)
You can use the Eigen matrix library, which with little effort works on arbitrary scalar types. There is an example in the documentation of how to use it with GMP's mpq_class: http://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
Of course, as @btilly noted, most of the time you should not calculate the inverse, but compute a matrix decomposition and use it to solve the equation systems. For rational numbers you can use any LU decomposition, or if the matrix is symmetric, the LDLt decomposition. See here for a catalogue of decompositions.
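As an illustration of the same idea without Eigen, here is a minimal exact solver using Python's built-in Fraction type (plain Gauss-Jordan elimination; this is only a sketch and will be far slower than a tuned library):

    from fractions import Fraction

    def solve_exact(M, b):
        # Solve M x = b exactly over the rationals by Gauss-Jordan elimination.
        n = len(M)
        A = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(b[i])]
             for i in range(n)]
        for col in range(n):
            # pick any row at or below 'col' with a nonzero pivot in this column
            pivot = next(r for r in range(col, n) if A[r][col] != 0)
            A[col], A[pivot] = A[pivot], A[col]
            for r in range(n):
                if r != col and A[r][col] != 0:
                    factor = A[r][col] / A[col][col]
                    A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
        return [A[i][n] / A[i][i] for i in range(n)]

For real workloads, the Eigen + mpq_class route (or a fraction-free LU such as Bareiss elimination) will scale much better, since the numerators and denominators of the intermediate fractions grow quickly.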
Algorithm requirements
Input is an arbitrary square matrix M of size N×N, which just fits in memory.
The algorithm's output must be true if M[i,j] = M[j,i] for all j≠i, false otherwise.
Obvious solutions
Check if the transpose equals the matrix itself (M^T = M). Easiest to program in many environments, but (usually) consumes twice the memory and requires N² comparisons in the worst case. Therefore, this is O(N²) and has high peak memory.
Check if the lower triangular part equals the upper triangular part. Of course, the algorithm returns on the first inequality found. This makes the worst case (the worst case being that the matrix is indeed symmetric) require (N² − N)/2 comparisons, since the diagonal does not need to be checked. So although it is better than option 1, this is still O(N²).
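A sketch of option 2 with early exit (assuming a NumPy array and exact element comparison):

    import numpy as np

    def is_symmetric(M):
        # Compare the lower triangle against the upper triangle,
        # returning False on the first mismatch.
        n = M.shape[0]
        for i in range(1, n):
            for j in range(i):
                if M[i, j] != M[j, i]:
                    return False
        return True

In practice the vectorized np.array_equal(M, M.T) is often faster despite doing more comparisons; a cache-friendlier variant compares blocks of the lower triangle against the corresponding transposed blocks of the upper triangle.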
Question
Although it's hard to see how it would be possible (the N² elements will all have to be compared somehow), is there an algorithm doing this check that is better than O(N²)?
Or, provided there is a proof of non-existence of such an algorithm: how to implement this most efficiently for a multi-core CPU (Intel or AMD) taking into account things like cache-friendliness, optimal branch prediction, other compiler-specific specializations, etc.?
This question stems mostly from academic interest, although I imagine a practical use could be to determine what solver to use if the matrix describes a linear system AX=b...
Since you will have to examine all the elements except the diagonal, the complexity IMO can't be better than O(n^2).
For a dense matrix, the answer is a definite "no", because any uninspected (non-diagonal) elements could be different from their transposed counterparts.
For standard representations of a sparse matrix, the same reasoning indicates that you can't generally do better than the input size.
However, the same reasoning doesn't apply to arbitrary matrix representations. For example, you could store sparse representations of the symmetric and antisymmetric components of your matrix, which can easily be checked for symmetry in O(1) time by checking whether the antisymmetric component has any nonzero elements at all...
I think you can take a probabilistic approach here.
If x randomly picked lower-triangular elements all match their upper-triangular counterparts, that is unlikely to be a coincidence: the chance is very high that the matrix is indeed symmetric.
So instead of going through all (n² − n)/2 off-diagonal pairs, you can check p random coordinates and report that the matrix is symmetric with confidence:

p / ((n² − n)/2)
You can then decide on a threshold above which you are willing to treat the matrix as symmetric.
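A minimal sketch of this sampling approach (the number of samples p is a parameter you would choose from your desired confidence):

    import numpy as np

    def probably_symmetric(M, p=1000, rng=None):
        # Spot-check p random off-diagonal pairs; a single mismatch
        # proves asymmetry, while p matches only suggest symmetry.
        rng = np.random.default_rng() if rng is None else rng
        n = M.shape[0]
        for _ in range(p):
            i = int(rng.integers(1, n))    # row index 1..n-1
            j = int(rng.integers(0, i))    # column index below the diagonal
            if M[i, j] != M[j, i]:
                return False
        return True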
I am looking for an efficient algorithm to find the largest eigenpair of a small, general (non-square, non-sparse, non-symmetric) complex matrix A of size m x n. By small I mean m and n are typically between 4 and 64, and usually around 16, but with m not equal to n.
This problem is straightforward to solve with the general LAPACK SVD algorithms, i.e. gesvd or gesdd. However, as I am solving millions of these problems and only require the largest eigenpair, I am looking for a more efficient algorithm. Additionally, in my application the eigenvectors will generally be similar across cases. This led me to investigate Arnoldi iteration based methods, but I have found neither a good library nor an algorithm that applies to my small, general, complex matrix. Is there an appropriate algorithm and/or library?
Rayleigh quotient iteration has cubic convergence. You may also want to implement the power method and see how the two compare, since Rayleigh quotient iteration needs an LU or QR decomposition of your matrix at each step.
http://en.wikipedia.org/wiki/Rayleigh_quotient_iteration
Following @rchilton's comment, you can apply this to A*A (the conjugate transpose of A times A).
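A minimal sketch of the power-method variant on A*A, which yields the largest singular value and right singular vector (warm-starting from the previous case's vector, as the question suggests, is just a matter of passing it in as v0; the tolerance and iteration cap are placeholders):

    import numpy as np

    def largest_singular_pair(A, v0=None, tol=1e-10, max_iter=200):
        # Power iteration on A^H A: returns (sigma_max, v), where v is the
        # dominant right singular vector; the left vector is A v / sigma_max.
        m, n = A.shape
        rng = np.random.default_rng()
        if v0 is None:
            v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        else:
            v = v0.astype(complex)
        v /= np.linalg.norm(v)
        sigma_old = 0.0
        for _ in range(max_iter):
            w = A.conj().T @ (A @ v)     # apply A^H A without forming it
            v = w / np.linalg.norm(w)
            sigma = np.linalg.norm(A @ v)
            if abs(sigma - sigma_old) <= tol * sigma:
                break
            sigma_old = sigma
        return sigma, v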
The idea of looking for the largest eigenpair is analogous to taking a large power of the matrix, as the lower-frequency modes get damped out during the iteration. The Lanczos algorithm is one of a few such algorithms that rely on the so-called Ritz eigenvectors during the decomposition. From Wikipedia:
The Lanczos algorithm is an iterative algorithm ... that is an adaptation of power methods to find eigenvalues and eigenvectors of a square matrix or the singular value decomposition of a rectangular matrix. It is particularly useful for finding decompositions of very large sparse matrices. In latent semantic indexing, for instance, matrices relating millions of documents to hundreds of thousands of terms must be reduced to singular-value form.
The technique works even if the system is not sparse, but if it is large and dense it has the advantage that it doesn't all have to be stored in memory at the same time.
How does it work?
The power method for finding the largest eigenvalue of a matrix A can be summarized by noting that if x_{0} is a random vector and x_{n+1}=A x_{n}, then in the large n limit, x_{n} / ||x_{n}|| approaches the normed eigenvector corresponding to the largest eigenvalue.
Non-square matrices?
Noting that your system is not a square matrix, I'm pretty sure that the SVD problem can be decomposed into separate linear algebra problems where the Lanczos algorithm would apply. A good place to ask such questions would be over at https://math.stackexchange.com/.
I have found some mentions in another question of matrix addition being a quadratic operation. But I think it is linear.
If I double the size of a matrix, I need to calculate double the additions, not quadruple.
The main point of divergence seems to be what the size of the problem is. To me, it's the number of elements in the matrix. Others think it is the number of rows or columns, hence the O(n^2) complexity.
Another problem I have with seeing it as a quadratic operation is that it implies adding 3-dimensional matrices is cubic, adding 4-dimensional matrices is O(n^4), and so on, even though all of these problems can be reduced to the problem of adding two vectors, which has an obviously linear solution.
Am I right or wrong? If wrong, why?
As you already noted, it depends on your definition of the problem size: is it the total number of elements, or the width/height of the matrix? Whichever is correct actually depends on the larger problem that the matrix addition is part of.
NB: on some hardware (GPUs, vector machines, etc.) the addition might run faster than expected (even though the complexity is still the same, see the discussion below), because the hardware can perform multiple additions in one step. For a bounded problem size (like n < 3) it might even be a single step.
It's O(M*N) for a 2-dimensional matrix with M rows and N columns.
Or you can say it's O(L) where L is the total number of elements.
Usually the problem is defined using square matrices "of size N", meaning NxN. By that definition, matrix addition is an O(N^2) since you must visit each of the NxN elements exactly once.
By that same definition, matrix multiplication (using square NxN matrices) is O(N^3) because you need to visit N elements in each of the source matrices to compute each of the NxN elements in the product matrix.
Generally, operations that involve the whole matrix have a lower bound of N^2 work, simply because you must visit each element at least once to compute anything that depends on the whole matrix.
Think of the general-case implementation:

    for i = 1 : n
        for j = 1 : m
            c[i][j] = a[i][j] + b[i][j]

If we take a simple n x n square matrix, that is n * n additions.