The Gaussian elimination algorithm (a transform-and-conquer technique) has O(n^3) complexity. Is there any technique that gives this algorithm a more efficient complexity?
There are algorithms for matrix inversion with better asymptotic complexity, e.g., the Strassen algorithm with complexity O(n^2.807) and the Coppersmith–Winograd algorithm with complexity O(n^2.376).
(Note that the asymptotic complexities of matrix multiplication and matrix inversion are the same.)
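As a sketch of that equivalence (my own illustration, not part of the answer above, and assuming the leading block A is invertible): blockwise inversion via the Schur complement reduces one n x n inversion to two half-size inversions plus a constant number of half-size multiplications.

    \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
    =
    \begin{pmatrix}
      A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\
      -S^{-1} C A^{-1}                  & S^{-1}
    \end{pmatrix},
    \qquad S = D - C A^{-1} B .

So I(n) = 2 I(n/2) + O(M(n)), which is O(M(n)) as long as M grows at least quadratically; a similar block trick expresses multiplication in terms of inversion, which is why the two exponents coincide.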
It depends on which complexity you measure:
Number of multiplications: No, by changing the technique you can only worsen the complexity of Gaussian elimination.
Number of time steps: Yes, a parallel implementation of the row operations reduces the time complexity to O(n).
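For reference, here is a minimal sketch (not from either answer) of sequential forward elimination with partial pivoting; the three nested loops are where the O(n^3) operation count comes from, and the code assumes a nonsingular square matrix:

    # Minimal sketch of Gaussian elimination (forward elimination with
    # partial pivoting); the three nested loops give the O(n^3) cost.
    def gaussian_eliminate(A):
        n = len(A)
        A = [row[:] for row in A]          # work on a copy
        for k in range(n):                 # n pivot steps
            # partial pivoting: pick the largest entry in column k
            pivot = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[pivot] = A[pivot], A[k]
            for i in range(k + 1, n):      # eliminate below the pivot
                factor = A[i][k] / A[k][k]
                for j in range(k, n):      # update the rest of row i
                    A[i][j] -= factor * A[k][j]
        return A                           # upper-triangular result

    >>> gaussian_eliminate([[2.0, 1.0], [4.0, 3.0]])
    [[4.0, 3.0], [0.0, -0.5]]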
I need to find the number of linearly independent columns in a square n*n matrix. What is the time complexity of this operation?
Regular Gaussian elimination is O(n^3).
There are other potential approaches (e.g., iterative methods or algorithms for sparse matrices), but they usually don't have a straightforward complexity.
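As an illustration of the O(n^3) approach (my own sketch, assuming a dense matrix stored as a list of lists and a floating-point tolerance eps), counting pivots during row reduction gives the number of linearly independent columns:

    # Sketch: count linearly independent columns (the rank) by row reduction.
    # The cubic cost comes from the same triple loop as Gaussian elimination.
    def rank(A, eps=1e-9):
        m, n = len(A), len(A[0])
        A = [row[:] for row in A]
        r = 0                                   # next pivot row
        for c in range(n):                      # scan columns left to right
            pivot = max(range(r, m), key=lambda i: abs(A[i][c]), default=None)
            if pivot is None or abs(A[pivot][c]) < eps:
                continue                        # no pivot in this column
            A[r], A[pivot] = A[pivot], A[r]
            for i in range(r + 1, m):           # clear entries below the pivot
                f = A[i][c] / A[r][c]
                for j in range(c, n):
                    A[i][j] -= f * A[r][j]
            r += 1
            if r == m:
                break
        return r

    >>> rank([[1.0, 2.0], [2.0, 4.0]])
    1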
I read that there is an algorithm that can calculate the product of two matrices in roughly n^2.3 time, but I was unable to find the algorithm.
There have been several algorithms found for matrix multiplication with a big O less than n^3. But here is one of the problems with drawing conclusions from big O notation alone: it only gives the limiting behaviour as n goes to infinity. In this case a more useful metric is the total running time, which includes the coefficients and lower-order terms.
For the general algorithm the running time could be An^3 + Bn^2 + ...
For the Coppersmith–Winograd algorithm the coefficient on the n^2.375477 term is so large that, for all practical purposes, the general O(n^3) algorithm is faster.
The same is true of the Strassen algorithm if it is applied all the way down to single elements. However, there is a paper which claims that a hybrid algorithm, using the Strassen algorithm on matrix blocks down to some cutoff size and then switching to the O(n^3) algorithm, is faster for large matrices (see the sketch after this answer).
So although algorithms with a smaller time complexity exist, the only one I'm aware of that is useful in practice is the Strassen algorithm, and then only for large matrices (whatever "large" means).
Edit: Wikipedia actually has a nice summary of the algorithms for matrix multiplication. Here is a plot from that same article showing the reduction in the exponent omega for the different algorithms versus the year they were discovered.
https://en.wikipedia.org/wiki/Matrix_multiplication#mediaviewer/File:Bound_on_matrix_multiplication_omega_over_time.svg
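To make the hybrid idea concrete, here is a rough sketch (an illustration, not the paper's code): Strassen's seven block products on large matrices, falling back to the ordinary cubic product below an assumed cutoff, with power-of-two zero-padding for simplicity:

    # Sketch of a hybrid Strassen multiply: recurse on blocks, switch to the
    # ordinary O(n^3) product below a cutoff.  CUTOFF and the power-of-two
    # padding are illustrative choices, not tuned values.
    import numpy as np

    CUTOFF = 64  # assumed block size at which the plain product wins

    def strassen(A, B):
        n = A.shape[0]
        if n <= CUTOFF:
            return A @ B                      # plain cubic multiplication
        m = n // 2
        A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
        B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
        # Strassen's seven block products instead of eight
        M1 = strassen(A11 + A22, B11 + B22)
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4,           M1 - M2 + M3 + M6]])

    def multiply(A, B):
        # Pad both operands up to the next power of two, then recurse.
        n = max(A.shape + B.shape)
        size = 1
        while size < n:
            size *= 2
        Ap = np.zeros((size, size)); Ap[:A.shape[0], :A.shape[1]] = A
        Bp = np.zeros((size, size)); Bp[:B.shape[0], :B.shape[1]] = B
        return strassen(Ap, Bp)[:A.shape[0], :B.shape[1]]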
The Strassen algorithm is able to multiply matrices with an asymptotic complexity smaller than O(n^3), namely O(n^2.807).
The Coppersmith–Winograd algorithm computes the product of two N×N matrices in O(n^2.375477) asymptotic time.
Which is the best way to compute polynomial powers? Is it by following the multinomial theorem (Wikipedia), which takes O(?), or by FFT (fast Fourier transform) and then inverse FFT, with O((N*log(N))^2)?
FFT, if you need to do it frequently or on large polynomials. The naive multiplication algorithm is O(N^2), while FFT-based multiplication is O(N log N).
Here is a much better explanation with some neat applications: JeffE FFT
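A minimal sketch of FFT-based polynomial multiplication using numpy (the rounding step assumes integer coefficients); repeated squaring of this product is one way to get polynomial powers:

    # Sketch: multiply two coefficient lists via FFT in O(N log N),
    # versus the O(N^2) schoolbook convolution.
    import numpy as np

    def poly_multiply(p, q):
        n = len(p) + len(q) - 1              # number of product coefficients
        size = 1
        while size < n:
            size *= 2                        # FFT length: next power of two
        fp = np.fft.rfft(p, size)            # evaluate p at roots of unity
        fq = np.fft.rfft(q, size)
        prod = np.fft.irfft(fp * fq, size)   # pointwise multiply, interpolate back
        return np.rint(prod[:n]).astype(int) # round away floating-point noise

    >>> poly_multiply([1, 2, 3], [4, 5])     # (1 + 2x + 3x^2)(4 + 5x)
    array([ 4, 13, 22, 15])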
I have an equation that uses the inclusion-exclusion principle to calculate the probabilities of correlated events by removing the double counting of intersections.
Now I want to know the complexity of this equation: what is the cost of computing the inclusion-exclusion principle in relation to the number of elements? Is it exponential?
Well, the formula involves all subsets of the elements, and there are 2^n subsets of n elements, so evaluating it term by term is at least exponential in n.
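A term-by-term sketch makes the 2^n cost visible; prob_of_intersection below is a hypothetical callback (my own naming) that returns the probability of the intersection of any given subset of events:

    from itertools import combinations

    # Sketch: P(A_1 u ... u A_n) by inclusion-exclusion.
    # prob_of_intersection(subset) is a hypothetical callback returning the
    # probability of the intersection of the events in `subset`; the 2^n - 1
    # non-empty subsets enumerated below are the source of the exponential cost.
    def union_probability(events, prob_of_intersection):
        total = 0.0
        n = len(events)
        for k in range(1, n + 1):                    # subset size
            sign = (-1) ** (k + 1)                   # alternate +, -, +, ...
            for subset in combinations(events, k):   # all C(n, k) subsets of size k
                total += sign * prob_of_intersection(subset)
        return total

    >>> union_probability(["A", "B"], lambda s: 0.5 ** len(s))  # independent fair coins
    0.75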
As homework, I should implement integer multiplication on numbers of 1000 digits using a divide-and-conquer approach that works below O(n). What algorithm should I look into?
The Schönhage–Strassen algorithm is one of the fastest multiplication algorithms known. It takes O(n log n log log n) time.
Fürer's algorithm is the fastest large-number multiplication algorithm known so far and takes O(n log n 2^(O(log* n))) time.
I don't think any multiplication algorithm could take less than, or even equal to, O(n) time. Going below O(n) is simply not possible, since you have to read all n digits of the input.
Take a look at the Karatsuba algorithm. It involves a recursion step which you can easily model with divide-and-conquer.
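A minimal Karatsuba sketch on Python integers (an illustration only; digit-array versions are more usual for homework), where the three recursive half-size products give the O(n^log2(3)) ~ O(n^1.585) bound:

    # Sketch of Karatsuba multiplication: three recursive products of
    # half-size numbers instead of four.
    def karatsuba(x, y):
        if x < 10 or y < 10:                 # base case: single-digit operand
            return x * y
        m = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10 ** m)   # split x = high_x * 10^m + low_x
        high_y, low_y = divmod(y, 10 ** m)
        low = karatsuba(low_x, low_y)                      # low halves
        high = karatsuba(high_x, high_y)                   # high halves
        mid = karatsuba(low_x + high_x, low_y + high_y)    # combined halves
        # (low_x + high_x)(low_y + high_y) - low - high = the cross terms
        return high * 10 ** (2 * m) + (mid - low - high) * 10 ** m + low

    >>> karatsuba(1234, 5678)
    7006652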