Which is the best way to compute polynomial powers? Is it by following the multinomial theorem (Wikipedia), which takes O(?), or by FFT (fast Fourier transform) followed by an inverse FFT, with O((N*log(N))^2)?
Use the FFT if you need to do it frequently, or on large polynomials. The naive multiplication algorithm is O(N^2), while FFT-based multiplication is O(N log N).
Here is a much better explanation with some neat applications: JeffE FFT
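For a rough illustration of the FFT route, here is a minimal sketch (assuming numpy is available and that the coefficients are small integers, so rounding recovers them exactly): pad to the degree of the result, transform once, raise each sample to the k-th power, and transform back.

```python
import numpy as np

def poly_power(coeffs, k):
    """Raise a polynomial (coefficient list, lowest degree first) to the
    k-th power via one FFT, pointwise exponentiation, and one inverse FFT."""
    n = len(coeffs) - 1                   # degree of the input polynomial
    size = k * n + 1                      # the result has degree k*n
    freq = np.fft.rfft(coeffs, size)      # evaluate at enough sample points
    powered = freq ** k                   # pointwise k-th power in the frequency domain
    result = np.fft.irfft(powered, size)  # interpolate back to coefficients
    return np.rint(result).astype(int)    # round off floating-point noise

# (x + 1)^3 = x^3 + 3x^2 + 3x + 1
print(poly_power([1, 1], 3))              # -> [1 3 3 1]
```

This single transform of length k*n + 1 costs O(kn log(kn)), which is where the FFT approach gets its advantage for large inputs.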
I am aware that we can use the Fast Fourier Transform to multiply two polynomials of degree n in O(n log n) time. This is a big saving over the brute-force approach, which takes O(n^2) time. Is it possible to generalize this result to two polynomials of different degrees?
Clearly, it can be done in O(n log n) where n is the larger of the two degrees, but I'm looking for a bound that depends on both n and m.
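Not a full answer, but to make the zero-padding concrete: a minimal numpy sketch (my own illustration) that pads both inputs to length n + m + 1, so the transforms have size about n + m and the cost is O((n + m) log(n + m)) rather than depending only on the larger degree.

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first)
    of possibly different degrees via FFT."""
    size = len(a) + len(b) - 1            # the product has degree n + m
    fa = np.fft.rfft(a, size)             # both inputs are zero-padded to 'size'
    fb = np.fft.rfft(b, size)
    product = np.fft.irfft(fa * fb, size)
    return np.rint(product).astype(int)   # exact for small integer coefficients

# (x^2 + 2x + 3) * (x + 4) = x^3 + 6x^2 + 11x + 12
print(poly_multiply([3, 2, 1], [4, 1]))   # -> [12 11  6  1]
```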
I read that there is an algorithm that can compute the product of two matrices with n^(2.3) complexity, but I was unable to find the algorithm.
There have been several algorithms found for matrix multiplication with a big O less than n^3. But here's one of the problems with drawing conclusions from big O notation: it only gives the limiting behaviour as n goes to infinity. In this case a more useful metric is the total running time, which includes the coefficients and lower-order terms.
For the general algorithm the time complexity could be An^3 + Bn^2 +...
For the case of the Coppersmith-Winograd algorithm the coefficient for the n^2.375477 term is so large that for all practical purposes the general algorithm with O(n^3) complexity is faster.
This is also true for the Strassen algorithm if it's applied all the way down to single elements. However, there is a paper which claims that a hybrid algorithm, one that uses the Strassen algorithm on matrix blocks down to some size limit and then switches to the O(n^3) algorithm, is faster for large matrices (a sketch of that idea follows below).
So although there exist algorithms with a smaller time complexity, the only one I'm aware of that is useful in practice is the Strassen algorithm, and only for large matrices (whatever "large" means).
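To make the hybrid idea concrete, here is a rough numpy sketch (my own illustration, not the algorithm from the paper mentioned above) that uses Strassen's seven half-size products down to a cutoff and then falls back to the ordinary routine; it assumes square matrices whose size is a power of two.

```python
import numpy as np

def strassen(a, b, cutoff=64):
    """Multiply square matrices with Strassen's 7-product recursion,
    switching to plain multiplication below 'cutoff'.
    Assumes a and b are n x n with n a power of two."""
    n = a.shape[0]
    if n <= cutoff:
        return a @ b                          # fall back to the O(n^3) routine
    k = n // 2
    a11, a12, a21, a22 = a[:k, :k], a[:k, k:], a[k:, :k], a[k:, k:]
    b11, b12, b21, b22 = b[:k, :k], b[:k, k:], b[k:, :k], b[k:, k:]
    m1 = strassen(a11 + a22, b11 + b22, cutoff)
    m2 = strassen(a21 + a22, b11, cutoff)
    m3 = strassen(a11, b12 - b22, cutoff)
    m4 = strassen(a22, b21 - b11, cutoff)
    m5 = strassen(a11 + a12, b22, cutoff)
    m6 = strassen(a21 - a11, b11 + b12, cutoff)
    m7 = strassen(a12 - a22, b21 + b22, cutoff)
    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return np.block([[c11, c12], [c21, c22]])

a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
print(np.allclose(strassen(a, b), a @ b))     # True
```

The cutoff is exactly the tuning knob the hybrid approach relies on: below it, the constant factors of the plain triple loop (or a tuned BLAS routine) win; above it, the seven-instead-of-eight recursive products pay off.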
Edit: Wikipedia actually has a nice summary of the algorithms for matrix multiplication. Here is a plot from that same article showing the reduction in omega for the different algorithms vs. the year they were discovered.
https://en.wikipedia.org/wiki/Matrix_multiplication#mediaviewer/File:Bound_on_matrix_multiplication_omega_over_time.svg
The Strassen algorithm is able to multiply matrices with an asymptotic complexity smaller than O(n^3).
The Coppersmith–Winograd algorithm computes the product of two N×N matrices in O(n^2.375477) asymptotic time.
I read that the computational complexity of the general convolution algorithm is O(n^2), while by means of the FFT it is O(n log n).
What about convolution in 2-D and 3-D?
Any reference?
For two- and three-dimensional convolution and the Fast Fourier Transform, the complexities are the following:
             2D                3D
Convolution  O(n^4)            O(n^6)
FFT          O(n^2 log^2 n)    O(n^3 log^3 n)
Reference: Slides on Digital Image Processing, slide no. 34.
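To make the 2-D case concrete, here is a small sketch (assuming numpy; scipy.signal.fftconvolve does essentially the same job) that computes a full linear 2-D convolution by zero-padding both arrays, transforming, multiplying pointwise, and transforming back.

```python
import numpy as np

def conv2d_fft(image, kernel):
    """Full 2-D linear convolution via FFT: zero-pad, transform,
    multiply pointwise, and transform back."""
    out_shape = (image.shape[0] + kernel.shape[0] - 1,
                 image.shape[1] + kernel.shape[1] - 1)
    f_img = np.fft.rfft2(image, out_shape)     # both inputs padded to out_shape
    f_ker = np.fft.rfft2(kernel, out_shape)
    return np.fft.irfft2(f_img * f_ker, out_shape)

image = np.random.rand(256, 256)
kernel = np.random.rand(5, 5)
result = conv2d_fft(image, kernel)             # shape (260, 260)
```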
The Gaussian elimination algorithm, in transform and conquer, has O(n^3) complexity. Is there any technique that gives a more efficient complexity for this algorithm?
There are algorithms for matrix inversion with better asymptotic complexity, e.g., the Strassen algorithm with complexity O(n^2.807) and the Coppersmith–Winograd algorithm with complexity O(n^2.376).
(Note that the complexities of matrix multiplication and matrix inversion are the same.)
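To illustrate that note, here is a minimal sketch (my own illustration, assuming numpy, an n that is a power of two, and a matrix whose leading principal blocks are invertible, e.g. a symmetric positive-definite one) that inverts a matrix by recursive 2x2 block elimination. Each level needs only a constant number of half-size multiplications, so plugging in a sub-cubic multiplication routine yields a sub-cubic inversion.

```python
import numpy as np

def block_inverse(a, cutoff=2):
    """Invert a matrix by recursive 2x2 block elimination (Schur complement).
    Assumes n is a power of two and every leading principal block is
    invertible (true, e.g., for symmetric positive-definite matrices)."""
    n = a.shape[0]
    if n <= cutoff:
        return np.linalg.inv(a)
    k = n // 2
    p, q = a[:k, :k], a[:k, k:]
    r, s = a[k:, :k], a[k:, k:]
    p_inv = block_inverse(p, cutoff)             # half-size inverse (recursive)
    schur = s - r @ p_inv @ q                    # Schur complement of p
    schur_inv = block_inverse(schur, cutoff)     # half-size inverse (recursive)
    top_left = p_inv + p_inv @ q @ schur_inv @ r @ p_inv
    top_right = -p_inv @ q @ schur_inv
    bottom_left = -schur_inv @ r @ p_inv
    return np.block([[top_left, top_right], [bottom_left, schur_inv]])

a = np.random.rand(8, 8)
spd = a @ a.T + 8 * np.eye(8)                    # symmetric positive-definite test matrix
print(np.allclose(block_inverse(spd) @ spd, np.eye(8)))  # True
```

Here the @ products are ordinary numpy multiplications; replacing them with a faster multiplication routine would carry its exponent over to the inversion.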
It depends on which complexity you measure:
Number of multiplications: No, by changing the technique you can only worsen the complexity of Gaussian elimination.
Number of time steps: Yes, parallel implementation of the row operations reduces time complexity to O(n).
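For reference, the O(n^3) in the question comes from three nested loops over n, as in this minimal forward-elimination sketch (plain Python, no pivoting, so it assumes no zero pivot is encountered):

```python
def forward_eliminate(a):
    """Reduce a square matrix (list of lists of floats) to upper-triangular
    form in place. The three nested loops give the O(n^3) multiplication count."""
    n = len(a)
    for k in range(n):                  # pivot column
        for i in range(k + 1, n):       # rows below the pivot
            factor = a[i][k] / a[k][k]  # assumes a[k][k] != 0 (no pivoting)
            for j in range(k, n):       # update the remaining entries of row i
                a[i][j] -= factor * a[k][j]
    return a
```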
As homework, I should implement integer multiplication on numbers of 1000 digits using a divide-and-conquer approach that works below O(n). What algorithm should I look into?
The Schönhage–Strassen algorithm is one of the fastest multiplication algorithms known. It takes O(n log n log log n) time.
Fürer's algorithm is the fastest large-number multiplication algorithm known so far and takes O(n * log n * 2^(O(log* n))) time.
I don't think any multiplication algorithm could take less than O(n) time; it has to read all n input digits, so that's simply not possible. Whether O(n) itself is achievable is another matter.
Take a look at the Karatsuba algorithm. It involves a recursion step which you can easily model with divide-and-conquer.
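A rough Python sketch of Karatsuba's recursion (working directly on Python ints for simplicity, which is an assumption; a digit-array version follows the same structure):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's three-recursive-call
    divide-and-conquer scheme, roughly O(n^1.585) digit operations."""
    if x < 10 or y < 10:                     # single-digit base case
        return x * y
    m = max(len(str(x)), len(str(y))) // 2   # split position (in decimal digits)
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)                     # low * low
    z2 = karatsuba(high_x, high_y)                   # high * high
    z1 = karatsuba(low_x + high_x, low_y + high_y)   # (low+high) * (low+high)
    return z2 * 10 ** (2 * m) + (z1 - z2 - z0) * 10 ** m + z0

print(karatsuba(1234, 5678))   # -> 7006652
```

The key trick is that the middle term is recovered as z1 - z2 - z0, so each level does three half-size multiplications instead of four.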