I read that the computational complexity of the general convolution algorithm is O(n^2), while by means of the FFT it is O(n log n).
What about convolution in 2-D and 3-D?
Any reference?
For two- and three-dimensional convolution and the Fast Fourier Transform, the complexities are as follows:

            2D                3D
Convolution O(n^4)            O(n^6)
FFT         O(n^2 log^2 n)    O(n^3 log^3 n)
Reference: Slides on Digital Image Processing, slide no. 34.
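To make the comparison concrete, here is a minimal sketch of 2D convolution via the FFT, assuming NumPy; the function name conv2d_fft is my own, and the zero-padding is what makes the FFT's circular convolution match the ordinary linear one:

```python
import numpy as np

def conv2d_fft(image, kernel):
    # Pad both inputs to the full output size so the circular convolution
    # computed via the FFT equals the ordinary (linear) convolution.
    s0 = image.shape[0] + kernel.shape[0] - 1
    s1 = image.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(image, s=(s0, s1))      # forward transforms dominate the cost
    G = np.fft.rfft2(kernel, s=(s0, s1))
    return np.fft.irfft2(F * G, s=(s0, s1))  # pointwise product, then inverse FFT
```

For an n-by-n image and kernel, the direct double loop touches all n^2 output positions times n^2 kernel entries (the O(n^4) above), while this version replaces that with a few 2D FFTs and an elementwise product.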
I read that there is an algorithm that can compute the product of two matrices in O(n^2.3) time, but I was unable to find the algorithm.
Several algorithms have been found for matrix multiplication with a big O less than n^3. But here's one of the problems with drawing conclusions based on big O notation: it only gives the limiting behaviour as n goes to infinity. In such cases a more useful metric is the total time complexity, which includes the coefficients and lower-order terms.
For the general algorithm the time complexity could be An^3 + Bn^2 +...
For the case of the Coppersmith-Winograd algorithm the coefficient for the n^2.375477 term is so large that for all practical purposes the general algorithm with O(n^3) complexity is faster.
This is also true for the Strassen algorithm if it's applied all the way down to single elements. However, there is a paper which claims that a hybrid algorithm, which uses the Strassen algorithm on matrix blocks down to some size limit and then switches to the O(n^3) algorithm, is faster for large matrices.

So although there exist algorithms with a smaller time complexity, the only one I'm aware of that is useful in practice is the Strassen algorithm, and that's only for large matrices (whatever "large" means).
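To illustrate that crossover idea, here is a minimal sketch of the hybrid approach, assuming NumPy, square matrices whose side is a power of two, and a hypothetical CROSSOVER constant that would need machine-specific tuning:

```python
import numpy as np

CROSSOVER = 64  # below this block size, plain O(n^3) multiplication wins

def strassen(A, B):
    n = A.shape[0]
    if n <= CROSSOVER:
        return A @ B  # switch to the cubic algorithm on small blocks
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven block products replace the usual eight
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The recursion does 7 half-size multiplications instead of 8, which is where the O(n^log2(7)) ≈ O(n^2.807) bound comes from; the crossover avoids paying Strassen's large additive overhead on small blocks.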
Edit: Wikipedia actually has a nice summary of the algorithms for matrix multiplication. Here is a plot from that same link showing the reduction in the exponent omega for the different algorithms vs. the year they were discovered.
https://en.wikipedia.org/wiki/Matrix_multiplication#mediaviewer/File:Bound_on_matrix_multiplication_omega_over_time.svg
The Strassen Algorithm is able to multiply matrices with an asymptotic complexity smaller than O(n^3).
The Coppersmith–Winograd algorithm computes the product of two N×N matrices in O(n^2.375477) asymptotic time.
What is the best way to compute polynomial powers? Is it by following the multinomial theorem (Wikipedia), which takes O(?), or by FFT (fast Fourier transform) and then inverse FFT, with O((N*log(N))^2)?
FFT if you need to do it frequently, or on large polynomials. The naive multiplication algorithm is O(N^2), while FFT is O(N log(N)).
Here is a much better explanation with some neat applications: JeffE FFT
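As a concrete sketch of the FFT approach, here is O(N log N) polynomial multiplication plus powering by repeated squaring, assuming NumPy; poly_mul and poly_pow are my own names, and the floating-point FFT returns approximate coefficients:

```python
import numpy as np

def poly_mul(p, q):
    # Coefficient arrays, lowest degree first; pad to the product's length.
    n = len(p) + len(q) - 1
    P = np.fft.rfft(p, n)          # forward transforms: O(N log N)
    Q = np.fft.rfft(q, n)
    return np.fft.irfft(P * Q, n)  # pointwise product, then inverse FFT

def poly_pow(p, k):
    # Exponentiation by squaring: O(log k) multiplications instead of k - 1.
    result = np.array([1.0])
    while k:
        if k & 1:
            result = poly_mul(result, p)
        p = poly_mul(p, p)
        k >>= 1
    return result

# poly_pow([1.0, 1.0], 3) is approximately [1, 3, 3, 1], i.e. (1 + x)^3
```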
Definition:
O(k·M(n)) is the computational complexity of modular exponentiation, where k is the number of exponent bits, n is the number of digits, and M(n) is the computational complexity of Newton's division algorithm.
How can I determine whether this computational complexity is polynomial?
In fact, it's the notation M(n) that confuses me most.
Think about the division algorithm.
Does the division algorithm have complexity O(n)? If so, then modular exponentiation is O(k n).
Does the division algorithm have complexity O(n^c) for some constant c? If so, then modular exponentiation is O(k n^c).
Does the division algorithm have complexity O(log n)? If so, then modular exponentiation is O(k log n).
Etc.
The complexity of modular exponentiation is polynomial in the length of the exponent and the length of the modulus even with regular long division, so it is also polynomial with a faster division algorithm. M(n) is the complexity of multiplying two n-digit/bit numbers together (see here).
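To see where the k and the M(n) come from, here is a minimal sketch of square-and-multiply modular exponentiation; each loop iteration consumes one exponent bit and performs a constant number of n-digit multiplications and reductions, giving O(k·M(n)) overall:

```python
def mod_pow(base, exponent, modulus):
    # One loop iteration per exponent bit: k iterations in total.
    result = 1
    base %= modulus
    while exponent:
        if exponent & 1:
            result = (result * base) % modulus  # one M(n)-cost multiply + reduce
        base = (base * base) % modulus          # one M(n)-cost multiply + reduce
        exponent >>= 1
    return result

# Matches Python's built-in: mod_pow(5, 117, 19) == pow(5, 117, 19)
```

Since even schoolbook multiplication gives M(n) = O(n^2), the total O(k·M(n)) is polynomial in the input lengths k and n.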
The Gaussian elimination algorithm (a transform-and-conquer technique) has O(n^3) complexity. Is there any technique that gives this algorithm a more efficient complexity?
There are algorithms for matrix inversion with better asymptotic complexity, e.g., the Strassen algorithm with complexity O(n^2.807) and the Coppersmith–Winograd algorithm with complexity O(n^2.376).
(Note that the complexities of matrix multiplication and matrix inversion are asymptotically the same.)
It depends on which complexity you measure:
Number of multiplications: No, by changing the technique you can only worsen the complexity of Gaussian elimination.
Number of time steps: Yes, parallel implementation of the row operations reduces time complexity to O(n).
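For reference, here is a minimal sketch of the standard O(n^3) elimination the question refers to, assuming NumPy, solving Ax = b with partial pivoting and no handling of singular matrices:

```python
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float)  # astype copies, so the caller's arrays are untouched
    b = b.astype(float)
    n = len(b)
    for i in range(n):                           # n pivot columns ...
        p = i + np.argmax(np.abs(A[i:, i]))      # partial pivoting for stability
        A[[i, p]], b[[i, p]] = A[[p, i]], b[[p, i]]
        for j in range(i + 1, n):                # ... times O(n) rows below ...
            f = A[j, i] / A[i, i]
            A[j, i:] -= f * A[i, i:]             # ... times O(n) work per row = O(n^3)
            b[j] -= f * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution: O(n^2)
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

The row operations in the inner loop are exactly the ones a parallel implementation distributes across processors to bring the time complexity down.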