This is a solved problem in "Introduction to Algorithms" by Cormen et al., Ch. 15, Section 15.2: Matrix-Chain Multiplication, p. 373.
The objective is to parenthesize the matrix chain product A_1 · A_2 · ... · A_n so that the number of scalar multiplications is minimized.
For A_i · A_{i+1} · ... · A_k · A_{k+1} · ... · A_j,
matrix A_i has dimensions p_{i-1} × p_i.
The author derives the recurrence
m[i,j] = 0, if i = j
m[i,j] = min over k from i to j-1 of { m[i,k] + m[k+1,j] + p_{i-1} · p_k · p_j }, if i < j
(m[i,j] is the minimum number of scalar multiplications required for the product A_i · ... · A_j.)
So far I understood, but then the author states that the time complexity is O(n^3).
When I look at the pseudo-code, there are three for loops, so that's correct. But I don't understand this intuitively just by looking at the recurrence.
Can anyone please help?
The final answer is m[1,n], but every m[i,j] with 1 <= i <= j <= n needs to be calculated before m[1,n] can be. There are O(n^2) such subproblems.
From the recurrence you can see that each m[i,j] calculation takes O(n) work, since the minimum ranges over up to n-1 choices of k.
So the complete solution is O(n^3).
There are O(n^2) unique subproblems in any given MCM instance, and for each such subproblem there are O(n) possible splits.
So it is O(n^3).
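For intuition, here's a minimal bottom-up sketch in Python (my own code, not the book's pseudocode): the two outer loops enumerate the O(n^2) subproblems and the inner min ranges over the O(n) split points, which is exactly where the three nested loops, and hence O(n^3), come from.

```python
def matrix_chain_order(p):
    """p[i-1] x p[i] is the dimension of matrix A_i; len(p) == n + 1."""
    n = len(p) - 1
    # m[i][j] = min scalar multiplications for A_i ... A_j (1-indexed).
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # O(n) chain lengths
        for i in range(1, n - length + 2):  # O(n) starting positions
            j = i + length - 1
            m[i][j] = min(                  # O(n) split points k
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
    return m[1][n]

# Example: A1 is 10x100, A2 is 100x5, A3 is 5x50 -> 7500 multiplications.
print(matrix_chain_order([10, 100, 5, 50]))
```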
Say I need to calculate the time complexity of a function
1⁶+2⁶+3⁶+...+n⁶. I am pretty sure this would be O(n⁷), but I only figure that because I know that Σ i for i=0 to n is in O(n²). I cannot find a simple closed-form formula for a summation of iᵏ. Can anyone provide more detail on how to actually calculate the time complexity?
Thanks!
An easy proof that it's Θ(n⁷) is to observe that:
1⁶+2⁶+3⁶+...+n⁶ <= n⁶+n⁶+...+n⁶ = n·n⁶ = n⁷
(replacing every term with n⁶ can only make the sum larger).
and
1⁶+2⁶+3⁶+...+n⁶ >= (n/2+1)⁶+...+n⁶ >= (n/2)⁶+(n/2)⁶+...+(n/2)⁶ = (n/2)·(n/2)⁶ = n⁷/2⁷
(in the first step we discard the terms with base less than or equal to n/2, and in the second step we replace each remaining base with n/2. Both steps can only decrease the sum). (Note: I've assumed n is even, but you can extend to odd n with a bit of minor fiddling.)
Thus 1⁶+2⁶+3⁶+...+n⁶ is bounded above and below by constant multiples of n⁷, and so by definition it is Θ(n⁷).
As David Eisenstat suggests in the comments, another proof is to consider the (continuous) graphs y=x⁶ and y=(x+1)⁶ from 0 to n. The areas under these curves bound the sum from below and above, and are readily calculated via integrals: the first is n⁷/7 and the second is ((n+1)⁷−1)/7. This shows that the sum is n⁷/7 + o(n⁷).
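A quick numeric sanity check of both bounds (a throwaway script, not part of the proof):

```python
# Check n^7/7 <= 1^6 + 2^6 + ... + n^6 <= ((n+1)^7 - 1)/7 for a few n;
# the ratio sum / n^7 should approach 1/7 ~ 0.142857.
for n in (10, 100, 1000):
    s = sum(i ** 6 for i in range(1, n + 1))
    assert n ** 7 / 7 <= s <= ((n + 1) ** 7 - 1) / 7
    print(n, s / n ** 7)
```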
When I was learning the divide-and-conquer approach, I came across this example (https://www.geeksforgeeks.org/multiply-two-polynomials-2/) about polynomial multiplication. I cannot understand why the time required to add the four results (subproblems) is Theta(n). I thought addition takes only constant time. Why linear time? Thanks in advance!
You're right that adding two numbers takes constant time. But here "adding the results" means adding polynomials: the four partial results are polynomials with O(n) terms, and to combine them you must add the coefficients of each power of x (x, x^2, ..., x^n) separately. Summing coefficient by coefficient over O(n) terms takes Theta(n).
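A minimal sketch of what that addition looks like (coefficient lists, lowest power first, are my own representation, not the linked article's):

```python
def poly_add(a, b):
    """Add two polynomials given as coefficient lists, lowest power first.
    One pass over the coefficients, so Theta(n) for degree-n inputs."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

# (3 + 2x) + (1 + x + 4x^2) = 4 + 3x + 4x^2
print(poly_add([3, 2], [1, 1, 4]))  # [4, 3, 4]
```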
I came across this problem while studying. It asks you to consider a data structure on which a sequence of n operations is performed. If the kth operation has a cost of k when k is a perfect square and a cost of 1 otherwise, what is the total cost of the operations, and what is the amortized cost of each operation?
I am having a bit of difficulty coming up with a summation that captures the perfect-square costs, so I can see what the sum yields. Any thoughts/advice?
The sum of i^2 from 1 to n can be calculated as n(n+1)(2n+1)/6. I found it in a math book; see also http://mathworld.wolfram.com/Sum.html, formula (6).
To apply it here, note that among the first n operations the perfect squares are 1^2, 2^2, ..., m^2 with m = floor(sqrt(n)). The non-square operations contribute at most n in total, and the square operations contribute 1^2 + 2^2 + ... + m^2 = m(m+1)(2m+1)/6, which is proportional to m^3 = n^(3/2). So the total cost is Θ(n^(3/2)), and the amortized cost per operation is Θ(n^(3/2)) / n = Θ(√n).
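A quick empirical check (throwaway script) that the total grows like n^(3/2), and hence the per-operation amortized cost like sqrt(n):

```python
import math

def total_cost(n):
    # Operation k costs k if k is a perfect square, else 1.
    return sum(k if math.isqrt(k) ** 2 == k else 1 for k in range(1, n + 1))

for n in (100, 10_000, 1_000_000):
    t = total_cost(n)
    # t / n^(3/2) settles near a constant (~1/3, from the square terms),
    # so the amortized cost t / n grows like sqrt(n).
    print(n, t / n ** 1.5, t / n)
```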
I have seen this problem and I couldn't solve it.
The problem is finding the complexity of computing C(m, n) = C(m-1, n-1) + C(m, n-1) (Pascal's formula).
It's an iterated formula, but with two variables, and I have no idea how to solve this.
I would be happy for your help... :)
If you consider the 2D representation of this formula, you are summing numbers that cover the "area" of a triangle given its "height", so the complexity is O(n^2) if you evaluate the formula directly, computing each entry once.
Another way to see it: for a fixed n, evaluating the formula across one row takes linear time; multiply that by the linear number of rows and you still get O(n^2).
This line of thought seems to match what they demonstrate here:
http://www.geeksforgeeks.org/pascal-triangle/
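A minimal sketch of that row-by-row view in Python (my own code, not the linked page's): the triangle has O(n^2) entries and each one costs O(1) given the previous row, so building the first n rows is O(n^2).

```python
def pascal_triangle(n):
    """Build the first n rows of Pascal's triangle bottom-up.
    Row i has i+1 entries, so total work is 1+2+...+n = O(n^2)."""
    rows = [[1]]
    for i in range(1, n):
        prev = rows[-1]
        row = [1] + [prev[j - 1] + prev[j] for j in range(1, i)] + [1]
        rows.append(row)
    return rows

for row in pascal_triangle(5):
    print(row)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
```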
Input: an n×n matrix of positive/negative numbers, and an integer k.
Output: the submatrix with at least k elements that maximizes the sum of its elements divided by its number of elements.
Is there any algorithm better than O(n^4) for this problem?
An FFT-based divide-and-conquer approach to this problem:
https://github.com/thearn/maximum-submatrix-sum
It's not as efficient as the Kadane-based approach (O(N^3 log N) vs. O(N^3)), but it does give a different take on solution construction.
There is an O(n^3) 2-D Kadane algorithm for finding the maximum-sum submatrix (i.e., subrectangle) of an n×n matrix; you can find posts about it on SO, or read about it online. Once you understand how that algorithm works, it is clear that you get an O(n^3) solution to your problem if you can solve the 1-D version, finding a maximum-average subinterval of length at least k in an array of n numbers, in O(n) time. That is indeed possible; see the paper cs.slu.edu/~goldwasser/publications/DensityPreprint.pdf
Thus there is an O(n^3) time solution for your problem.
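Here's a sketch of that reduction in Python (my own code; for the 1-D step it uses binary search on the answer instead of the paper's O(n) algorithm, so this version runs in O(n^3 log(1/eps)) rather than O(n^3)). The key mapping: a submatrix spanning h rows has at least k elements iff it spans at least ceil(k/h) columns.

```python
def max_avg_subarray_min_len(a, k, eps=1e-7):
    """Max average over subarrays of a with length >= k,
    by binary search on the answer."""
    lo, hi = min(a), max(a)

    def feasible(x):
        # Does some subarray of length >= k have average >= x?
        # Using prefix sums of (a[i] - x): need pre[i] - min(pre[0..i-k]) >= 0.
        pre = [0.0]
        for v in a:
            pre.append(pre[-1] + v - x)
        min_pre = pre[0]
        for i in range(k, len(pre)):
            min_pre = min(min_pre, pre[i - k])
            if pre[i] - min_pre >= 0:
                return True
        return False

    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

def max_avg_submatrix(mat, k):
    """Max (sum / element count) over submatrices with >= k elements."""
    n = len(mat)
    best = float('-inf')
    for top in range(n):
        col = [0.0] * n             # column sums for rows top..bot
        for bot in range(top, n):
            h = bot - top + 1
            for c in range(n):
                col[c] += mat[bot][c]
            w = -(-k // h)          # need at least ceil(k/h) columns
            if w <= n:
                # per-element average = (average of column-sum entries) / h
                best = max(best, max_avg_subarray_min_len(col, w) / h)
    return best

# Example: with k=2, the best submatrix of [[1, -2], [3, 4]] is [3 4], avg 3.5.
print(round(max_avg_submatrix([[1, -2], [3, 4]], 2), 3))
```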