Complexity of Pascal's formula - complexity-theory

I have seen this problem and couldn't solve it.
The problem is to find the complexity of evaluating C(m, n) = C(m-1, n-1) + C(m, n-1) (Pascal's formula).
It's a recursive formula, but with two variables, and I have no idea how to approach it.
I would be happy for your help... :)

If you consider the 2D representation of this formula, you end up summing numbers that cover the "area" of a triangle of a given "height", so the complexity is O(n^2) if you evaluate the formula directly with a table.
Another way to see it: for a fixed row, one pass of the formula costs linear time, and there are linearly many rows, so multiplying the two you again get O(n^2).
This line of thought matches what they demonstrate here:
http://www.geeksforgeeks.org/pascal-triangle/
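For concreteness, here is a minimal Python sketch of that table-based, row-by-row evaluation (I'm assuming the usual base cases, C(m, 0) = C(m, m) = 1, and standard binomial indexing); it makes the quadratic cost visible:

    def binomial(n, k):
        # Build Pascal's triangle row by row; row i has i + 1 entries,
        # so the total number of additions is 1 + 2 + ... + (n + 1) = O(n^2).
        row = [1]  # row 0
        for _ in range(n):
            # Each inner entry is the sum of the two entries above it.
            row = [1] + [row[j] + row[j + 1] for j in range(len(row) - 1)] + [1]
        return row[k]

    print(binomial(5, 2))  # 10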

Related

Big-O of a summation of i^k

Say I need to calculate the time complexity of computing the sum
1⁶+2⁶+3⁶+...+n⁶. I am pretty sure this is O(n⁷), but I only figure that because I know that Σi from i=0 to n is in O(n²). I cannot find a simple closed-form formula for the summation of i^k. Can anyone provide more detail on how to actually calculate the time complexity?
Thanks!
An easy proof that it's Θ(n⁷) is to observe that:
1⁶+2⁶+3⁶+...+n⁶ <= n⁶+n⁶+...+n⁶ = n·n⁶ = n⁷
(replacing all numbers with n makes the sum larger).
and
1⁶+2⁶+3⁶+...+n⁶ >= (n/2+1)⁶+...+n⁶ >= (n/2)⁶+(n/2)⁶+...+(n/2)⁶ = (n/2)·(n/2)⁶ = n⁷/2⁷
(in the first step we discard the terms less than or equal to n/2, and in the second step we replace each remaining number with n/2; both steps can only decrease the sum). (Note: I've assumed n is even, but you can extend to odd n with a bit of minor fiddling.)
Thus 1⁶+2⁶+3⁶+...+n⁶ is bounded above and below by constant multiples of n⁷, and so by definition is Θ(n⁷).
As David Eisenstat suggests in the comments, another proof is to consider the (continuous) graphs y=x⁶ and y=(x+1)⁶ from 0 to n. The areas under these curves bound the sum below and above, and are readily calculated via integrals: the first is n⁷/7 and the second is ((n+1)⁷-1)/7. This shows that the sum is n⁷/7 + o(n⁷).
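If it helps, here is a quick numeric check of both bounds in Python (just a sanity sketch; the inequalities above are the actual proof):

    # Lower/upper bounds: n^7 / 2^7 <= 1^6 + ... + n^6 <= n^7,
    # and the ratio sum / n^7 should approach 1/7 ~ 0.1428 as n grows.
    for n in (10, 100, 1000):
        s = sum(i**6 for i in range(1, n + 1))
        print(n, n**7 // 2**7 <= s <= n**7, s / n**7)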

Divide-and-conquer: Polynomial multiplication time complexity

When I was learning the divide-and-conquer approach, I came across this example (https://www.geeksforgeeks.org/multiply-two-polynomials-2/) about polynomial multiplication. I cannot understand why the time required to add the four results (subproblems) is Theta(n). I thought addition only takes constant time. Why linear time? Thanks in advance!
You're right that adding two numbers takes constant time. But here "adding the results" means adding polynomials, not numbers: to build the final result you must add the coefficients of each power of x, i.e., of x, x^2, ..., x^n. Adding four polynomials of degree O(n) therefore touches O(n) coefficients, which takes O(n).
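A small sketch in Python (using the usual coefficient-list representation, which is an assumption on my part) makes the linear cost concrete:

    def poly_add(a, b):
        # Polynomials as coefficient lists: index = power of x.
        # Every coefficient pair must be touched once, so adding two
        # degree-O(n) polynomials costs Theta(n), not O(1).
        n = max(len(a), len(b))
        return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                for i in range(n)]

    # (1 + 2x) + (3 + 4x + 5x^2) = 4 + 6x + 5x^2
    print(poly_add([1, 2], [3, 4, 5]))  # [4, 6, 5]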

Finding the best BigO notation with two strong terms

I am asked to find the simplest exact answer and the best big-O expression for:
the sum of n, for n = j to k.
I have computed what I think the simplest exact answer is as:
-1/2(j-k-1)(j+k)
Now when I go to take the best possible big-O expression I am stuck.
From my understanding, big-O describes the worst-case running time of an algorithm by keeping the term that dominates the rest. So for example I know:
n^2+n+1 = O(n^2)
Because in the long run, n^2 is the only term that matters for big n.
My confusion with the original formula in question:
-1/2(j-k-1)(j+k)
is what the strongest term is. To try to work that out, I expand it to get:
-1/2(j^2 + jk - jk - j - k^2 - k) = -1/2(j^2 - j - k^2 - k)
which still does not make it clear to me, since we now have both j^2 and -k^2. Is the answer I am looking for O(k^2), since k is the endpoint of my summation?
Thanks for any help.
EDIT: It is unspecified as to which variable (j or k) is larger.
If you know k > j, then you have O(k^2). Intuitively, that's because as numbers get bigger, squares get farther apart.
It's a little unclear from your question which variable is the larger of the two, but I've assumed that it's k.
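If it helps, a quick Python check of the closed form and of the claim that k^2 is the dominant term (a sketch, assuming k >= j):

    # The closed form -1/2(j-k-1)(j+k) equals (k-j+1)(j+k)/2; check it
    # against a direct sum, then note the leading term is k^2/2 for k > j.
    def closed_form(j, k):
        return (k - j + 1) * (j + k) // 2

    for j, k in [(1, 10), (5, 100), (3, 7)]:
        assert closed_form(j, k) == sum(range(j, k + 1))
    print("closed form matches; leading term k^2/2, hence O(k^2) when k > j")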

Time complexity to find path from bottom-left to top-right block

I am confused about calculating the time complexity of one problem; please help me with it.
Problem statement:
A 2-D matrix is given. You start at the bottom-left block and have to reach the top-right block. One constraint is given: from every block you can move only one step, either upward or to the right.
How many such ways are there? Prove it mathematically.
Is the time complexity polynomial or exponential?
My effort:
If the matrix is of N*N size, then you have to move exactly 2N steps, out of which N steps are R (right) and the remaining N steps are U (up).
So if we simplify this, it's a permutation-and-combination problem: given a string that contains only the two letters R and U, in how many ways can you arrange it?
My answer is
C(2N, N) * C(N, N) * 2
Questions
Is my above logic correct? If not, please correct me.
Is the above formula polynomial or exponential?
Your idea is correct, but the answer is a bit inaccurate.
The string only has the letters R and U, but its length is 2N-2: moving between opposite corner blocks of an N*N grid takes N-1 steps right and N-1 steps up.
So the problem is to arrange 2N-2 objects such that N-1 are of one type and N-1 are of the other type.
The number of possibilities = factorial(2N-2) / ( factorial(N-1) * factorial(N-1) )
If you take the cost of multiplying two numbers as O(k), then computing this value has a time complexity of O(N*k).
To get an idea of the cost of multiplication for various algorithms, you can visit http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
EDIT:
This number is one of the binomial coefficients in the expansion of (1+1)^(2N-2) = 2^(2N-2), so it is bounded above by 2^(2N-2).
Hence we can safely say that the count grows at an exponential rate, not a factorial one.
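A direct Python sketch of the corrected count (using math.factorial; note that Python's big-integer arithmetic is what hides the O(k) per-multiplication cost mentioned above):

    from math import factorial

    def grid_paths(n):
        # A monotone path between opposite corner blocks of an n x n grid
        # takes 2n - 2 steps: n - 1 rights and n - 1 ups, in any order,
        # giving factorial(2n-2) / (factorial(n-1) * factorial(n-1)).
        return factorial(2 * n - 2) // (factorial(n - 1) ** 2)

    print(grid_paths(3))  # 6: RRUU, RURU, RUUR, URRU, URUR, UURR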

Time complexity of matrix chain multiplication

This is a solved problem in "Introduction to Algorithms" by Cormen et al.,
Ch. 15, Section 15.2: Matrix Chain Multiplication, pg. 373.
The objective is to parenthesize the matrix chain product A1.A2.A3.....An such that the number of scalar multiplications is minimized.
For A_i . A_(i+1) ..... A_k . A_(k+1) ..... A_j,
matrix A_i has dimension p_(i-1) x p_i.
The author comes up with the recursion
m[i,j] = 0                                                  if i = j
m[i,j] = min { m[i,k] + m[k+1,j] + p_(i-1) * p_k * p_j }    over k from i to (j-1), if i < j
(m[i,j] is the minimum number of scalar multiplications required for the product A_i ..... A_j)
So far I understand, but then he says the time complexity is O(n^3).
When I look at the pseudo-code there are 3 for loops, so that's correct, but I don't understand it intuitively from the recursion.
Can anyone please help?
The final solution is to calculate m[1,N]. But all m[i,j] values need to be calculated before m[1,N] can be, and there are O(N^2) such entries.
From the recursion formula you can see that each m[i,j] calculation needs O(N) work (one candidate per split point k).
So the complete solution is O(N^3).
There are O(n^2) unique sub-problems in any given MCM problem, and for every such sub-problem there are O(n) possible splits.
So it is O(n^3).
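Here is a minimal Python sketch of the tabulated recursion (p is the dimension array in the CLRS convention, so A_i is p[i-1] x p[i]); it makes all three nested loops, and hence the O(n^3) bound, explicit:

    def matrix_chain_order(p):
        # O(n^2) table entries m[i][j]; each one is a minimum over the
        # O(n) possible split points k -- three nested loops, O(n^3) total.
        n = len(p) - 1                          # number of matrices
        m = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):          # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                              for k in range(i, j))
        return m[1][n]

    print(matrix_chain_order([10, 20, 30, 40, 30]))  # 30000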
