I am confused about calculating the time complexity of one problem; please help me with it.
Problem statement:-
A 2-D matrix is given; you are at the bottom-left block and you have to go to the top-right block. One constraint is given: from every point you can move only one step, either upwards or to the right.
How many such ways are there? Prove it mathematically.
Is the time complexity in polynomial or exponential form?
My effort :-
If the matrix is of N*N size, then you have to move exactly 2N steps, out of which N steps are R and the remaining N steps are U.
So, if we simplify this, it's a permutation-and-combination problem: a string is given that contains only the two letters R and U; in how many ways can you arrange it?
Answer is
C(2N, N) * C(N, N) * 2
Question
Is my above logic correct? If not, please correct me.
Is the above formula polynomial or exponential?
Your idea is correct. However, the answer is a bit inaccurate.
The string only has the letters R and U, but its length is 2N-2.
The problem is to arrange 2N-2 objects such that N-1 are of one type and the other N-1 are of the other type.
The number of possibilities = (2N-2)! / ( (N-1)! * (N-1)! )
If you consider the product of two numbers to be O(k), then calculating the above has a time complexity of O(N*k).
To get an idea of the order of multiplication for various algorithms, you can visit http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
EDIT:
This number appears among the binomial coefficients in the expansion of (1+1)^(2N-2) = 2^(2N-2).
Hence we can safely say that "counting up to this number" is bounded by an exponential rate of growth, not a factorial one.
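For concreteness, here is a minimal sketch (mine, not part of the answer) that evaluates the corrected count C(2N-2, N-1) with Python's math.comb; the printed values illustrate that the count grows roughly like 4^N, i.e. exponentially in N.

    from math import comb

    def lattice_paths(n):
        """Paths from bottom-left to top-right of an n*n grid using only R/U moves."""
        return comb(2 * n - 2, n - 1)    # (2n-2)! / ((n-1)! * (n-1)!)

    for n in (2, 3, 4, 10, 20):
        print(n, lattice_paths(n))       # 2, 6, 20, 48620, 17672631900, ... exponential growth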
Related
Say I need to calculate the time complexity of a function
1⁶ + 2⁶ + 3⁶ + ... + n⁶. I am pretty sure this would be O(n⁷), but I only figure that because I know that Σi from i=0 to n is in O(n²). I cannot find a simple closed-form formula for the summation of iᵏ. Can anyone provide more detail on how to actually calculate the time complexity?
Thanks!
An easy proof that it's Θ(n⁷) is to observe that:
1⁶+2⁶+3⁶+...+n⁶ <= n⁶+n⁶+...+n⁶ = n⁷
(replacing all numbers with n makes the sum larger).
and
1⁶+2⁶+3⁶+...+n⁶ >= (n/2+1)⁶+...+n⁶ >= (n/2)⁶+(n/2)⁶+...+(n/2)⁶ = n⁷/2⁷
(in the first step, we discard the terms less or equal than n/2, and in the second step we replace all numbers with n/2. Both steps reduce the sum). (Note: I've assumed n is even, but you can extend to odd n with a bit of minor fiddling around).
Thus 1⁶+2⁶+3⁶+...+n⁶ is bounded above and below by a constant factor of n⁷ and so by definition is Θ(n⁷).
As David Eisenstat suggests in the comments, another proof is to consider the (continuous) graphs y=x⁶ and y=(x+1)⁶ from 0 to n. The areas under these curves bound the sum below and above, respectively, and are readily calculated via integrals: the first is n⁷/7 and the second is ((n+1)⁷-1)/7. This shows that the sum is n⁷/7 + o(n⁷).
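As a quick sanity check (my own sketch, not part of the answer), the following compares the sum against the two bounds from the first proof and against n⁷/7, for even n:

    def sum_sixth_powers(n):
        return sum(i ** 6 for i in range(1, n + 1))

    for n in (10, 100, 1000):                     # even n, matching the assumption above
        s = sum_sixth_powers(n)
        print(n, n ** 7 / 2 ** 7 <= s <= n ** 7,  # the lower and upper bounds hold
              round(s / n ** 7, 4))               # ratio tends to 1/7 ≈ 0.1429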
I feel stupid for asking this question, but...
For the "closest pair of points" problem (see this if unfamiliar with it), why is the worst-case running time of the brute-force algorithm O(n^2)?
If, say, n = 4, then there would only be 12 possible pairs of points to compare in the search space, if we also consider comparing two points from either direction. If we don't compare two points twice, then it's only 6.
O(n^2) doesn't add up to me.
The actual number of comparisons is:
n(n-1)/2, or (n² - n)/2.
But, in big-O notation, you are only concerned about the dominant term. At very large values of n, the n term becomes less important, as does the 1/2 coefficient on the n² term. So, we just say it's O(n²).
Big-O notation isn't meant to give you the exact formula for the time taken or number of steps. It only gives you the order of the complexity/time so you can get a sense of how it scales for large inputs.
Applying brute force, we are forced to check all possible pairs. Assuming N points, for each point there are N-1 other points to which we need to calculate the distance. So the total number of distances calculated = N points * (N-1) other points. But in the process we double-counted distances: the distance between A and B is the same whether it is calculated from A to B or from B to A. Hence N*(N-1)/2, and hence O(N^2).
In big-O notation, you can factor out multiplied constants, so:
O(k*(n^2)) = O(n^2)
The idea is that the constant (1/2 in the OP example, since distance comparison is reflective) doesn't really tell us anything new about the complexity. It still gets bigger with the square of the input.
In the brute-force version of the algorithm you compare all possible pairs of points. For each of the n points you have (n - 1) other points to compare against, and if we count every pair once we end up with (n * (n - 1)) / 2 comparisons. The worst-case complexity of O(n^2) means that the number of operations is bounded by k * n^2 for some constant k. Big-O notation can't tell you the exact number of operations, only a function to which the running time is proportional as the size of the data (n) increases.
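To make the counting concrete, here is a minimal brute-force sketch (my own illustration, not taken from the answers above); it performs exactly n(n-1)/2 distance computations, which is where the O(n²) comes from.

    from itertools import combinations
    from math import dist, inf

    def closest_pair_brute_force(points):
        """Return (distance, pair) for the closest pair of 2-D points."""
        best, best_pair = inf, None
        for p, q in combinations(points, 2):   # all n*(n-1)/2 unordered pairs
            d = dist(p, q)
            if d < best:
                best, best_pair = d, (p, q)
        return best, best_pair

    print(closest_pair_brute_force([(0, 0), (3, 4), (1, 1), (7, 2)]))
    # (1.4142135623730951, ((0, 0), (1, 1)))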
Is there a decision problem with a time complexity of Θ(n²)?
In other words, I'm looking for a decision problem for which the best known solution has been proven to have a lower bound of N².
I thought about searching for the biggest number in a matrix, but the problem is that the matrix is an input of size O(n²), so the solution is linear in the input size.
It doesn't need to be a known problem; a hypothetical one would suffice as well.
Does a close pair exist?
In any "difficult" metric space, given n points, does a pair exist in distance less than r, where r is an input parameter?
Intuitive proof:
Given that r is an input parameter, you have to examine every point.
For each point, you have to compute the distance to every other point; that's Θ(n).
For n points, that gives n*Θ(n) = Θ(n²).
Time complexity: Θ(n²)
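A hedged sketch of the decision version described above (the function name and the Euclidean metric are my choices for illustration): the naive check inspects every pair, i.e. Θ(n²) distance computations in the worst case.

    from math import dist

    def close_pair_exists(points, r):
        """Decide whether any two of the given points lie at distance less than r."""
        n = len(points)
        for i in range(n):
            for j in range(i + 1, n):        # all n*(n-1)/2 pairs in the worst case
                if dist(points[i], points[j]) < r:
                    return True              # early exit on a witness pair
        return False

    pts = [(0, 0), (5, 5), (9, 1)]
    print(close_pair_exists(pts, 2.0))   # False
    print(close_pair_exists(pts, 8.0))   # True: (0,0) and (5,5) are ~7.07 apart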
In the closest-pair algorithm, it is said that presorting the points according to their x and y coordinates can help decrease the time complexity from O(n log²n) to O(n log n), but how can that happen? I think the presort itself also requires O(n log n) time rather than O(n), so the recurrence is still T(n) = 2T(n/2) + O(n log n).
Can anyone show in detail how the presort leads to an O(n) step? Or do I have some misunderstanding about it?
Not sure what you're referring to as "presort", but the algorithm is O(n log(n)), according to these steps:
First, sort according to the x coordinate.
Recursively divide into two similarly sized sets, separated by a value xm.
a. solve for each of the subsets to the left and to the right of xm.
b. for each of the points on the left, find the closest points in a bounded rectangle containing points to the right (see details in link above); same for the points on the right.
c. return the minimum of the smallest distances found in b.
Step 1 is O(n log(n)). Step 2 is given by T(n) = 2 T(n / 2) + Θ(n), which solves to Θ(n log(n)).
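Regarding the asker's actual confusion (why the combine step is Θ(n) and not another O(n log n) sort): one standard way to get there, which the steps above don't spell out, is to sort by x once up front and have each recursive call return its points already sorted by y, merging the two halves merge-sort style in Θ(n). A compressed sketch of that idea (mine; helper names and details are my own assumptions, not the answer's):

    from math import dist, inf

    def closest_pair(points):
        pts = sorted(points)                        # presort by x once: O(n log n)

        def solve(p):                               # p is sorted by x
            n = len(p)
            if n <= 3:                              # base case: brute force tiny sets
                best = min((dist(a, b) for i, a in enumerate(p) for b in p[i + 1:]),
                           default=inf)
                return sorted(p, key=lambda q: q[1]), best
            mid = n // 2
            xm = p[mid][0]                          # dividing x value
            left_y, dl = solve(p[:mid])
            right_y, dr = solve(p[mid:])
            d = min(dl, dr)
            merged, i, j = [], 0, 0                 # merge the y-sorted halves in Θ(n)
            while i < len(left_y) and j < len(right_y):
                if left_y[i][1] <= right_y[j][1]:
                    merged.append(left_y[i]); i += 1
                else:
                    merged.append(right_y[j]); j += 1
            merged += left_y[i:] + right_y[j:]
            strip = [q for q in merged if abs(q[0] - xm) < d]
            for i, a in enumerate(strip):           # each point checks O(1) successors
                for b in strip[i + 1:i + 8]:
                    d = min(d, dist(a, b))
            return merged, d

        return solve(pts)[1]

    print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 2), (2, 2)]))  # ~1.4142 from (1,1)-(2,2)

With the merge replacing a per-level sort, the recurrence is T(n) = 2T(n/2) + Θ(n), i.e. Θ(n log n) on top of the one-time presort.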
Looking for some help with an upcoming exam, this is a question from the review. Seeing if someone could restate a) so I might be able to better understand what it is asking.
So it wants me to, instead of using extra multiplications, obtain some of the terms in the answer (PQ) by subtracting and adding already-multiplied terms, much as Strassen does in his algorithm to compute the product of 2x2 matrices in 7 multiplications instead of 8.
a) Suppose P(x) and Q(x) are two polynomials of (even) size n.
Let P1(x) and P2(x) denote the polynomials of size n/2 determined by the first n/2 and last n/2 coefficients of P(x). Similarly define Q1(x) and Q2(x),
i.e., P = P1 + x^(n/2) P2 and Q = Q1 + x^(n/2) Q2.
Show how the product PQ can be computed using only 3 distinct multiplications of polynomials of size n/2.
b) Briefly explain how the result in a) can be used to design a divide-and-conquer algorithm for multiplying two polynomials of size n (explain what the recursive calls are and what the bootstrap condition is).
c) Analyze the worst-case complexity of the algorithm you have given in part b). In particular, derive a recurrence formula for W(n) and solve it. As usual, to simplify the math, you may assume that n is a power of 2.
Here is a link I found which does polynomial multiplication.
http://algorithm.cs.nthu.edu.tw/~course/Extra_Info/Divide%20and%20Conquer_supplement.pdf
Notice here that if we do polynomial multiplication the way we learned in high school, it takes Ω(n^2) time. The question wants you to see that there is a more efficient algorithm, obtained by first splitting each polynomial into two pieces. The linked lecture gives a pretty detailed explanation of how to do this.
In particular, look at page 12 of the link. It shows explicitly how a four-multiplication process can be reduced to three multiplications when multiplying polynomials.
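For part a), the standard three-multiplication identity (stated here from general knowledge of Karatsuba-style multiplication, not copied from the linked notes) is: with A = P1*Q1, B = P2*Q2 and C = (P1+P2)*(Q1+Q2), the middle term P1*Q2 + P2*Q1 equals C - A - B, so PQ = A + x^(n/2)*(C - A - B) + x^n * B, using only 3 distinct multiplications of size n/2. A hedged Python sketch (coefficient lists, lowest degree first; helper names are mine):

    def poly_add(u, v):
        """Coefficient-wise sum of two polynomials (lists, lowest degree first)."""
        return [a + b for a, b in zip(u + [0] * (len(v) - len(u)),
                                      v + [0] * (len(u) - len(v)))]

    def poly_sub(u, v):
        return poly_add(u, [-x for x in v])

    def poly_mul(p, q):
        """Divide-and-conquer product using 3 half-size multiplications per level."""
        n = max(len(p), len(q))
        if n <= 1:                                  # bootstrap condition
            return [p[0] * q[0]] if p and q else []
        m = n // 2
        p = p + [0] * (n - len(p))
        q = q + [0] * (n - len(q))
        p1, p2 = p[:m], p[m:]                       # P = P1 + x^m * P2
        q1, q2 = q[:m], q[m:]                       # Q = Q1 + x^m * Q2
        a = poly_mul(p1, q1)                        # multiplication 1: P1*Q1
        b = poly_mul(p2, q2)                        # multiplication 2: P2*Q2
        c = poly_mul(poly_add(p1, p2), poly_add(q1, q2))   # multiplication 3
        middle = poly_sub(poly_sub(c, a), b)        # = P1*Q2 + P2*Q1, no extra multiply
        out = [0] * (2 * n - 1)
        for i, v in enumerate(a):
            out[i] += v
        for i, v in enumerate(middle):
            out[i + m] += v
        for i, v in enumerate(b):
            out[i + 2 * m] += v
        return out

    print(poly_mul([1, 2], [3, 4]))        # (1 + 2x)(3 + 4x) = [3, 10, 8]
    print(poly_mul([1, 1, 1], [1, 1, 1]))  # (1 + x + x^2)^2 = [1, 2, 3, 2, 1]

For part c), this gives the recurrence W(n) = 3W(n/2) + Θ(n), which by the master theorem solves to Θ(n^(log2 3)) ≈ Θ(n^1.585).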